Selected Worldwide Disease Occurrence: Foot-and-Mouth Disease. According to the Department of Agriculture, a 2001 outbreak of foot-and-mouth disease in the United Kingdom resulted in the slaughter and disposal of millions of animals and economic losses conservatively estimated at $14.7 billion. Foot-and-mouth disease is a highly contagious viral disease of cloven-hoofed animals such as cattle, swine, and sheep, and does not have human health implications.

Homeland security presidential directives (HSPD) have called for HHS, USDA, DHS, and other federal agencies to take action to strengthen biosurveillance, including food and agriculture disease surveillance. For example, HSPD-9: Defense of United States Agriculture and Food, issued in January 2004, directed HHS and USDA, among others, to develop robust, comprehensive, and fully coordinated biosurveillance and monitoring systems for animals, plants, wildlife, food, human health, and water. Further, DHS was to lead, integrate, and coordinate implementation efforts among federal departments and agencies to protect critical infrastructure, including agriculture. HSPD-10: Biodefense for the 21st Century, issued in April 2004, established the four pillars of biodefense: (1) threat awareness, (2) prevention and protection, (3) surveillance and detection, and (4) response and recovery. Pursuant to these presidential directives, as well as federal laws, many federal departments and agencies pursue missions and manage programs that contribute to a national biosurveillance capability. Table 1 describes selected federal departments and agencies with surveillance-related responsibilities.

Selected Worldwide Disease Occurrence: Salmonella, United States, 2008. In 2008, a salmonella outbreak occurred in 43 states and the District of Columbia, with 1,500 persons reportedly ill with the outbreak strain. The initial investigations identified tomatoes as the likely source.
As the outbreak continued, additional investigations showed much of the outbreak was due to jalapeno and Serrano peppers grown and packed in Mexico and distributed in the United States. According to the Department of Agriculture’s Rural Cooperative, the tomato industry sustained an estimated loss of $100 million or more.

In May 2009, the staff serving the Homeland Security Council and the National Security Council were merged as the National Security Council Staff, but both councils continue to exist by statute. The Homeland Security Council was maintained as the principal venue for interagency deliberations on issues that affect the security of the homeland, such as biosurveillance.

We have previously reported that in an era of rapid transit and global trade, the public health and agricultural industries, as well as natural ecosystems including native plants and wildlife, face increased threats of naturally occurring outbreaks of infectious disease and accidental exposure to biological threats. Some diseases, such as some strains of influenza, are known as zoonotic diseases and can be transferred between animals and humans. Influenza pandemics occur when a new influenza virus emerges and spreads around the world, and most people do not have immunity. Although human influenza pandemics have been rare in the United States, they have had devastating effects. For example, as we reported in 2011 and 2013, HHS estimated that the 2009 H1N1 pandemic in the U.S. led to as many as 403,000 hospitalizations and 18,300 deaths from April 2009 to April 2010, and HHS had over $6 billion available for influenza pandemic activities from a 2009 supplemental appropriation.

Selected Worldwide Disease Occurrence: Anthrax. In 2001, anthrax was intentionally spread through the postal system by sending letters with powder containing anthrax to the U.S. Capitol. Of the 22 infected persons, 5 died.
The Environmental Protection Agency spent $27 million for cleanup of Capitol Hill, and the U.S. Postal Service was appropriated hundreds of millions of dollars to clean up affected facilities. Although the White House developed the National Strategy for Biosurveillance in July 2012, this strategy does not include information that identifies resource and investment needs, as we previously recommended. In June 2010, we found that there was no integrated approach to help ensure an effective national biosurveillance capability and to provide a framework to help identify and prioritize investments (GAO-10-645). Without a unifying framework; structure; and an entity with the authority, resources, time, and responsibility for guiding its implementation, we concluded that it would be very difficult to create an integrated approach to building and sustaining a national biosurveillance capability. National and agency strategies note that coordination is important because a national biosurveillance capability relies on the ability of a complex interagency and intergovernmental network to work together and meet an ever-evolving threat. Specifically, we found there was neither a comprehensive national strategy nor a designated focal point with the authority and resources to guide the effort to develop a national biosurveillance capability. We have previously found that developing effective national strategies and establishing a focal point with sufficient responsibility, authority, and resources can help ensure successful implementation of complex interagency and intergovernmental undertakings, such as providing a national biosurveillance capability. We made two recommendations to the White House’s Homeland Security Council, which has taken some actions to address them, as shown in table 2. The National Strategy for Biosurveillance also does not address issues we raised, and recommendations we previously made, related to state and local biosurveillance efforts.
In October 2011, we reported that nonfederal capabilities should also be considered in creating a national biosurveillance strategy. Because the resources that constitute a national biosurveillance capability are largely owned by nonfederal entities, a national strategy that considers how to strengthen and leverage nonfederal partners could improve efforts to build and maintain a national biosurveillance capability. Moreover, efforts to build the capability would benefit from a framework that facilitates assessment of nonfederal jurisdictions’ baseline capabilities and critical gaps across the entire biosurveillance enterprise. In 2011, we found that although the federal government did provide some resources to help control disease in humans and animals in tribal and insular areas, there were no specific efforts to ensure that state and local agencies can contribute to the national biosurveillance capability. In addition, we noted that the federal government had not conducted a comprehensive assessment of state and local jurisdictions’ ability to contribute to a national biosurveillance capability. While the size, variability, and complexity of the biosurveillance enterprise make an assessment difficult, we concluded that the federal government would lack key information about the baseline status, strengths, weaknesses, and gaps across the biosurveillance enterprise until it conducts an assessment of nonfederal biosurveillance capabilities.
We further reported in October 2011 that state and local officials identified common challenges to developing and maintaining their biosurveillance capabilities such as (1) state policies in response to state budget constraints that restricted hiring, travel, and training; (2) obtaining and maintaining resources, such as adequate workforce, equipment, and systems; and (3) the lack of strategic planning and leadership to support long-term investment in crosscutting core capabilities, integrated biosurveillance, and effective partnerships. For example, state and local officials we surveyed had reported facing workforce shortages among skilled professionals—epidemiologists, informaticians, statisticians, laboratory staff, animal-health staff, or animal-disease specialists. Many of the challenges that state and local officials identified were similar to issues we reported regarding biosurveillance at the federal level. We noted that many of the challenges facing the biosurveillance enterprise were complex, inherent to building capabilities that cross traditional boundaries, and not easily resolved. To address these issues, and building on our June 2010 recommendation to develop a national biosurveillance strategy, we called for such a strategy to also address the key challenges we identified in nonfederal biosurveillance, as shown in table 3. As part of the national biosurveillance capability, the maintenance of effective animal and plant surveillance systems is critical to detecting and enhancing the situational awareness of biological events that might disrupt agriculture and food production systems, such as highly pathogenic avian influenza. Although DHS, the White House’s Homeland Security Council, and USDA have made efforts to improve the coordination and implementation of federal food and agriculture defense policy, additional actions are needed. 
In August 2011, we found that there was no centralized coordination to oversee the federal government’s overall progress implementing HSPD-9 on the nation’s food and agriculture defense policy, responsibilities for which are distributed across several agencies. As we reported in 2011, these federal responsibilities include the development of surveillance and monitoring systems for animal, plant, and wildlife disease, food, public health, and water quality, as well as other responsibilities related to awareness and warning, vulnerability assessment, mitigation strategies, response and recovery, and research and development. Prior to 2011, the White House’s Homeland Security Council had conducted some coordinated activities to oversee federal agencies’ HSPD-9 implementation by gathering information from agencies about their progress. DHS supported these activities by coordinating agencies’ responses to the White House on their progress. However, at the time of our 2011 review, the White House and DHS had discontinued their efforts. Per HSPD-9, DHS is responsible for coordinating agencies’ overall HSPD-9 implementation efforts. In addition, the White House’s Homeland Security Council was established by executive order in 2001 to ensure the effective development and implementation of homeland security policies, including HSPD-9. Because there was no centralized coordination to oversee agencies’ HSPD-9 implementation progress at the time of our 2011 review, it was unclear how effectively or efficiently agencies were using resources in implementing the nation’s food and agriculture defense policy, including surveillance efforts. 
We concluded that without coordinated activities to oversee agencies’ implementation efforts, the nation may not be assured that crosscutting agency efforts to protect agriculture and the food supply are well designed and effectively implemented in order to reduce vulnerability to, and the impact of, terrorist attacks, major disasters, and other emergencies. We also reported in August 2011 that USDA’s component agencies had taken steps to implement the department’s HSPD-9 responsibilities, but USDA did not have a department-wide strategy for implementing its numerous HSPD-9 responsibilities. For example, component agencies had taken steps to implement the four HSPD-9 response and recovery efforts for which USDA has lead responsibility, such as APHIS’s development of the National Veterinary Stockpile. However, according to USDA officials, the department assigned HSPD-9 responsibilities to its component agencies based on their statutory authority and expertise and allowed individual agencies to determine their implementation and budget priorities. To address these issues, we made four recommendations to DHS, the White House’s Homeland Security Council, and USDA, and each agency generally concurred with its respective recommendations. Since we made these recommendations in August 2011, these entities have taken some actions to address them, as shown in table 4. We reported in May 2013 that APHIS had developed a new approach for its livestock and poultry surveillance activities, but had not yet integrated these efforts into an overall strategy with goals and performance measures aligned with the nation’s larger biosurveillance policy. Under its prior approach, APHIS focused its disease surveillance programs on preventing the introduction of certain foreign animal diseases and monitoring, detecting, and eradicating other reportable diseases already present in domestic herds.
Under this previous approach, information about nonreportable diseases, including those that are new or reemerging, was not always captured by the agency’s disease surveillance efforts. We reported in 2013 that APHIS had begun to broaden its approach by monitoring the overall health of livestock and poultry and using additional sources and types of data to better detect and control new or reemerging diseases. For example, APHIS has been monitoring for the presence of pseudorabies—a viral swine disease that may cause respiratory illness and death—at slaughter facilities, but under the new approach, it has proposed monitoring these facilities for a range of other diseases as well. Although APHIS had a vision for its new approach, we found that it had not yet integrated that vision into an overall strategy with associated goals and performance measures aligned with the nation’s larger biosurveillance efforts. At the time of our 2013 review, APHIS had developed a number of planning documents related to the agency’s capabilities for disease surveillance in livestock and poultry, but these documents did not specifically address outcomes the agency seeks to accomplish or have associated performance measures. Moreover, none of APHIS’s surveillance plans indicated how they individually or collectively supported national homeland security efforts called for in HSPD-9 or other national policies to defend the nation’s food and agricultural systems against terrorist attacks, major disasters, and other emergencies. We concluded that without integrating its new approach to livestock and poultry surveillance activities into an overall strategy with goals and measures aligned with broader national homeland security efforts to detect biological threats, APHIS may not be ideally positioned to support national efforts to address the next threat to animal and human health.
To address this issue, we made a recommendation to APHIS, with which APHIS concurred and is taking action to address, as described in table 5. Chairman Johnson, Ranking Member Carper, and members of the committee, this completes our prepared statement. We would be pleased to respond to any questions that you may have at this time. For questions about this statement, please contact Chris Currie at (404) 679-1875 or curriec@gao.gov and Steve D. Morris at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Kathryn Godfrey and Mary Denigan-Macauley (Assistant Directors), Lorraine Ettaro, Elias Harpst, Tracey King, Amanda Kolling, Jan Montgomery, Erin O’Brien, Virginia Vanderlinde, John Vocino, and Brian Wanlass. Key contributors for the previous work that this testimony is based on are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Naturally occurring infectious disease or the intentional use of a biological agent to inflict harm could have catastrophic consequences. For example, the recent outbreak of naturally occurring highly pathogenic avian influenza affecting wild birds and poultry in the Midwest and on the Pacific coast presents a serious threat to the economy and trade, and underscores the importance of maintaining effective food and agriculture disease surveillance systems. Biosurveillance aims to detect such events as early as possible and to enhance situational awareness related to human, animal, and plant health. Since 2010, GAO has issued a number of reports that discuss the importance of effectively conducting biosurveillance across the human, animal, and plant domains. This statement discusses prior GAO reports and the status of recommendations related to (1) federal, state, and local biosurveillance efforts, and (2) efforts related to food and agriculture disease surveillance. This testimony is based on previous GAO products issued from 2010 through 2013 related to biosurveillance, along with selected updates conducted from November 2014 through June 2015. For these updates, GAO reviewed agency responses and documents provided in response to its recommendation follow-up efforts, such as the July 2012 National Strategy for Biosurveillance. In June 2010, GAO reported that there was neither a comprehensive national strategy nor a designated focal point with the authority and resources to guide development of a national biosurveillance capability. Further, in October 2011, GAO reported that state and local agencies faced challenges in developing and maintaining their biosurveillance capabilities, such as obtaining resources for an adequate workforce, and that the federal government had not conducted an assessment of state and local jurisdictions' ability to contribute to a national biosurveillance capability.
To help ensure the successful implementation of a complex, intergovernmental undertaking, GAO recommended in 2010 that the White House's Homeland Security Council direct the National Security Council Staff to develop a national biosurveillance strategy, and further recommended in 2011 that the strategy consider nonfederal capabilities. The White House issued the National Strategy for Biosurveillance in July 2012, which describes the U.S. government's approach to strengthening biosurveillance. However, the strategy did not fully respond to the challenges GAO identified. For example, it did not establish a framework to prioritize resource investments or address the need to leverage nonfederal resources. The White House was to issue an implementation plan within 120 days of publishing the strategy. GAO has reported that it is possible that the implementation plan could address issues previously identified, such as resource investment prioritization; however, the plan has not been released as of June 2015. In August 2011, GAO reported that there was no centralized coordination to oversee federal agencies' efforts to implement Homeland Security Presidential Directive 9 (HSPD-9) on the nation's food and agriculture defense policy, which includes food and agriculture disease surveillance. GAO also found that the Department of Agriculture (USDA) had no department-wide strategy for implementing its HSPD-9 responsibilities. Therefore, GAO recommended that the National Security Council Staff and the Department of Homeland Security resume their efforts to coordinate and oversee implementation, and that USDA develop a department-wide strategy. In response, the National Security Council Staff began hosting interagency working group meetings, and DHS has worked to develop a report on agencies' HSPD-9 implementation efforts, which officials stated will be finalized by late summer 2015. 
As of February 2015, USDA had conducted a gap analysis of its HSPD-9 implementation efforts but had not yet developed a department-wide strategy. Further, GAO reported in May 2013 that USDA's Animal and Plant Health Inspection Service (APHIS) had broadened its previous disease-by-disease surveillance approach to an approach in which the agency monitors the overall health of livestock and poultry, but had not yet integrated this approach into an overall strategy aligned with the nation's larger biosurveillance efforts, such as efforts called for in HSPD-9. GAO recommended that APHIS integrate its new approach into an overall strategy aligned with national homeland security efforts, and develop goals and measures for the new approach. In June 2015, officials stated that APHIS has begun to develop some measures, but noted that resource constraints limit their ability to assess their new approach to disease surveillance. Fully integrating its new approach into an overall strategy aligned with broader homeland security efforts, as GAO recommended, will better position APHIS to support national efforts to address threats to animal and human health.
Concurrency is broadly defined as the overlap between development and production of a system. The stated rationale for concurrency is to introduce systems in a more timely manner to fulfill an urgent need, to avoid technology obsolescence, or to maintain an efficient industrial development and production workforce. To measure the degree of concurrency in this report, we used a statutorily required guide issued by the Department of Defense (DOD) in April 1990 for assessing concurrency and associated risk in major acquisition programs. Its measure of concurrency is the amount of initial operational testing and evaluation (IOT&E) completed before entering production of a system. Initial operational tests are field tests intended to demonstrate a system’s effectiveness and suitability for military use. IOT&E is a key internal control to ensure that decisionmakers have objective information available on a weapon system’s performance and to minimize risks of procuring costly and ineffective systems. In the late 1980s, the Congress found that DOD was acquiring a large portion of total program quantities, using the low-rate initial production (LRIP) concept, without successfully completing IOT&E. As a result, legislation was enacted in 1989 to limit LRIP quantities for major systems. The law, 10 U.S.C. 2400, defined LRIP as the minimum production quantity needed to provide production representative articles for IOT&E, establish an initial production base, and permit an orderly increase in the production rate sufficient to lead to full-rate production after completion of IOT&E. In the conference report supporting the National Defense Authorization Act for Fiscal Years 1990 and 1991 (P.L. 101-189), the conferees indicated that LRIP quantities should not total a significant percentage of a total planned procurement. Later, the Federal Acquisition Streamlining Act of 1994 prescribed new controls for LRIP.
The act states that the Secretary of Defense must specifically explain to the Congress why any planned LRIP quantities exceed 10 percent of a planned production quantity of a system, as defined at the milestone II development decision. This provision, however, was not in effect when the F-22 program reached milestone II. The F-22 passed milestone II in 1991. At that time, the Air Force planned to acquire 648 F-22 operational aircraft at a cost of $86.6 billion. After the Bottom Up Review, completed by DOD in September 1993, the planned quantity of F-22s was reduced to 442 at an estimated cost of $71.6 billion. We recently reported that aircraft systems, including the T-45 trainer aircraft, B-1B bomber, and the C-17 cargo aircraft, as well as many other smaller systems, entered LRIP before successfully completing any IOT&E. This resulted in the purchase of systems requiring significant and sometimes costly modifications to achieve satisfactory performance, acceptance of less capable systems than planned, and in some cases deployment of substandard systems to combat forces. The LRIP contract award is scheduled for September 1997. LRIP aircraft are those to be procured during the period of concurrency. In 1990, DOD performed a statutorily required analysis of the concurrency in acquisition programs partly to define the appropriate measures for evaluating the degree of concurrency and associated risk in programs. The Office of the Secretary of Defense defined a highly concurrent program as one that proceeds into LRIP before significant IOT&E is complete. Using DOD guidelines, concurrency in the F-22 program is high because the F-22 program is scheduled to proceed into LRIP well before any IOT&E is started. Further, considering the new technology advancements being developed for use in the aircraft, the level of concurrency increases the cost, schedule, and technical risks of the program.
We found that development flight tests of critical F-22 technology advances are not scheduled to begin until about 1 year after LRIP is scheduled to start and over $2 billion will have been committed to procure F-22 aircraft. According to the F-22 acquisition plan, the Air Force will commit to LRIP quantities that increase from 4 aircraft a year to 36 a year (an 800-percent increase), totaling 80 aircraft, before completion of IOT&E. Production of 36 aircraft a year under LRIP represents 75 percent of the planned full-production rate. The estimated cost of those 80 aircraft is $12.4 billion. Figure 1 shows the planned schedule of commitments to procurement of F-22 aircraft and the estimated cumulative costs prior to completion of IOT&E. A first set of hardened tooling is required initially to produce the developmental aircraft for testing. Program office officials told us that the maximum quantity of F-22s that can be produced with the first set of tooling is about 6 to 8 aircraft a year. The concurrency of development, testing and production in the F-22 program is shown in figure 2, which shows concurrent development and production from September 1997 through February 2002. Low-rate production of the F-22 is scheduled to begin in September 1997. However, IOT&E is not scheduled to take place until December 1999 through February 2002. Thus, the testing is not scheduled to be complete until over 4 years after the start of production and the commitment at an estimated cost of $12.4 billion to procure 80 aircraft (4 preproduction aircraft and 76 production aircraft), or 18 percent of all 442 aircraft to be procured. Although laboratory tests are underway and simulations of the avionics are planned, the Air Force does not plan to flight test several of the critical F-22 technology advances on an F-22 until well after the start of production in September 1997. Flight tests of low observability are not scheduled to begin until September 1998. 
Although the highest risk element of the F-22 program was reported to be the integrated avionics, the first flight test of an F-22 equipped with a complete integrated avionics system is not scheduled to begin until September 1999, 2 years after the start of production. By the time that testing begins, the Air Force will have already made commitments to procure 20 aircraft and long lead materials for an additional 24. For programs entering the engineering and manufacturing development phase of the acquisition cycle, the Federal Acquisition Streamlining Act of 1994 requires the Secretary of Defense to explain to the Congress any plans to procure more than 10 percent of the total procurement quantity in the LRIP phase. This provision of the act is not retroactive to the F-22 program. In 1991, when milestone II was approved for the F-22 program, the total aircraft procurement quantity planned was 648; accordingly, 10 percent would have been 65 aircraft. Currently, 442 aircraft are to be procured, meaning 10 percent would be 44 aircraft. The number of F-22 LRIP aircraft currently planned is 80, which exceeds 10 percent in either case. The Air Force’s planned commitment to production of F-22s prior to completion of IOT&E, as a percentage of total production, exceeds the commitments made for recent fighter programs except the F-15, for which the percentage is about the same as for the F-22. Figure 3 compares the planned percentage of aircraft committed to production before completion of IOT&E for the F-22 with the percentages committed for other recent fighter programs. Although the actual number of F-22 aircraft to be acquired before completion of IOT&E is lower than in the F-14, F-15, F-16, and F/A-18 programs, the other fighters were acquired before the end of the Cold War, when a greater degree of urgency existed for procuring aircraft. The Air Force plans to use advances in technologies and innovations to provide high performance and increased reliability and maintainability for the F-22.
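The procurement arithmetic above can be verified directly. The sketch below is an illustrative check, not part of the original report; the 48-aircraft-per-year full-production rate is an assumption inferred from the statement that 36 aircraft a year represent 75 percent of that rate, and all other figures are quoted in the text.

```python
# Illustrative check of the F-22 LRIP arithmetic quoted in this report.
lrip_start, lrip_peak = 4, 36      # LRIP ramp: aircraft per year
full_rate = 48                     # assumed: implied by "36 ... represents 75 percent"
lrip_total = 80                    # aircraft to be procured before IOT&E completes
planned_total = 442                # total planned after the Bottom Up Review
original_total = 648               # total planned at milestone II in 1991

ramp_increase = (lrip_peak - lrip_start) / lrip_start * 100   # 800.0 percent
share_of_full_rate = lrip_peak / full_rate * 100              # 75.0 percent
lrip_share = lrip_total / planned_total * 100                 # about 18.1 percent

# 10-percent thresholds under the Federal Acquisition Streamlining Act test
threshold_1991 = round(original_total * 0.10)                 # 65 aircraft
threshold_current = round(planned_total * 0.10)               # 44 aircraft
exceeds_both = lrip_total > threshold_1991 and lrip_total > threshold_current

print(ramp_increase, share_of_full_rate, round(lrip_share, 1))  # 800.0 75.0 18.1
print(threshold_1991, threshold_current, exceeds_both)          # 65 44 True
```

Each computed value matches the corresponding figure in the text: the 800-percent ramp-up, the 75-percent share of the full rate, the 18-percent share of total procurement, and the 65- and 44-aircraft thresholds that the 80 planned LRIP aircraft exceed.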
The integrated avionics, engine, and stealth characteristics are the primary areas that increase the cost, schedule, and technical risk in the F-22 program. After reviewing the program, the DOD Defense Science Board (DSB) concluded that concurrency was acceptable and risks were readily controllable, but noted that the F-22 program is very ambitious technically. Descriptions of some of the problems that have occurred in the development program are included below. The purpose of these descriptions is to illustrate that there remain important cost, schedule, and technical risks in the F-22 program. The F-22 Program Office has taken a number of steps to reduce the technical risks of the program, including a 54-month demonstration/validation phase using an F-22 prototype, and a risk management program for engineering and manufacturing. Some deficiencies associated with the higher risk features of the F-22 have been experienced during ground tests, requiring expensive redesigns. The F-22’s integrated avionics are expected to provide unprecedented situational awareness to the pilot. The F-22 is the first aircraft to use integrated avionics, that is, critical systems such as the radar, the weapons management system, and electronic warfare sensors that work as one unit. The key to achieving the necessary performance is the successful development of highly advanced integrated computer processors, known as the common integrated processors, and large amounts of software. Avionics and software integration has been characterized by the DOD Defense Acquisition Board as one of the highest risks to the successful development of the F-22. The risk assessment was prepared for the DOD Defense Acquisition Board to evaluate the readiness of the F-22 to begin the engineering and manufacturing development phase of the acquisition cycle in 1991. 
This June 1991 risk assessment explained that the estimated 1.3 million source lines of software code needed for the F-22 represented the largest software task ever for an attack/fighter onboard software program. Further, the DSB in 1993 rated the integrated avionics as the highest technical risk in the F-22 program. Program managers for the F-22 agreed in October 1994 that avionics and software integration are the most risky tasks facing the contractors. In a separate report, we concluded in 1994 that although the Air Force’s planned strategy for the F-22 software was generally sound in concept, some significant features of the strategy were not being followed. For example, the independent verification and validation of software products—part of the quality assurance process—was less rigorous than planned. In addition, the technical risks being encountered with the system/software engineering environment and common integrated processor were not being formally reported to DOD management. Finally, we indicated that the Air Force had begun actions to respond to our concerns. DOD responded to that report in February 1995, indicating that the quality assurance program is now being complied with as planned. DOD also stated that common automated tools had matured and would support completion of the software development effort through the engineering and manufacturing development phase of the program. We have not verified the DOD response. The F-22’s engine has not been flight tested, but has experienced problems during ground tests. The F-22’s engine is expected to be the first to provide the ability to fly faster than the speed of sound for an extended period of time without the high fuel consumption characteristic of aircraft that use afterburners to achieve supersonic speeds. It is expected to provide high performance and high fuel efficiency at slower speeds as well.
Problems with performance of the F-22’s engine first surfaced after the initial engine ground tests began in December 1992. The contractor is conducting a series of interim tests, with a goal of having a complete engine with a redesigned turbine and other changes qualified for flight by December 1996 if tests now planned for 1995 are successful. If not, F-22 flight tests will be started with an engine that is not fully representative of the current approved configuration. An Executive Independent Review Team was formed to provide advice on engine development issues, including a turbine problem. The team stated that it did not consider the nature and number of engine problems to be excessive for a highly sophisticated engine at this stage of development. The team also stated that the proposed solutions can only be proven by exposing authentic hardware to the full range of realistic testing.

Through November 1994, the Air Force had identified engine problems that may cost as much as $479 million to remedy. The Air Force increased the target cost of the engine development contract by $218 million to design and test solutions to the engine problems. The incorporation of corrective modifications to future production engines is expected to increase production costs by $123 million. The Air Force believes its current program estimate can cover the $341 million increase ($218 million plus $123 million), but the Air Force has identified other potential design changes that may add $138 million to development and production costs. The other potential design changes are not currently part of the planned program.

The low observability or stealth characteristics of the F-22 are another risk area. The F-22 is to be the first supersonic, highly maneuverable fighter that uses low observable technologies to reduce radar, infrared, acoustic, and optical signatures of the aircraft, making it difficult for an adversary to detect.
An evaluation of the complete F-22 radar signature using computer models and a scale version of the aircraft concluded that the aircraft’s radar signature did not meet the Air Force’s operational requirement. Although DOD advised us that these problems were not considered major, design changes, such as reducing the number of aircraft maintenance access panels and fuel drain holes and reshaping the airframe, were evaluated through December 1994 to determine if they were successful in reducing the signature. DOD further stated that the contractual specifications are being revised. The estimated development cost to resolve these problems is about $20 million according to the F-22 program office. Additional production costs of about $110 million could also be required; however, program officials told us the total estimated cost ($71.6 billion) of the F-22 program should not be affected.

The DSB, in its review of the F-22 program’s concurrency and technical risks, identified a number of other concerns. Examples of concerns mentioned by the DSB include control of excess aircraft weight; use of new materials and fabrication processes; uncertain durability of composite materials in the F-22 application; probable inability of the engine to meet performance and durability goals; design of certain low observable features and the applicable manufacturing processes; very challenging development of the electronic warfare system; late scheduling of tests relative to increasing production to 12 aircraft a year; and the need for a long, evolutionary software development. Overall, the DSB characterized the F-22 program as very ambitious technically.
We recommend that the Secretary of Defense reduce the degree of concurrency in the program because independent testing of technology advances through initial operational test and evaluation (IOT&E) will not be completed before significant commitments are made to produce F-22s; the percentage of planned F-22s to be committed to production before completion of IOT&E is higher than in most recent fighter programs; and the need for the F-22 is not urgent.

To minimize commitments to production of F-22s until after successful completion of IOT&E, we recommend that the Secretary of Defense limit LRIP quantities to those that can be produced using the first set of hard tooling, about six to eight aircraft a year.

DOD partially agreed with the findings in this report, but disagreed with the recommendations. DOD indicated it believed that the F-22 program had an acceptable degree of concurrency based on the DSB’s evaluation that risks associated with premature entry into successively higher rates of production were readily controllable through insistence on meeting certain key events and test criteria already built into the F-22 plan. The record shows, however, that DOD has often been unwilling or unable to curtail production of other systems after it starts, despite discovery of significant problems in development or operational tests. We believe the degree of concurrency in the program should be addressed now, because (1) independent testing of important technology advances is not planned until after commitments are made to produce F-22s, (2) program concurrency is high according to DOD’s prescribed measure, and (3) the need for the F-22 is not urgent.

DOD disagreed with (1) our use of the completion of IOT&E as a measure of concurrency and risk in the program, (2) our positions on the level of risk in the F-22 program, and (3) the comparison of the F-22 to prior fighter programs using the percentage of planned aircraft procured during LRIP as a measurement.
We first applied DOD’s own guidance for measuring the degree of concurrency in the program, that is, the amount of IOT&E completed prior to entering production. We also used other metrics, such as the percentage of the total program committed to production before completion of IOT&E. Further, DOD’s comments appear to discount the risks in the program identified by the DSB. Our comparison of the F-22 to recent fighter programs, although not the same as comparisons made by the DSB, provides an important historical perspective, showing that DOD’s planned LRIP quantities meet or exceed those of the fighter programs undertaken during the Cold War.

In conducting our work, we obtained information and interviewed officials from the Office of the Secretary of Defense, DSB, and Air Force Headquarters, Washington, D.C.; F-22 System Program Office, Wright-Patterson Air Force Base, Ohio; the Air Force Air Combat Command, Langley Air Force Base, Virginia; and the Air Force Operational Test and Evaluation Center, Kirtland Air Force Base, New Mexico. We interviewed officials in charge of program management, the operation of tactical fighter aircraft, risk assessment, and the operational testing of Air Force weapon systems. We reviewed documents, including program office briefings, program schedules, test plans and reports, technology risk assessments, requirements documents, and cost reports. We used these interviews and documents to determine the program management philosophy, the amount of program concurrency, the planned flight testing of F-22 technologies, program technology requirements, and program risk assessments. We also reviewed DOD instructions, Air Force regulations, Office of the Secretary of Defense guidance, publications from the Defense Systems Management College, our prior reports, a report of another audit organization, congressional reports, an Institute for Defense Analyses report, a DSB report, a report prepared for the Defense Acquisition Board, executive summaries, and monthly program reports.
In addition, we interviewed officials from the Air Force’s Air Combat Command and examined the F-22 System Operational Requirements Document, Statements of Need, and the Mission Element Need Statement for new fighter aircraft. We performed our work from August 1994 through February 1995 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Secretaries of Defense and the Air Force and the Director of the Office of Management and Budget. Copies will also be made available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II.

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated February 24, 1995.

1. These comments are dealt with on pages 12 and 13 of the report and in our responses to DOD’s specific comments that follow.

2. For the most part, the risk/concurrency guidelines listed in the Office of the Secretary of Defense’s April 1990 guide are specific requirements that should be met before a program progresses. We are aware of many of those requirements that are incorporated in the F-22 program. However, the only assessment provided for in the guide for measuring the degree of concurrency is the amount of initial operational testing and evaluation (IOT&E) completed at the time low-rate initial production (LRIP) begins. By that measure, the F-22 program clearly has a high degree of concurrency. In our opinion, the ramp up of production from 4 a year to 36 a year under the LRIP phase, and initiation of long lead for 48 a year, essentially represents a plan to achieve a full-rate production schedule (now defined as 48 a year) before IOT&E is completed.

3. The F-22 program, as currently planned, schedules procurement of 80 LRIP aircraft at an estimated cost of $12.4 billion.
We believe that exceeds the minimum needed to successfully complete the LRIP phase of the program and that the production rates should be restricted during LRIP. Although many important F-22 development tests are scheduled prior to the acceleration of production rates, many other critical developmental tests and most IOT&E testing are not scheduled to be complete until after significant commitments are made to production.

4. We adjusted the report to reflect this information. However, it should be noted that the total number of each type of aircraft produced was much higher than planned for the F-22. This results in a higher degree of concurrency in the F-22 program when using the percentage of aircraft procured at completion of IOT&E as a measure of concurrency.

5. Our report does not, either explicitly or implicitly, suggest “total avoidance” of concurrency.

6. The DSB’s portrayal of the F-22 program as relatively conservative was based on the amount of development testing to be completed at early production decision points. However, using the measure called for by DOD’s own 1990 guidance—the amount of IOT&E completed at the time LRIP begins—shows that the F-22 program is far from conservative.

7. Production ramp up from 4 aircraft a year to 36 aircraft a year appears to provide a more rapid acceleration than we believe is necessary in the LRIP phase of the program. In our opinion, the ramp up of production from 4 a year to 36 a year under the LRIP phase, and initiation of long lead to support 48 a year, essentially represents a plan to achieve a full-rate production schedule before IOT&E is completed.

8. This material has been deleted from the report.

9. Additional information concerning this matter has been added to the body of the report.

10. DOD’s response to our prior report on embedded computers has been recognized in the body of this report.

11.
We did not attempt to quantify potential cost growth in the F-22 program that may result from a change in the program schedule. However, the thrust of the LRIP legislation is to authorize only minimum necessary quantities. DOD acquisition profiles created for other weapon programs have often proven to be optimistic and are rarely carried out as initially planned because of technical, financial, or test problems, and such problems are likely to occur in highly concurrent programs that involve substantial advances in technology. Because the baseline against which potential cost growth would be compared is itself optimistic, an estimate of cost growth would have limited meaning at this point.

Brenda Waterfield, Evaluator
Pursuant to a legislative requirement, GAO assessed the concurrency between the development and production phases of the Air Force's F-22 fighter program and the risks associated with that concurrency. GAO found that: (1) the F-22 program has a high degree of concurrency because it will enter production before initial operational testing and evaluation (IOT&E) is completed; (2) F-22 concurrency poses substantial production and operational risks because the aircraft may be procured before technological advances are flight-tested; (3) the Air Force plans to procure 80 F-22 aircraft at a cost of $12.4 billion before completing IOT&E; (4) the F-22 low-rate initial production (LRIP) quantities substantially exceed the 10-percent guideline included in federal acquisition streamlining requirements; (5) the percentage of F-22 committed to production before IOT&E is higher than most recent fighter programs; (6) the Air Force plans to accelerate F-22 production rates in the LRIP phase of the program so that 75 percent of the full production rate will be achieved; (7) the planned rate of acceleration appears to exceed the amount that is needed to complete the program's LRIP phase and represents a plan to commit to a full-rate production schedule before IOT&E is completed; (8) the Air Force should limit LRIP quantities each year given the program's high degree of concurrency; (9) technology advances and innovations are critical to F-22 operational success; (10) the need for F-22 aircraft is not urgent and its procurement could be deferred; and (11) existing operational and technological problems need to be addressed before significant commitments are made to F-22 production.
The Mineral Leasing Act of 1920 charges Interior with responsibility for oil and gas leasing on federal lands and on private lands where the federal government has retained mineral rights. Several other statutes and regulations also affect oil and gas leasing and development on federal lands. For instance, the protection of resources that may be affected by oil and gas activity is governed by resource-specific laws, such as the Clean Air Act, the Clean Water Act, and the Endangered Species Act. Under the National Environmental Policy Act (NEPA), federal agencies are to evaluate the likely environmental effects of proposed projects, including oil and gas lease sales, through an environmental assessment or, if projects are likely to significantly affect the environment, a more detailed environmental impact statement. In addition, under the Federal Land Policy and Management Act, BLM manages federal lands for multiple uses, including recreation; range; timber; minerals; watershed; wildlife and fish; and natural scenic, scientific, and historical values, as well as for the sustained yield of renewable resources. BLM manages oil and gas development on federal lands using a three-step process. First, BLM develops areawide land use plans, called resource management plans, specifying what areas will be open to oil and gas development and the conditions to be placed on such development. Second, BLM may issue leases for the development of specific sites within an area, subject to requirements in the plans. Finally, a lessee may file an application for a permit to drill, which requires BLM review and approval. BLM’s lease sale process includes several key steps: Nomination of lands for sale. Interested members of the public and industry can nominate lands for competitive lease by sending to a particular BLM state office letters expressing interest in specific tracts of land desired for lease. 
BLM itself may also identify parcels for potential lease, although the majority of parcels leased in recent years have been nominated by the oil and gas industry. Parcels nominated for lease can vary in size; in the contiguous 48 states, the maximum size of a parcel nominated for competitive lease is 2,560 acres. Review of parcels. Parcels nominated for lease are evaluated by BLM field staff to determine whether the proposed land is available to be leased and whether it conforms with BLM policies, regulations, and land use plans. If the parcel is determined to be available, the potential impacts of oil and gas leasing on the environment are then evaluated as required under NEPA. If required, leasing restrictions (called stipulations) are added to the proposed parcel to mitigate potential impacts of leasing. Notice of lease sale. Once BLM has completed its reviews of nominated parcels, it identifies those parcels it has determined may be offered at the lease sale. These eligible parcels are included in a public “notice of competitive lease sale,” which is to be published at least 45 days before the lease sale. BLM may, however, withdraw, or defer, parcels included in the lease sale notice at any time before the lease sale takes place. Such parcels may be subsequently offered in a future lease sale if the agency conducts further review and determines the parcels’ suitability for leasing. Public protest period. The publication of a lease sale notice starts the public protest period, in which concerned entities can file a protest to BLM’s inclusion of any or all parcels in that lease sale notice. Included in the lease sale notice is guidance to the public on the process to follow for protesting BLM’s decision to offer lands identified in the notice. Under BLM guidance, the agency considers only protests received at least 15 calendar days before the date of the lease sale, generally providing 30 days for the public to submit protests. 
BLM dismisses a protest if the protest lacks a statement of reasons to support it. Although BLM aims to review and resolve protests before lease sales, if it cannot do so, it may elect to include protested parcels in a lease sale. In such cases, BLM resolves the protests before issuing leases for those parcels. If BLM finds a protest to have merit, the agency does not issue leases for the affected parcels, and it refunds any payments made. Competitive lease sale. The lease sale itself is a public auction, with leases sold to the highest qualified bidder. Federal oil and gas leases operate under a system in which the lessee receives the right to develop and produce oil and gas resources under a specified time frame and conditions in exchange for certain payments, including a lump-sum payment called a bonus bid. Under the Mineral Leasing Act, “leases shall be issued within 60 days following payment by the successful bidder of the remainder of the bonus bid, if any, and the annual rental for the first lease year,” thus completing the lease transaction. BLM policy also directs agency staff to resolve any protests related to a parcel before issuing the lease on that parcel. The company pays annual rent on the leased parcel until it begins to produce oil or gas (at which time, the lease owner or operator pays royalties on the volume of oil and gas produced) or until the lease expires or ends. Parcels that do not receive competitive bids are available noncompetitively the day after the sale and remain available for leasing for up to 2 years after the competitive lease sale date. The Energy Policy Act of 1992 requires BLM to offer all competitive and noncompetitive leases at 10-year primary terms. Over the past two decades, the number of federal onshore oil and gas leases BLM has issued, as well as the number of acres, have varied. Leasing activity was highest at the beginning of the period, with more than 9,000 leases and over 12 million acres leased in fiscal year 1988. 
Both the number of leases and area leased then fell sharply for several years; in recent years the number has fluctuated between 2,000 and about 4,500 leases, and the area leased has not exceeded 5 million acres (see fig. 1).

The issuance of a lease starts a series of steps toward exploring for and producing oil, gas, or both on the leased land. Along the way, variables such as the market price of oil and gas and the costs of infrastructure influence industry’s estimates of the economic viability of pursuing development on leased lands. Lease owners may analyze available geologic information and conduct seismic or other testing to ascertain the land’s oil or gas potential and find the resource. Companies may also try to acquire leases for surrounding parcels to ensure they have sufficient acreage to make exploration and production worthwhile. If companies believe that economically viable reserves exist on their leased lands, they may begin preparing for drilling, including completing environmental studies required to apply for drilling permits. Before an oil and gas company can drill on federally leased lands, it must submit to BLM an application for a permit to drill. Once such permits are approved, companies may begin exploration or development activities, including building roads to well sites, drilling wells, and constructing pipelines and pipeline facilities needed to transport the oil and gas to market. This entire process can take as little as a few years or as long as 10 years, and ultimately, leased areas may not necessarily contain oil and gas in commercial quantities.

Although BLM has taken steps to collect information related to protests to its lease sales, we found that the information it maintained and made available publicly was incomplete and inconsistent across the four state offices we reviewed. In addition, protester groups have raised concerns about the timing and extent of publicly available information.
In May 2010, the Secretary of the Interior announced several agencywide leasing reforms that are to take place at BLM, some of which may address concerns raised by protester groups, by providing the public with earlier and more consistent data on which parcels may become available for leasing, thereby giving these groups longer to consider or prepare protests. Although BLM has taken steps to collect agencywide protest data, we found that these data were incomplete, inaccurate, inconsistent or ambiguous, and therefore of limited utility. To better track protests, BLM in 2007 required its staff to begin using a new module, which it had added as a component of its LR2000 lease record-keeping system specifically to capture, among other things, information related to lease sale protests. All parcels included in a lease sale notice are to be entered into LR2000, each with an assigned serial number and other basic information, including location and acreage. In addition, for each protested parcel, staff are to enter into the LR2000 module who filed the protest; reasons for the protest; and the outcome, or status, of the protest. The module should therefore contain complete information on every parcel listed in lease sale notices that was protested during the lease sale process. These parcels include parcels deferred before a competitive lease sale, parcels sold at a competitive lease sale, and parcels that did not receive a bid at a competitive lease sale. Concerning the completeness of the data, we found that some data identifying parcels that had been protested were missing from the module, particularly in the case of parcels that were deferred. We compared the module’s data with protest records obtained from BLM state offices for a random sample of 12 of the 53 lease sales held in Colorado, New Mexico, Utah, and Wyoming during fiscal years 2007 through 2009. 
For this sample, we found that the four state offices varied in the extent to which data identifying protested parcels had been entered into the module, ranging from fully complete to missing information on deferred parcels, and potentially missing information on parcels that had not been sold at a competitive lease sale (see table 1). Specifically, data obtained from BLM state offices in our sample showed that 68 parcels were protested and deferred. When we looked for these same data in the module, however, we found that 28 of the parcels—over 40 percent of deferred and protested parcels in our sample—were missing. Although the results from our sample of 12 lease sales cannot be generalized to all 53 lease sales, the extent of missing information we found suggests that information on protested parcels beyond our sample could also be missing. Further, when we examined protest data available in the module for all 53 lease sales, we found that protest information recorded in the module was inaccurate, inconsistent or ambiguous, and therefore of limited utility. For example, we found that the field in the module identifying the status of a protest was left blank or read “pending” for more than 1,100 parcels, even when leases for those parcels had already been issued. In such cases, any protests would presumably have been resolved, either because the protest was deemed to have no merit or because concerns raised in the protests were addressed. We also found that BLM state offices often used the same term in the module to describe different outcomes in the leasing process. For example, in some cases, the term “dismissed” was used for protests to parcels that had been deferred, without indicating whether the agency had deemed the protest to have merit. In other cases, the term “dismissed” was applied to parcels for which protests had been found by the agency to be without merit, and the parcels had been leased. 
In addition, much of the information was entered into the module so generically that it was difficult to discern what the information meant. Specifically, BLM guidance calls for staff to enter the reason for a protest, but the corresponding data field is limited to 255 characters (approximately three lines of text). In practice, staff in the four state offices entered only basic information, such as two- or three-word phrases, without explanation or a reference to fuller information contained in the protests themselves. For example, staff in the Colorado and Wyoming state offices often listed “environmental concerns” as the issue raised in protests. In matching descriptions of issues in the module with the original protest letters, however, we found that “environmental concerns” included a broad range of issues, including concerns over threats to sensitive species or water quality, as well as economic issues such as loss of recreational or agricultural land uses. BLM officials at both headquarters and state offices told us that although staff are entering protest data into the module, they are not using protest information from the module to monitor protest activity but instead rely on other sources of information. According to a BLM headquarters official, to monitor protests to lease sales, headquarters officials rely on regular briefing memos provided by the state offices for each lease sale, rather than review information in the module. Similarly, across each of the four state offices, BLM officials said that instead of the module, they use their own detailed, informal spreadsheets to track protest activity and their responses, which they can easily maintain and organize, often lease sale by lease sale. BLM officials acknowledged that maintaining protest information is important, although they also said that the LR2000 module is not the most efficient or effective way to do so. 
BLM state officials added that not only is the module’s software unable to extract and summarize data easily, but it is also inefficient for entering certain information into the module. For example, if a protest letter covers multiple parcels, initial protest information, including who protested and the reasons for the protest, can be entered into the module once and automatically applied to multiple parcels in a single batch. But after BLM resolves and responds to the protest, the module’s software does not allow the response to be entered once and applied automatically to the batch of parcels, instead forcing the outcome of the protest to be entered separately for each of the parcels. According to BLM state office officials, this process can be time-consuming. (During the period of our review, the total number of parcels in a lease sale notice ranged from 13 to 265.)

We found that the amount of protest-related information BLM makes publicly available varies across the four state offices in our review. For example, the Utah state office is the only office of the four to provide protest letters, as well as BLM’s responses, on its Web site. Similarly, only the New Mexico state office publishes on its Web site an advance list of the parcels under consideration for inclusion in a notice of lease sale. The other three state offices do not make this information available on their Web sites, although a BLM Wyoming state office program manager said the office would provide this information upon request. According to BLM guidance, the agency uses preliminary parcel lists primarily to request concurrence and stipulation recommendations from selected federal or state entities. Generally, such lists are not available to the public and do not constitute official notice of a proposed BLM action, according to the guidance. Nonetheless, protester groups we spoke with stated that they wanted information in a time frame that was more conducive to meaningful public participation.
Specifically, several representatives of protester groups said that because the protest period was generally the one opportunity BLM provided for public input during the lease sale decision-making process, it was critical that they have enough time to thoroughly review each parcel included in a lease sale notice before the formal 30-day protest period. During its land use planning under the Federal Land Policy and Management Act of 1976, as amended (43 U.S.C. § 1701 et seq.), in which BLM determines, among other things, which lands in a planning area may be available for leasing, BLM provides opportunities for public involvement and comment, as well as a specific protest period, before finalizing its land use plans. It is not uncommon, however, for many years to pass between the time the land use plan is issued and when a specific parcel is reviewed for lease sale. Our review focuses only on the information made publicly available during the lease sale process. BLM officials also said that in general, all documents supporting a lease sale decision—including parcel reviews conducted with other federal, state, and local entities; recommendations from BLM field offices regarding the leasing of parcels; and protest letters and decisions—would be available for review by the public upon request.

Some protester groups we spoke with stated that although BLM’s deferral of protested parcels from a lease sale achieved the result they sought, they nevertheless could not determine from publicly available information whether this outcome was tied to reasons raised in their protests. They also said they lacked information from BLM as to whether deferred parcels would be offered at a future sale or to what extent their concerns would be factored into BLM’s future decision making on those parcels.

In May 2010, the Secretary of the Interior announced several agencywide leasing reforms that are to take place at BLM.
Some of these reforms may address concerns raised by protester groups by providing the public with earlier and more consistent data about which parcels may become available for leasing. BLM field offices are to provide a new 30-day public review-and-comment period that precedes the 30-day protest period. Doing so could give stakeholders more time to review parcels and decide whether to file a protest and, if so, more time to prepare it. The reforms also require BLM state offices to make available on their Web sites their responses to protest letters filed during the protest period after a notice of lease sale. According to BLM, among other goals, the intent of these changes (which we have not evaluated) is to provide meaningful public involvement, as well as more predictability and certainty, in the leasing process. Most parcels identified in BLM lease sale notices from fiscal year 2007 through fiscal year 2009 in Colorado, New Mexico, Utah, and Wyoming were protested; protests came from a diverse group of entities, including nongovernmental organizations representing environmental and hunting interests, state and local governments, businesses, and private individuals. These groups and individuals listed a wide variety of reasons for their protests, including concerns that oil and gas activity would (1) impair fish and wildlife habitats or air and water quality or (2) adversely affect recreational or agricultural uses of the land. Overall, we found that 74 percent of parcels whose leases were competitively sold in the 53 lease sales that took place in the four state offices from fiscal years 2007 to 2009 were protested, although this percentage varied considerably by state (see table 2). Similarly, in our review of a sample of lease sales, we found that most parcels were protested.
To gain a further understanding of the extent of protests beyond those parcels competitively sold (in other words, to capture parcels deferred before lease sale and those that did not sell competitively), we examined protest information for our random sample of 12 of the 53 lease sales. Overall, we found that 1,035 of the 1,244 parcels (about 83 percent) in our sample were protested over the 3 fiscal years, although the number of parcels that were protested varied across the state offices (see table 3). We also found that at least half the parcels were protested for each lease sale in our sample (see app. II). Of the 1,035 protested parcels in our sample, 68 parcels (about 7 percent) were deferred before lease sales; BLM dismissed protests for 763 parcels (about 74 percent); and as of March 2010, BLM had yet to issue responses for protests to 204 parcels (about 20 percent). We found that a diverse group of entities filed protests for parcels included in our sample of lease sales, including nongovernmental organizations, governments, businesses, and individuals (see app. II). Many of the nongovernmental organizations were environmental organizations; for example, the Center for Native Ecosystems was listed as a party on 13 of the 86 protest letters across three state offices in our sample. Other nongovernmental organizations representing hunting, fishing, and recreational interests also commonly filed protests. Governments included both state and local governments, such as a state natural resource department and county commissioners. Businesses were represented by ranching and recreational interests, and private individuals were often residents concerned that their lifestyles or properties would be affected by the proposed leasing activity. In many instances, several groups jointly filed a single protest letter. 
For example, for one lease sale in New Mexico, multiple businesses—representing ranching, recreational, and other interests—and several nongovernmental organizations submitted a protest letter. Similarly, in one lease sale in Wyoming, an association of churches signed a protest letter alongside five nongovernmental conservation organizations. In addition, according to BLM officials, the agency also often received “repeat” protests, where the same groups raised issues they had previously raised in protests that BLM had dismissed in earlier lease sales; “blanket” protests, where all the parcels identified in a lease sale notice were protested for general reasons; or “mass duplicate protests,” where multiple entities filed the same letter. For our sample of protest letters, we did not analyze the extent to which any of the protests fell into these categories. In our analysis of each of the 86 protest letters in our sample, we found that the reasons cited for the protests varied considerably. We found that the reasons outlined in the letters generally fell into four broad areas: alleged impacts on fish and wildlife and their habitats; degradation of the natural environment, such as air or water quality; effects on human uses, such as recreation or agriculture; or potential violations of statutes or policies (see app. II). For instance, many of the letters stated that certain parcels identified for oil and gas leasing were located on lands of high conservation value and that oil and gas activities would disrupt important species’ habitats, such as sage grouse breeding and nesting sites; migratory routes and winter ranges for big game, such as elk and mule deer; or the riparian habitats of sensitive fish species, such as cutthroat trout. 
Several of the letters stated that because some of the parcels were located in areas that had been proposed for or had received a wilderness or other conservation designation, leasing the area to oil and gas development would come into direct conflict with that proposed designation. Many of the letters also raised concerns that oil and gas development on the land would affect use of the land for recreational or business-related purposes, including hunting, fishing, hiking, horseback riding, ranching, and other agricultural uses. In addition, entities filing protests frequently raised concerns that offering certain parcels for lease would violate particular statutes or policies. For instance, a number of the protest letters stated that offering certain parcels for lease would be in potential violation of the Federal Land Policy and Management Act because leasing those parcels would be inconsistent with BLM’s current land use plans or responsibility to ensure that public lands were not unnecessarily or unduly degraded. Other protest letters stated that BLM had potentially neglected to (1) conduct sufficient site-specific environmental analyses, (2) identify potential adverse environmental effects, or (3) consider an adequate range of alternatives when selecting certain parcels for lease sale—allegations that, if true, could put BLM in violation of NEPA. We could not measure the extent to which protests influenced BLM’s leasing decisions through the information BLM maintains because the agency did not document the role protests played in its decisions to defer parcels; protests were, however, associated with delays in leasing. In addition, we found that despite industry concerns, protests did not significantly affect bid prices and that the effects of protests on nationwide oil and gas production in the near term are not likely to be significant. 
We could not measure the extent to which protests affected BLM’s lease sale decisions because of limited information BLM maintains on protests. Not only were protest data incomplete, but BLM did not consistently document the reasons for its deferrals or the extent to which it found protests to have merit. In our review of a sample of 12 lease sales in the four state offices, we found that when BLM deferred a protested parcel before the lease sale, the agency did not provide the reasons for the deferral in its response to the protest letter. Rather, BLM stated that because the parcel was deferred, the protest was “dismissed as moot” or the parcel was “not subject to protest.” For such deferrals, BLM did not indicate whether the protest had merit or to what extent, if at all, the protest factored into the agency’s decision to defer the parcel. Similarly, although in principle a protest could also play a role in BLM’s decision to modify the acreage or stipulations on a parcel, in reviewing BLM’s responses to the protest letters in our sample, we could not determine if BLM made any such changes because of a protest. BLM officials explained that many interacting factors influenced leasing decisions, and it was not always possible to specify the extent to which protests affected their decisions. In our sample of protested parcels deferred before lease sales, however, we found that issues similar to those raised in the protest letters were often cited by BLM officials as the reason for deferrals. Specifically, we found that for 56 of the 68 deferred protested parcels in our sample, the reasons BLM cited were similar to issues raised in the protest letters for those same parcels. 
For example, several conservation groups protested the lease sale of several parcels in Utah’s February 2007 lease sale because, according to the protest letter they filed jointly, recent archaeological research showed that a particular mountain gap had special significance as an ancient astronomical observatory. According to BLM officials, BLM deferred the sale of these parcels, on which they had already placed some restrictions to oil and gas development, so they could further review the area’s importance as a cultural resource and the potential need for additional protection. On the other hand, some protested parcels were deferred for administrative reasons unrelated to issues cited in protest letters. For example, the New Mexico state office deferred one parcel from its July 2008 lease sale after it determined that land within the parcel was already under lease. BLM officials provided anecdotal accounts in which protests influenced their decisions, and they acknowledged that the protest process can serve as a check on agency decisions to offer parcels for lease sale. In some instances, according to the officials, protests brought issues to their attention that they may not otherwise have factored into their decision making and therefore ultimately improved their decisions. For example, according to a BLM Colorado state office program manager, the office deferred the lease sale of several parcels after a conservation group alerted the office through the protest process that the parcels potentially contained habitat for a threatened plant species, as well as areas that had been designated for state and national historic and natural preservation. 
Similarly, officials in the New Mexico state office said they deferred the lease sale of multiple parcels after reviewing information submitted by protesters, including a letter submitted by the New Mexico Department of Game and Fish, and determining that the areas contained key habitat for the desert bighorn sheep, a state endangered species, and that further review of the lands’ leasing suitability would therefore be warranted. In addition, some protests resulted in appeals to the Interior Board of Land Appeals or litigation in federal court, which could have ultimately affected BLM’s leasing decisions. Although data were not available to determine how many appeals or legal challenges were associated with the protests submitted during the period of our review, we did examine appeals and litigation associated with our sample of lease sales. Within our sample, one appeal to the Interior Board of Land Appeals was filed by a group that had protested parcels included in Wyoming’s April 2008 lease sale. The board dismissed this appeal in October 2009, holding that the protesting organization lacked standing to appeal because it failed to establish that it or any of its members had used, or in the future would use, any of the protested parcels. In addition, groups filed lawsuits challenging BLM’s lease sale decisions from New Mexico’s July 2008 lease sale and Colorado’s August 2008 lease sale; both cases were pending as of May 2010 (see app. III). We found that a majority of leases for protested parcels in the four state offices from fiscal year 2007 through 2009 were issued after the 60-day window specified in the Mineral Leasing Act. BLM officials explained that, starting in the early 2000s, the overall number of protests rose in tandem with an increase in oil and gas development activities and an increase in activities in contentious areas, such as those potentially containing wilderness-quality lands or areas that had not before been leased for oil and gas. 
According to BLM officials, responding to the large number of protests, some of which raised complex issues, increased staff workloads and made it difficult for them to respond to protests and issue leases within the 60-day window. When we examined lease issuance time frames for all competitively sold leases for parcels from the 53 lease sales held in the four state offices during fiscal years 2007 through 2009, we found that BLM was able to issue leases within the 60-day window for almost all unprotested parcels. But BLM was not able to meet this window for almost 91 percent of the protested parcels it sold competitively during this time. The percentage varied by state office: In New Mexico the percentage was about 52 percent, while in the other three state offices it was more than 91 percent, ranging up to almost 100 percent in Wyoming (see table 4). The Wyoming state office prepared one consolidated response to all protest letters filed for a particular lease sale, and thus, a BLM official explained, leases were not issued for any protested parcels until concerns raised in each of the protests were resolved and BLM had responded. The time it took BLM to issue the leases also varied. For the protested parcels for which leases were issued, about 46 percent were issued within 6 months, about 54 percent took between 6 months and 1 year, and less than 1 percent took up to 2 years. In addition, as of March 2010, BLM had not issued leases for more than 1,200 protested parcels (representing about 24 percent of all parcels sold competitively during this time), the majority of which were from lease sales held during fiscal year 2009 in Utah and Wyoming.
While our analysis is consistent with the assertion from BLM officials that an increased workload from protests resulted in delays issuing leases, it was not sufficient to establish a cause-and-effect relationship because the available data did not allow us to examine whether factors other than protests, such as other workload demands in the state office, may also have contributed to lease issuance delays. We found that protest activity did not systematically decrease bid prices for leases during the period we reviewed and that overall effects on near-term nationwide oil and gas production are not likely to be significant, despite industry concerns over protests and delays in issuing leases. Specifically, industry officials we spoke with said that if an energy company cannot count on timely issuance of leases, it could be hard-pressed to make fully informed decisions on how to develop a group of leased parcels. If the lease on one parcel within a group is delayed, for example, a company may not find it cost-effective or feasible to develop the rest of the parcels in that group. In some cases, companies are concerned that capital may be tied up while BLM is resolving protests and deciding whether to issue the companies’ leases. Because companies make payments to BLM at the time of lease sale, they may find themselves financially constrained while awaiting BLM’s decision and at the same time have no assurance that BLM will grant their leases. According to industry representatives, uncertainty over protested parcels—including delays in lease issuance, parcels’ ultimate availability, and additional restrictions that may be placed on them—might lower the amount potential lessees may be willing to bid for those parcels. In addition, industry representatives expressed concern that the delays and uncertainty related to protests could result in reduced acreage available for leasing and therefore ultimately also limit domestic oil and gas production.
The results of our analysis showed no systematic effect of measures of protest activity on bid prices, although our analysis did not account for all possible determinants of bid prices. For example, when we compared the average bid price per acre for protested parcels against the average bid price per acre for unprotested parcels for lease sales held in the four state offices during fiscal years 2007 through 2009, we did not find a systematic effect of protest activity on bid price. In the 29 lease sales where estimation was possible, we found that for 3 lease sales in Wyoming, the average bid price per acre was significantly higher for unprotested parcels than for protested ones. In 3 other lease sales in Colorado, New Mexico, and Utah, however, we found a significant association between higher bid price per acre and protested parcels. In the 23 other sales, we found no statistically significant correlation. Similarly, when we analyzed the number of protests per parcel and average bid prices, we did not find a systematic effect. Here, in the 36 lease sales where estimation was possible, we found that for 4 of them—1 in Colorado and 3 in Wyoming— higher average bid prices per acre were associated with fewer protests. For 2 lease sales in New Mexico and Utah, the converse was true, and lower average bid prices per acre were associated with fewer protests. In the 30 other sales, there was no significant relationship. Finally, for the number of days of delay in issuing leases on protested parcels, we found no consistently significant statistical relationship with lower average bid price. While industry representatives also expressed concern that protest activity could result in reduced acreage available for leasing, it was not possible to determine the extent to which acreage was withheld from leasing as a result of protests because BLM did not document whether protests influenced its decisions to defer parcels from lease sales. 
During the period of our review, about 1 million acres, or 15 percent, of the approximately 6.9 million acres of land included in the lease sale notices in the four state offices were deferred before lease sale. Given the limitations of BLM’s data, however, we could not determine how much of this deferred acreage was protested or, for deferred acreage that was protested, whether it was subsequently leased in a later sale. This deferred acreage thus represents an upper limit to the potential acreage that could have been withheld from leasing because of protests to date in the four state offices. In addition, BLM had not yet resolved protests filed on another 1.4 million acres, or about 20 percent, of the approximately 6.9 million acres of land identified in the lease sale notices, and resolution of many of these protests has been on hold following direction from BLM headquarters to await specific policy changes before resolving pending protests. For instance, according to officials in the Wyoming state office, the office was directed not to issue protest responses for its protested parcels—more than 1,000 parcels, covering approximately 1.2 million acres, protested during our review period—until the parcels’ suitability for leasing was reviewed in light of new guidance covering sage grouse habitat and wilderness policy. As a result, it is too early to determine the effects of protests on the acreage where protests have yet to be resolved, and ultimately it may not be possible to distinguish the effects of protests from the effects of simultaneous policy changes. Further, because oil and gas producers generally have up to 10 years from a lease’s issuance in which they can begin developing the lease, the effect of leasing decisions may not be felt for several years after the lease sale. At the national level, the near-term effect of protests on U.S.
oil and gas production is likely to be relatively modest because federal lands account for a small fraction of the total onshore and offshore nationwide oil and gas output. Specifically, in fiscal year 2009, federal lands accounted for 5.8 percent of the nation’s total oil production and 12.8 percent of total natural gas production. Assuming the federal share of production remains comparable in the future, and production on federal lands falls by 15 percent (the percentage of deferred acreage), nationwide oil production would be reduced by 0.9 percent, and natural gas production would fall by 1.9 percent. If, in addition to the 15 percent of deferred acreage, BLM were to withdraw the acreage represented by the additional 20 percent of protested parcels whose protest decisions were still pending—a total reduction of 35 percent—the corresponding combined loss nationwide would be 2.0 percent for oil and 4.5 percent for natural gas. With the current supply of federal lands already under lease, however, oil and gas development and production may be able to increase along with any demand for such production. Of federal lands that are currently leased, 12 million acres are producing oil or gas, whereas 33 million acres have not been developed. Factoring in both federal onshore and offshore leases, a total of 67 million acres have not been developed, while 22 million acres are producing oil or natural gas. While they may not all contain viable resources, some of these 67 million acres may provide a buffer for the energy industry—federal lands or waters that could be developed—if producers wanted to respond to market conditions with a rapid rise in development and production activity. Energy industry representatives said that while various factors influence a company’s decision to develop leases, the prices of oil and gas are a big driver. 
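The proportional arithmetic behind the nationwide production estimates above can be reproduced with a short calculation. This is a sketch only: the federal-share and acreage-reduction percentages come from the report, but the function name is ours, and the approach assumes (as the report does) that production on federal lands falls in direct proportion to the acreage withheld from leasing.

```python
def nationwide_reduction(federal_share: float, acreage_cut: float) -> float:
    """Estimated nationwide production loss, as a fraction, if production
    on federal lands falls in proportion to acreage withheld from leasing."""
    return federal_share * acreage_cut

OIL_SHARE, GAS_SHARE = 0.058, 0.128  # federal share of FY2009 U.S. production

# 15 percent of acreage deferred before lease sale
print(round(nationwide_reduction(OIL_SHARE, 0.15) * 100, 1))  # 0.9 (percent of oil)
print(round(nationwide_reduction(GAS_SHARE, 0.15) * 100, 1))  # 1.9 (percent of gas)

# 35 percent: deferred acreage plus acreage with protest decisions still pending
print(round(nationwide_reduction(OIL_SHARE, 0.35) * 100, 1))  # 2.0 (percent of oil)
print(round(nationwide_reduction(GAS_SHARE, 0.35) * 100, 1))  # 4.5 (percent of gas)
```

The figures match the report's stated reductions, which confirms that the estimates are simple products of the federal production share and the fraction of acreage assumed withheld.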
We examined the movements of oil and gas prices from 1990 through 2009 in relation to development activities as measured by oil and gas wells drilled and found that percentage changes in the prices of oil and gas closely paralleled percentage changes in development activity (see fig. 2). The peaks and troughs in the patterns of these variables largely overlapped, strongly suggesting that during the past two decades, development activity reacted quickly and proportionally to changes in the prices of oil and gas. BLM must continue to balance interest in developing the nation’s domestic sources of oil and natural gas on federal lands with ensuring that such development is done in an environmentally responsible manner and in line with its mandate to manage these lands for multiple uses. The protest period provided before new oil and gas leases are issued allows the public an opportunity to comment on a parcel before the right to develop that parcel passes to a private company, and protests provide an opportunity for BLM to carefully examine lease sale decisions in light of the issues that protests raise. This protest process has its trade-offs, however. Specifically, issues raised in protests can help BLM ensure that the best leasing decisions are made, but protests have also been associated with delays and may increase industry uncertainty over the availability of federal lands for oil and gas leasing. Although BLM has taken steps to collect agencywide protest data, when we tried to evaluate the effects of protests, we were hindered by the incompleteness, inconsistency, and ambiguity of these data. Protester groups have also been dissatisfied with BLM’s lack of protest-related information. Without more robust protest information, BLM, Congress, and the public lack a full picture of protest activity and how protests affect leasing decisions. 
As Interior reforms the leasing process, BLM has an ideal opportunity to (1) revisit how it maintains protest-related information and makes it publicly available and (2) develop the means to respond to protests and issue leases with fewer delays, without compromising the thoroughness of review. To improve the efficiency and transparency of BLM’s process with regard to protests of its lease sale decisions and to strengthen how BLM carries out its responsibilities under the Mineral Leasing Act, we recommend that the Secretary of the Interior direct the Director of BLM to take the following two actions: first, revisit the agency’s use of the module for tracking protest information and, in so doing, determine and implement an approach for collecting protest information agencywide that is complete, consistent, and available to the public; and second, in implementing the Secretary of the Interior’s leasing policy reform issued in May 2010, take steps to improve (1) the transparency of leasing information provided to the public, including information to explain the basis of agency decisions to include or exclude particular parcels in a lease sale and, to the extent feasible, documentation of the role, if any, that protests played in final lease decisions, and (2) the timeliness of lease issuance, without compromising the thoroughness of review. We provided the Department of the Interior with a draft of this report for review and comment and received a written comment letter from Interior (see app. IV). In its written comments, Interior generally agreed with our findings and concurred with our recommendations. The department also identified specific actions it has taken and plans to take to implement these recommendations.
With regard to our first recommendation, about revisiting BLM’s use of the module for tracking protest information, Interior wrote that by the end of calendar year 2011, BLM will determine if the module can be redesigned or if another application would be more effective and will implement an approach to better track protest-related information. In addressing our second recommendation, on improving the transparency of its lease decisions and the timeliness of lease issuance, Interior wrote that its onshore leasing reform policies will provide the increased public participation, transparency, and timeliness called for in the recommendation. Interior’s letter states that with leasing reform, there will be additional environmental review and a new opportunity for public comment and that adjustments to the “lease parcel list” may be made on the basis of public comments received. We stress, however, that as any adjustments to parcel lists are made, it will be important for BLM to explain and document the rationale behind its decisions to include or exclude particular parcels in a lease sale. Interior’s letter also stated that the department believes its ability to adequately address a protest within required time frames will be addressed by posting the lease sale notice 90 days before lease sale (instead of 45 days), extending the period BLM has to evaluate and respond to protests before a lease sale. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, Secretary of the Interior, Director of the Bureau of Land Management, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report examines (1) the extent to which the Bureau of Land Management (BLM) maintains and makes publicly available information related to protests, (2) the extent to which parcels were protested and the nature of protests, and (3) the effects of protests on BLM’s lease sale decisions and on oil and gas development activities. For all three report objectives, we reviewed relevant laws, regulations, and Department of the Interior and BLM guidance. We interviewed officials in BLM headquarters and visited and interviewed officials from BLM state offices in Colorado, New Mexico, Utah, and Wyoming. (The New Mexico state office has jurisdiction over Kansas, Oklahoma, and Texas, as well as New Mexico, and the Wyoming state office has jurisdiction over Wyoming and Nebraska. The data presented in this report for the New Mexico and Wyoming state offices include data for all the states under their jurisdiction.) We selected these four states because collectively they accounted for 69 percent of oil and 94 percent of natural gas produced on federal lands from fiscal year 2007 through fiscal year 2009 and, according to BLM headquarters officials with whom we spoke, received a high number of protests to their lease sales over this same period. In addition, we interviewed stakeholder groups, including representatives from the energy industry, state government, and nongovernmental organizations, to discuss their concerns about BLM’s lease sale and protest process, including the effects—both actual and potential—associated with protests to BLM oil and gas lease sales. To conduct our work, we obtained and analyzed BLM data from three different sources. 
First, using lease sale records from the BLM state offices for the 53 lease sales held in the four selected state offices from fiscal year 2007 through 2009, we gathered data on each of the parcels contained in the lease sale notices, including parcel number, acreage amount, and whether the parcel was deferred or the acreage was modified before lease sale. The 53 lease sales comprised 6,451 parcels covering 6.9 million acres of land. For those parcels that were offered at lease sale, we gathered data on final acreage amounts and whether the parcels sold competitively (that is, during the lease sale auction; parcels unsold at auction may be leased noncompetitively later). For parcels that sold competitively, we also recorded the winning bid amount per acre, as well as the total bid amount. Second, we obtained lease information from the agency’s lease record-keeping system, Legacy Rehost System 2000 (LR2000), for all leases issued in the four state offices from fiscal year 2007 through March 25, 2010, including the type of lease (competitive or noncompetitive), the lease sale date, and the date the lease was issued. Third, for fiscal years 2007-2009, we obtained protest information from BLM’s “public challenge module,” which it developed as a component of LR2000 to track protests, among other things, to its lease sales. (BLM required staff to begin entering protest information in the module starting in 2007.) Using unique identifiers assigned to each parcel, we then matched the records obtained from the three data sources and merged them to conduct various data analyses. To determine the reliability of the three data sources, we interviewed officials responsible for the data and data systems; reviewed system documentation including manuals, users’ guides, and guidance; and performed electronic and logic tests of the data.
On the basis of our assessment, we concluded that the lease sale record data and the LR2000 lease data were sufficiently reliable for our purposes. To further assess the completeness of the protest information contained in the module, we compared the module’s data with protest records obtained from BLM state offices for a random sample of 12 of the 53 lease sales held in the four state offices during fiscal years 2007-2009. The 12 lease sales comprised 1,244 parcels covering roughly 1.4 million acres of land. From our assessment of the module, we found that it did not contain complete records: While the module was sufficiently reliable in containing parcels that sold competitively, it did not always contain records for parcels BLM withdrew (deferred) before lease sales. Additionally, we found that the protest-related information the module did contain was not always complete, accurate, or consistent and therefore was not reliable. To determine what information BLM makes publicly available related to protests, we reviewed the process followed by each BLM state office for reviewing protests and providing information about such decisions to the public, which included interviewing BLM state office officials, reviewing protest-related information available on BLM’s Web site and through other sources, and synthesizing information gathered during our interviews with stakeholder groups. To determine the extent to which parcels were protested and the nature of protests, we compared BLM’s lease sale records with the data available in BLM’s public challenge module. In addition, we further reviewed protest information for our random sample of 12 lease sales. Specifically, for each lease sale in our sample, we obtained and analyzed all submitted protest letters, which totaled 86, and BLM’s responses to these letters. 
We analyzed information on whether each parcel included in the notice for each of these lease sales was protested and, for protested parcels, the outcome of the protests, including whether BLM’s protest decisions were subsequently appealed or litigated. We also analyzed information on the groups filing the protests and their reasons for filing them (see app. II). For protested parcels BLM deferred from lease sales, we also interviewed BLM state office leasing officials about the reasons they deferred these parcels and compared their reasons with the protest letters for the same parcels. To determine the extent to which protests could affect the timing of BLM’s lease sale decisions, we analyzed data on all parcels BLM sold competitively in the four state offices during fiscal years 2007-2009, using BLM’s lease sale records, lease issuance dates from LR2000, and protest information from the public challenge module. Specifically, for all parcels sold competitively during this period whose leases had been issued or remained unissued as of March 25, 2010, we calculated the length of time between each parcel’s sale date and lease issuance date. We based our determination of whether a lease was issued late on the date of the lease sale plus 15 days to allow for the 10 business days that winning bidders have to submit required payments to BLM. We cross-tabulated the data into a three-way table and examined the association among whether a parcel was issued late, whether it was protested, and the state in which the parcel was located. In conducting tests of statistical significance, we found that protested parcels were significantly more likely to be issued late, even after accounting for state office. 
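The timing calculation and three-way tabulation can be sketched as follows. Treating the lateness cutoff as the 60-day statutory window plus the 15-day payment allowance is one reading of the methodology described above, and all parcel records here are hypothetical.

```python
# Sketch of the lease-timing analysis: days from sale to issuance, a
# lateness flag, and a (state, protested, late) cross-tabulation.
# The 75-day cutoff (60-day window plus 15-day payment allowance) is an
# assumption; the parcel records are hypothetical.
from datetime import date
from collections import Counter

DEADLINE_DAYS = 60 + 15  # statutory window plus payment allowance

parcels = [
    {"state": "CO", "sale": date(2008, 5, 8),   "issued": date(2008, 6, 2),  "protested": False},
    {"state": "CO", "sale": date(2008, 5, 8),   "issued": date(2009, 1, 20), "protested": True},
    {"state": "UT", "sale": date(2008, 12, 19), "issued": date(2009, 5, 1),  "protested": True},
]

def is_late(parcel):
    """A lease is late if issued more than DEADLINE_DAYS after the sale."""
    return (parcel["issued"] - parcel["sale"]).days > DEADLINE_DAYS

# Three-way table: (state, protested, late) -> count of parcels.
crosstab = Counter((p["state"], p["protested"], is_late(p)) for p in parcels)
```

A table keyed this way supports the kind of test reported in the text: comparing the share of late leases among protested and unprotested parcels within each state office.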
Given the data available, however, we were unable to examine the association between whether a lease was issued late and other potentially relevant factors, including workload in the state offices, the number of protests, the validity of concerns raised in protest letters, and the amount of review that was required by BLM to resolve protests. Thus, although our analysis is consistent with the hypothesis that protests contribute to lease delays, it is not sufficient to establish a cause-and-effect relationship. To examine the extent to which protests could affect the bid prices of leases, we analyzed BLM’s lease sale and protest data for all competitively sold leases for parcels in the four state offices during fiscal years 2007-2009. Specifically, to determine if bids and protest activity were associated, we conducted several statistical analyses. We analyzed data on the price of bids per acre and several measures of protest activity, including whether the lease sale was protested, the number of protests received for a specific parcel, and various measures of delay in issuing leases on protested parcels after a lease sale. We conducted a separate statistical analysis for each lease sale in each of the four state offices, which allowed us to control for location (at the state office level) and for factors that might vary over time, such as oil and natural gas prices. To analyze the extent to which protests could affect oil and gas development activities, we collected and analyzed national data on oil and gas development and production activities, specifically, the number of exploratory and developmental wells drilled and data on oil and gas prices from the Bureau of Labor Statistics and the Energy Information Administration. 
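The simplest version of the per-sale bid analysis is an ordinary least-squares slope of bid-per-acre on a protested indicator within one lease sale; with a 0/1 regressor, the slope equals the difference in mean bids between protested and unprotested parcels. The data below are hypothetical, and the report's actual analyses were more elaborate.

```python
# Minimal sketch of testing whether bid prices and protest activity were
# associated within a single lease sale. Bid and protest values are
# hypothetical illustrations.

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

protested = [0, 0, 1, 1, 1, 0]                   # 1 = parcel was protested
bid_per_acre = [50.0, 60.0, 55.0, 65.0, 45.0, 40.0]

# With a binary regressor this equals mean(bid | protested) - mean(bid | not).
slope = ols_slope(protested, bid_per_acre)
```

Running the same regression separately within each lease sale, as the text describes, holds the state office and the sale date (and hence prevailing commodity prices) fixed.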
To assess the ability of development and production in the oil and gas industries to respond to changes in oil and natural gas prices, we analyzed how movements in those prices from 1990 through 2009 changed in relation to development and production activities over the same period. To determine the proportion of federal lands that were leased, the proportion leased and under production, and how these proportions compared with total oil and gas production nationwide, we obtained from BLM and analyzed oil and gas leasing and production data on federal lands, and we obtained U.S. production data from the Energy Information Administration; these data were for fiscal year 2009. We assessed the reliability of these data and found them to be sufficiently reliable for the purposes of this report. We conducted this performance audit from June 2009 through July 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following tables present information based on our review of a sample of 12 lease sales held in the state offices of Colorado, New Mexico, Utah, and Wyoming from fiscal year 2007 through fiscal year 2009. The tables are based on a total of 86 protest letters associated with the 12 sampled lease sales. To analyze the reasons for filing protests, we reviewed each of the 86 protest letters associated with the 12 lease sales in our sample. To document the concerns raised in each letter, we developed categories through an inductive process that involved reviewing a small number of protest letters and then identifying natural groupings, or categories, of concerns. 
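One simple way to examine how development activity moves with prices is to correlate year-over-year changes in the two series. The figures below are illustrative placeholders, not actual Energy Information Administration or Bureau of Labor Statistics data.

```python
# Sketch of relating price movements to drilling activity: Pearson
# correlation between year-over-year percentage changes in a price
# series and in a well-count series. All numbers are illustrative.
from math import sqrt

prices = [3.0, 4.5, 4.0, 6.0, 7.5]       # hypothetical annual gas prices
wells  = [900, 1100, 1050, 1300, 1500]   # hypothetical wells drilled per year

def pct_changes(series):
    """Year-over-year percentage change of a series."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = correlation(pct_changes(prices), pct_changes(wells))
```

A strongly positive correlation in such series would indicate that drilling activity responds to price movements, which is the responsiveness question the analysis addresses.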
Two analysts then independently reviewed the letters and compared the categories. Table 7 presents the overall categories of concern we encountered and illustrates the types of concerns we identified in reviewing the protest letters. This appendix describes litigation surrounding several of BLM’s oil and gas lease sales held during fiscal years 2007-2009: New Mexico’s April and July 2008 lease sales of parcels across New Mexico, Colorado’s August 2008 lease sale of parcels atop the Roan Plateau in northwestern Colorado, and Utah’s December 2008 lease sale of parcels in eastern Utah. In March 2008, several environmental and community organizations filed a protest opposing the leasing of all 51 parcels located in the state of New Mexico that BLM identified in its lease sale notice for its April 2008 lease sale, arguing, among other things, that BLM failed to adequately analyze the environmental effects of greenhouse gas emissions that would result from past, present, and future oil and gas development on BLM lands. In April 2008, BLM carried out the lease sale after removing 40 of the 100 originally proposed parcels from the sale, and in July it dismissed the protests on the remaining parcels that were offered at the lease sale. The agency noted that on receipt of the groups’ protest letter, it directed each BLM field office in New Mexico to prepare a new environmental assessment to analyze the potential impacts from lease exploration and development and to account for potential greenhouse gases during exploration, development, and transportation. In May 2008, BLM announced the next lease sale, identifying 80 parcels, to be held in July. Numerous groups filed protests against all the parcels located in New Mexico, raising issues similar to those that were raised at the April sale. BLM field offices completed their greenhouse gas environmental assessments just before the July sale. 
BLM held the sale in July, offering 78 parcels for lease, and dismissed all the protests the following October. In January 2009, several of the groups that had filed protests challenged the April and July 2008 New Mexico lease sales in federal court. The groups argued, among other things, that BLM’s planning and decision-making process for the lease sales failed to address the global-warming impacts of the oil and gas development, in violation of the National Environmental Policy Act, the Federal Land Policy and Management Act, and Department of the Interior Secretarial Order 3226. As of May 2010, this case was pending. In June 2007, BLM approved a resource management plan providing for oil and gas development on the Roan Plateau. In August 2008, BLM conducted a lease sale including parcels on top of the plateau, all of which were protested by multiple groups. The Assistant Secretary of the Interior for Lands and Minerals dismissed the protests related to the parcels on the plateau, and BLM issued these leases in September 2008. Environmental organizations filed a lawsuit challenging both the resource management plan and the lease sale, arguing that these actions violated the National Environmental Policy Act and the Federal Land Policy and Management Act. Four settlement conferences have occurred, the most recent in May 2010, but the parties did not reach agreement, and as of May 2010, the case was pending. See table 8 for a more detailed chronology of the events surrounding the Roan Plateau lease sale. In December 2008, BLM’s Utah state office held a lease sale offering over 100 parcels in eastern Utah, many of which were protested. 
In January 2009, in response to a lawsuit by several environmental groups, a federal district court entered a temporary injunction against the sale of 77 of the parcels after concluding that the groups had established a likelihood of success on their claims that the lease sale violated the National Environmental Policy Act, the Federal Land Policy and Management Act, and the National Historic Preservation Act. In February 2009, the Secretary of the Interior concluded that the issues raised by the court, along with other concerns that had been raised about the lease sale, merited a special review. Citing controversy over the degree of coordination between BLM and the National Park Service regarding some of the parcels offered for sale, as well as over the adequacy of BLM’s environmental analyses associated with the parcels, the Secretary issued a memorandum to BLM’s Utah state office, directing it to withdraw the 77 parcels covered by the injunction from further consideration in this lease sale. In May 2009, several winning bidders and three Utah counties filed suits in federal district court in Utah, seeking to compel the government to issue the leases. The bidders and counties argued, among other things, that the Secretary’s action violated a provision of the Mineral Leasing Act stating that “leases shall be issued within 60 days following payment by the successful bidder of the remainder of the bonus bid, if any, and the annual rental for the first lease year.” The government contends that nothing in the 60-day provision prevents the Secretary from withdrawing a parcel from consideration in a lease sale at any time before lease issuance. As of May 2010, these cases were still pending. In addition to the individual named above, Tim Minelli, Assistant Director; Catherine Bombico; Adam Bonnifield; Mark A. Braza; Ellen W. Chu; Bernice Dawson; Justin Fisher; Charlotte Gamble; Alyssa M. Hundrup; Richard P. 
Johnson; Michael Kendix; Michael Krafve; Jena Sinkfield; Douglas Sloane; and Jeff Tessin made key contributions to this report.
The development of oil and natural gas resources on federal lands contributes to domestic energy production but also results in concerns over potential impacts on those lands. Numerous public protests about oil and gas lease sales have been filed with the Bureau of Land Management (BLM), which manages these federal resources. GAO was asked to examine (1) the extent to which BLM maintains and makes publicly available information related to protests, (2) the extent to which parcels were protested and the nature of protests, and (3) the effects of protests on BLM's lease sale decisions and on oil and gas development activities. To address these questions, GAO examined laws, regulations, and guidance; BLM's agencywide lease record-keeping system; lease sale records for the 53 lease sales held in the four BLM state offices of Colorado, New Mexico, Utah, and Wyoming during fiscal years 2007-2009; and protest data from a random sample of 12 of the 53 lease sales. GAO also interviewed BLM officials and industry and protester groups. While BLM has taken steps to collect agency-wide protest data, the data it maintains and makes publicly available are limited. Although in 2007 BLM required its staff to begin using a module, added to its lease record-keeping system, to capture information related to lease sale protests, GAO found that the information BLM collected was incomplete and inconsistent across the four reviewed BLM state offices and, thus, of limited utility. Moreover, in the absence of a written BLM policy on protest-related information the agency is to make publicly available during the leasing process, each state office developed its own practices, resulting in state-by-state variation in what protest-related information was made available. As a result, protester groups expressed frustration with both the extent and timing of protest-related information provided by BLM. 
In May 2010, the Secretary of the Interior announced several agency-wide leasing reforms that are to take place at BLM. Some of these reforms may address concerns raised by protester groups, by providing earlier opportunities for public input in the lease sale process, thereby potentially giving stakeholders more time to assess parcels and decide whether to file a protest. A diverse group of entities protested the majority of parcels BLM identified in its lease sale notices during fiscal years 2007 through 2009 in the four states, for a variety of reasons. GAO found that 74 percent of parcels whose leases were sold competitively during this period by BLM state offices in Colorado, New Mexico, Utah, and Wyoming were protested. In examining a random sample of lease sales, GAO found that protests came from various entities, including nongovernmental organizations representing environmental and hunting interests, state and local governments, businesses, and private individuals. Their reasons for protesting ranged from concerns over wildlife habitat to air or water quality to loss of recreational or agricultural land uses. The extent to which protests influenced BLM's leasing decisions could not be measured because BLM's information does not include the role protests played in its decisions to withdraw parcels from lease sale. Regardless, BLM officials stated that the protest process can serve as a check on agency decisions to offer parcels for lease. In reviewing BLM's lease sale data in the four selected states during fiscal years 2007 through 2009, GAO found that 91 percent of the time, BLM was unable to issue leases on protested parcels within the 60-day window specified in the Mineral Leasing Act. Industry groups expressed concern that these delays increased the cost and risk associated with leasing federal lands. 
GAO found that, despite industry concerns, protest activity and delayed leasing have not significantly affected bid prices for leases; if protests or subsequent delays added significantly to industry cost or risk, it would be expected that the value of, and therefore bids for, protested parcels would be reduced. In addition, because federal lands account for a small fraction of the total onshore and offshore nationwide oil and gas output, the effects of protests against BLM leasing decisions on U.S. oil and gas production are likely to be relatively modest. GAO recommends that BLM (1) revisit the way it tracks protest information and in so doing ensure that complete and consistent information is collected and made publicly available and (2) improve the transparency of leasing decisions and the timeliness of lease issuance. Interior concurred with GAO's recommendations.
Natural catastrophes have a low probability of occurrence, but when they do occur the consequences can be of high severity. Insurance companies face catastrophe risk associated with their provision of property-casualty insurance. Major reinsurers are insurance companies with global insurance and reinsurance operations. Insurers and reinsurers are subject to “moral hazard,” which is “the incentive created by insurance that induces those insured to undertake greater risk than if they were uninsured, because the negative consequences are passed through to the insurer.” Therefore, reinsurers have incentives to limit the possibility that ceding insurers take actions that would create negative consequences for the reinsurer. Indemnity reinsurance contracts have the potential to increase a reinsurer’s risk exposure to the extent that the reinsurer might be unaware of the underwriting and claims settlement practices of the ceding insurer. Traditional reinsurance is generally indemnity-based and tailored to the needs of the ceding company because traditional reinsurance depends, in part, on well-developed contractual and business relationships between insurers and reinsurers. When reinsurance coverage is not indemnity- based, the ceding insurer is exposed to basis risk—the risk that there may be a difference between the payment received from the reinsurance coverage and the actual accrued claims of the ceding insurance company. Property-casualty reinsurance agreements are typically single-event, excess of loss contracts. A single-event contract means that the reinsurer’s obligations are specific to an event, such as a hurricane in a contractually specified geographic area. Excess of loss means that the reinsurer makes payments that are based on a contractually specified share of claims in excess of a minimum amount, subject to a maximum claim payment. 
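The excess of loss structure just described has a simple payoff formula: the reinsurer pays a specified share of claims above an attachment point (the "minimum amount"), capped at the contract limit. The parameter values below are hypothetical.

```python
# Sketch of the reinsurer's payment under a single-event, excess of loss
# contract. Attachment, limit, and share values are hypothetical.

def excess_of_loss_payment(claims, attachment, limit, share=1.0):
    """Pay `share` of claims above `attachment`, capped at `limit` of cover."""
    return share * min(max(claims - attachment, 0.0), limit)

# Hypothetical contract: 90% of claims above $50M, on a $100M layer.
pay_small = excess_of_loss_payment(40e6, 50e6, 100e6, share=0.9)   # below attachment
pay_mid   = excess_of_loss_payment(120e6, 50e6, 100e6, share=0.9)  # partial layer
pay_big   = excess_of_loss_payment(300e6, 50e6, 100e6, share=0.9)  # layer exhausted
```

The cap is what makes the reinsurer's exposure to any single event bounded, while the attachment point leaves routine losses with the ceding insurer.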
The financial industry has developed instruments through which primary financial products, such as lending or insurance, can be funded in the capital markets. Lenders and insurers continue to provide the primary products to the customers, but these financial instruments allow the funding of the products to be “unbundled” from the lending and insurance business; instead, the funding comes from securities sold to capital market investors. This process, called securitization, can give insurers access to the large financial resources of the capital markets. With respect to funding catastrophe risk in property-casualty insurance, the risk of investing is tied to the potential occurrence of a specified catastrophic event and to the quality of underwriting by insurers and reinsurers. In evaluating risk, capital market investors face the issue of moral hazard because in the absence of well-developed contractual and business relationships with primary market insurers, capital market investors might be unable to monitor the primary insurance company’s underwriting and claims settlement practices that can act to increase risk. Nonindemnity-based coverage is a means to limit moral hazard for the investor by tying payment to industry loss indexes, parametric measures, and models of claims payments rather than actual claims that could be affected by lax underwriting standards or lax settlement of claims by the ceding insurer. However, such coverage introduces basis risk for the sponsoring insurance company. Insurance companies are not regulated at the federal level but are to comply with the laws of the states in which they operate. The insurance regulators of the 50 states, the District of Columbia, and U.S. territories have created NAIC to coordinate regulation of multistate insurers. NAIC serves as a forum for the development of uniform policy, and its committees develop model laws and regulations governing the U.S. insurance industry. 
Although not required to do so, most states either adopt model laws or modify them to meet their specific needs and conditions. NAIC also has established statutory accounting standards, which are intended for use by insurance departments, insurers, and auditors when state statutes or regulations are silent. If not in conflict with state statutes and regulation, or in cases when the state statutes are silent, statutory accounting standards promulgated by NAIC are intended to apply. In addition to statutory accounting standards, insurers use GAAP, which are promulgated by FASB and are designed to meet the varying needs of both insurance and noninsurance companies. Although NAIC’s statutory accounting standards use the framework established by GAAP, GAAP stresses the measurement of earnings from period to period, while NAIC’s standards stress the measurement of ability to pay claims in the future. NAIC has also developed the Risk-Based Capital for Insurers Model Act, adopted in some form in all states, which imposes automatic requirements on insurers to file plans of action when their capital falls below minimum standards. Natural catastrophes are infrequent events that can cause severe financial losses. Traditional reinsurance helps insurance companies respond to severe losses by limiting their individual liability on specific risks and thereby increases individual insurers’ capacity. However, insurance companies have been faced with higher reinsurance premiums for certain coverage following significant past natural catastrophes. Higher costs of reinsurance helped spur the development of risk-linked securities as an alternative to traditional reinsurance. Although natural catastrophes occur relatively infrequently compared with other insured events, they can affect large numbers of persons as well as their property. The U.S. 
property and casualty insurance industry paid, on average, $9.7 billion in catastrophe-related claims per year from 1989 through 2001, and the amount of claims paid can be highly variable. More than 68 million Americans now live in hurricane-vulnerable coastal areas. Eighty percent of Californians live near active faults. When natural disasters occur, they cause damage and destruction, which may or may not be covered by insurance. The four most costly types of insured catastrophic perils in the United States are earthquakes, hurricanes, tornadoes, and hailstorms, although earthquakes and hurricanes pose the most significant catastrophe risk in insurance markets. Figure 1 shows the combined relative risk of these hazards across the United States. In August 1992, Hurricane Andrew swept ashore in Florida south of Miami and at the time set a new record for insured losses. As shown in figure 2, estimated losses from Andrew were about $30 billion, of which $15.5 billion was insured. Payments of claims stemming from Andrew reduced the capital of affected insurance companies and sharply reduced their capacity to issue new policies. Some of Florida’s largest homeowner insurance companies had to be rescued by their parent companies and others had to tap their surpluses to pay claims. Eleven property-casualty insurance companies went into bankruptcy. In January 1994, an earthquake occurred about 20 miles northwest of downtown Los Angeles in the Northridge area of the San Fernando Valley. Also shown in figure 2, estimated losses from the Northridge earthquake were about $30 billion, of which approximately $12.5 billion was insured. Earthquake insurance coverage availability declined precipitously after the Northridge earthquake. 
Losses from the Kobe, Japan, earthquake and the September 11, 2001, terrorist attack on the World Trade Center also are included in figure 2 to illustrate the global nature of the insurance capacity problem and to provide perspective on the size of losses. For many individuals and organizations, insurance is the most practical and effective way of handling a major risk such as a natural catastrophe. By obtaining insurance, individuals and organizations spread risk so that no single entity receives a financial burden beyond its ability to pay. But catastrophic loss presents special problems for insurers in that large numbers of those insured incur losses at the same time. Reinsurance helps insurance companies underwrite large risks, limit liability on specific risks, increase capacity, and share liability when claims overwhelm the primary insurer’s resources. In reinsurance transactions, one or more insurers agree, for a premium, to indemnify another insurer against all or part of the loss that an insurer may sustain under its policies. Figure 3 illustrates traditional insurance, reinsurance, and retrocessional transactions. Reinsurance is a global business. According to RAA, almost half of all U.S. reinsurance premiums were paid to foreign reinsurance companies. Catastrophe reinsurance has experienced cycles in prices, both nationally and in specific geographic areas. Figure 4 presents a national reinsurance price index since 1989, which shows that, overall, reinsurance prices increased both before and after Hurricane Andrew and decreased after the Northridge earthquake. The price trend presented in figure 4 does not reflect the situations specific to Florida and California, where insurers refused to continue writing catastrophe coverage. In 1993, the Florida state legislature responded by establishing the Florida Hurricane Catastrophe Fund to provide reinsurance for insurance companies operating in Florida. 
Also, the Northridge earthquake raised serious questions about whether insurers could pay earthquake claims for any major earthquake. In 1994, insurers representing about 93 percent of the homeowners insurance market in California severely restricted or refused to write new homeowners policies. In 1996, the California state legislature responded by establishing the California Earthquake Authority (CEA) to sell earthquake insurance to homeowners and renters. Appendix III more fully discusses the mechanisms established by Florida and California to deal with the risks posed by such catastrophes. In one comprehensive study analyzing the pricing of U.S. catastrophe reinsurance, the authors concluded that a catastrophic event, such as a hurricane, reduced capital available to cover nonhurricane catastrophe reinsurance, thereby affecting reinsurance prices. This finding is consistent with the “bundled” nature of capital investment in traditional reinsurance (i.e., capital investors face both the risks associated with company management and the various perils covered by the insurance company). Therefore, the finding suggests that price and availability swings for catastrophe reinsurance covering one peril are affected by catastrophes involving all other perils. Given the cyclic nature of the reinsurance market, investors have incentives to look for alternative capital sources. Hurricane Andrew and the Northridge earthquake provided an impetus for insurance companies and others to find different ways of raising capital to help cover catastrophic risk and helped spur the development of risk-linked securities and other alternatives to traditional reinsurance. Catastrophe risk securitization began in 1992 with the introduction of index-linked catastrophe loss futures and options contracts by the Chicago Board of Trade (CBOT). For more information on catastrophe options, see Appendix II. 
Other risk-linked securities, especially catastrophe bonds, were created and used in the mid-1990s in the aftermath of Hurricane Andrew and the Northridge earthquake. During this time, traditional reinsurance prices were relatively high compared with other time periods. While the most direct means for insurance companies to raise capital in the capital market is issuing company stock, an investor in an insurance company’s stock is subject to the risks of the entire company. Therefore, an investor’s decision to purchase stock will depend on an assessment of the insurance company’s management, quality of operations, and overall risk exposures from all perils. In contrast, an investor in an indemnity-based, risk-linked security can face risk associated with the insurance company’s underwriting standards but does not take on the risk of the overall insurance (or reinsurance) company’s operations. The cost of issuing risk-linked securities, such as catastrophe bonds, includes the legal, accounting, and information costs that are necessary to issue securities and market them to investors who do not have contractual and/or business relationships with the insurance company receiving coverage. The market test for a securitized financial instrument, such as a catastrophe bond, depends, in part, on how well investors can evaluate the probability and severity of loss that may affect returns from the investment. However, the willingness of capital market investors to purchase instruments that securitize catastrophe risk, such as catastrophe bonds, and therefore the yields they will require, depends on a number of factors, including the investors’ capacity to evaluate risk and the degree to which the investment can facilitate diversification of overall investment portfolios. Demand for risk-linked securities by insurance and reinsurance company sponsors will depend, in part, on the basis risk faced and the ability of sponsors to hedge this basis risk. 
Although issuance of risk-linked securities has been limited, many of the catastrophe bonds issued to date have provided reinsurance coverage for catastrophe risk with the lowest probability and highest financial severity. Insurance industry officials we interviewed told us that their use of risk-linked securities has lowered the cost of some catastrophe protection. In addition, one official told us that the presence of risk-linked securities as a potential funding option has helped lower the cost of obtaining catastrophe protection covering low-probability, high-severity catastrophes from traditional reinsurers. According to the Swiss Reinsurance Company, in 2000, risk-linked securities represented less than 0.5 percent of worldwide catastrophe insurance and, according to estimates provided by Swiss Re and Goldman Sachs, between 1996 and August 2002, about $11 to $13 billion in risk-linked securities had been issued worldwide. As of August 2002, over 70 risk-linked securitizations had been done, according to Goldman Sachs. Risk-linked securities have covered perils that include earthquakes, hurricanes, and windstorms in the United States, France, Germany, and Japan. Catastrophe options offered by CBOT beginning in 1995 were among the first attempts to market risk-linked securities. The contracts covered exposures on the basis of a number of broad regional indexes that exposed insurers to basis risk, and trading in CBOT catastrophe options ceased in 1999 due to lower-than-expected demand (see app. II). Insurance companies and investment banks developed catastrophe bonds, and the bonds are offered through the SPRVs. Recent catastrophe bonds have been nonindemnity-based to limit moral hazard; therefore, they expose the sponsor to basis risk. The SPRVs are usually established offshore to take advantage of lower minimum required levels of capital, favorable tax treatment, and a generally reduced level of regulatory scrutiny. 
Currently most risk-linked securities are catastrophe bonds. Most catastrophe bonds issued to date have been noninvestment-grade bonds. Catastrophe bonds achieved recognition in the mid-1990s. They offered several advantages that catastrophe options did not, among them customizable offerings and multiyear pricing. Catastrophe bonds, to date, have been offered as private placements only to qualified institutional buyers. A catastrophe bond offering is made through an SPRV that is sponsored by an entity that may be an insurance or reinsurance company. The SPRV provides reinsurance to a sponsoring insurance or reinsurance company and is backed by securities issued to investors. The SPRVs are similar in purpose to the special purpose entities (SPE) that banks and other entities have used for years to obtain funding for their loans. These SPEs pay investors from principal and interest payments made by borrowers to the SPE. In contrast, the SPRVs that issue catastrophe bonds receive payments in three forms (premiums, principal, and interest); invest in securities; and pay investors in another form (interest). The SPRV returns the principal to the investor if the specified catastrophe does not occur. Figure 5 illustrates cash flows among the participants in a catastrophe bond. As shown in figure 5, the sponsoring insurance company enters into a reinsurance contract and pays reinsurance premiums to the SPRV to cover specified claims. The SPRV issues bonds or debt securities for purchase by investors. The catastrophe bond offering defines a catastrophe that would trigger a loss of investor principal and, if triggered, a formula to specify the compensation level from the investor to the SPRV. The SPRV is to hold the funds raised from the catastrophe bond offering in a trust in the form of Treasury securities and other highly rated assets. 
To avoid consolidation on the sponsor’s balance sheet, the trust also is to contain a minimum independent equity-capital investment of at least 3 percent of the SPRV’s assets, per GAAP. According to a rating agency official, the 3 percent equity capital is usually obtained from capital markets in the form of preferred stock. Typically, investors earn a return of the London Interbank Offered Rate (LIBOR) plus an agreed spread. The SPRV deposits the payment from the investor as well as the premium from the company into a trust account. The premium paid by the insurance or reinsurance company and the investment income on the trust account provide the funding for the interest payments to investors and the costs of running the SPRV. Under the terms of nonindemnity-based catastrophe bonds, for the sponsoring insurance company to collect part or all of the investors’ principal when the catastrophe occurs, an independent third party must confirm that the objective catastrophic conditions were met, such as an earthquake reaching 7.2 in moment magnitude as reported by the U.S. Geological Survey. Such nonindemnity bonds also allow the sponsor to continue to write new business without impacting the risk level of the bond and provide for faster reimbursement to the sponsor in the event of a catastrophe. The sponsor is exposed to basis risk because the claims on the investors’ principal might not fully hedge the sponsor’s actual catastrophe exposure. However, the sponsor has minimal credit risk—the risk of nonpayment in the event of the covered catastrophe—because the bond is fully collateralized. The SPRVs are usually established offshore—typically in Bermuda or the Cayman Islands—to take advantage of lower minimum required levels of capital, favorable tax treatment, and a generally reduced level of regulatory scrutiny. 
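The funding arithmetic described above—a reinsurance premium plus investment income on the trust account covering investor coupons of LIBOR plus a spread and the SPRV's running costs—can be sketched in a few lines. All figures and function names below are hypothetical and for illustration only; they are not drawn from any actual offering.

```python
# Hypothetical sketch of the SPRV funding arithmetic described above.
# All figures and function names are illustrative; they are not drawn
# from any actual catastrophe bond offering.

def annual_investor_coupon(principal, libor, spread):
    """Investors typically earn LIBOR plus an agreed spread on the principal at risk."""
    return principal * (libor + spread)

def sprv_is_funded(premium, trust_yield, principal, libor, spread, running_costs):
    """The reinsurance premium plus investment income on the trust account
    must cover the interest payments to investors and the costs of running
    the SPRV (assuming, for simplicity, that the whole principal is invested)."""
    income = premium + principal * trust_yield
    outgo = annual_investor_coupon(principal, libor, spread) + running_costs
    return income >= outgo

# A hypothetical $100 million issue: 2 percent LIBOR, 5 percent spread,
# a $6 million premium, and $500,000 in annual SPRV running costs.
coupon = annual_investor_coupon(100_000_000, 0.02, 0.05)  # about $7 million
funded = sprv_is_funded(6_000_000, 0.02, 100_000_000, 0.02, 0.05, 500_000)
```

If the specified catastrophe does not occur, the trust's principal is returned to investors; if it does, part or all of the principal is paid to the sponsor instead, as discussed below.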
Bond rating agencies, such as Fitch, Moody’s, and Standard & Poor’s, provide bond ratings that are based on their assessment of loss probabilities and financial severity. Some SPRVs have issued catastrophe bonds in tranches having more than one risk structure. The rating agencies rate the bonds according to expected loss. Catastrophe bonds issued to date have generally received noninvestment-grade ratings because investors face a higher risk of loss of their principal. The rating agencies rely, in part, on the risk assessments of three major catastrophe-modeling firms—the same firms are used by traditional reinsurers to help them understand catastrophe risk. These modeling firms rely on large computing capacity; sophisticated mathematical modeling techniques; and very large databases containing information on past catastrophes, population densities, construction techniques, and other relevant information to assess loss probabilities and financial severities. Catastrophe bond-offering statements to investors include rating information and the results from the catastrophe modeling. One example of a catastrophe bond is Redwood Capital I, Ltd., which is linked to California earthquakes. Lehman Re, a reinsurance company, is the sponsor of the bond. Due to the catastrophe bond structure, investors are exposed to potential loss of principal of about $160 million. The contract provides insurance for 12 months beginning January 1, 2002, covering specified earthquake losses to property in California. The interest rates promised on the principal-at-risk variable rate notes and preference shares are LIBOR plus 5.5 percent and LIBOR plus 7 percent, respectively. Investor losses are tied to the Property Claim Services (PCS) index, an indicator of insured property losses for catastrophes. 
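An industry-loss index trigger of this kind can be sketched in a few lines. The attachment and exhaustion points below come from the Redwood Capital I example ($22.5 billion and $31.5 billion in PCS-estimated industry losses); the linear pro-rata payout schedule between those points is a common structure assumed here for illustration and is not taken from the actual offering.

```python
# Sketch of an industry-loss index trigger. The $22.5 billion-$31.5 billion
# range is from the Redwood Capital I example; the linear payout schedule
# between those points is an assumption for illustration.

ATTACHMENT = 22.5e9  # industry loss at which investor principal begins to be lost
EXHAUSTION = 31.5e9  # industry loss at which all principal at risk is lost

def principal_paid_to_sponsor(pcs_industry_loss, principal_at_risk):
    """Portion of investor principal paid to the sponsor, scaled linearly
    between the attachment and exhaustion points."""
    if pcs_industry_loss <= ATTACHMENT:
        return 0.0
    fraction = min(1.0, (pcs_industry_loss - ATTACHMENT) / (EXHAUSTION - ATTACHMENT))
    return fraction * principal_at_risk

# Hypothetical PCS estimates against the $160 million of principal at risk:
principal_paid_to_sponsor(20e9, 160e6)  # below attachment: investors keep all principal
principal_paid_to_sponsor(27e9, 160e6)  # halfway through the layer: half the principal
principal_paid_to_sponsor(35e9, 160e6)  # beyond exhaustion: the full principal
```

Because the payout depends on the industry index rather than the sponsor's own claims, a schedule like this is the source of the basis risk discussed elsewhere in this report.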
The issuer provides reinsurance coverage for the earthquake peril in California to Lehman Re, the sponsor, for triggering events causing industry losses that range from $22.5 billion to $31.5 billion as estimated by PCS. Proceeds from the issuance of the securities are to be deposited into a collateral account and invested in securities that are guaranteed or insured by the U.S. government and in highly rated commercial paper and other securities. The securities have been offered only to qualified institutional buyers as defined by SEC Rule 144A. Moody’s rated the bond a Ba2 (i.e., a noninvestment-grade bond rating) on the basis of the determination that it is comparable to a Ba2-rated conventional bond of similar duration. The rating took into account the risk analysis of a catastrophe-modeling firm. We identified and analyzed regulatory, accounting, tax, and investor issues that might affect the use of risk-linked securities. Our analysis included (1) current accounting treatment of risk-linked securities and proposed changes to accounting treatment, (2) potential changes in equity requirements for the SPRVs, (3) a preliminary tax proposal by insurance industry representatives to encourage domestic issuance of catastrophe bonds by creating “pass-through” tax treatment, and (4) reasons for limited investor participation in risk-linked securities. Under certain conditions, NAIC’s Statutory Accounting Principles allow an insurance company that obtains reinsurance to reflect the transfer of risk (effected by the purchase of reinsurance) on the financial statement it files with state regulators. These regulatory requirements are designed to ensure that a true transfer of risk has occurred and the reinsurance company will be able to pay any claims. In receiving “credit” for reinsurance, an insurance company may count the payments owed it from the reinsurance company on claims it has paid as an asset or as a deduction from liability. 
In doing so, a company can increase earnings reported on its financial statement and lower the amount of capital it needs to meet risk-based capital requirements established by regulators. The ability to record an asset or to take a deduction from gross liability for reinsurance depends on the transfer of risk and can strongly affect an insurance company’s financial condition. Traditional reinsurance pays off on an indemnity trigger—that is, payment is based on the actual claims incurred by the insurance company. Some risk-linked securities have also provided payments from principal on an indemnity basis, and, under insurance accounting principles, these risk-linked securities have enabled the SPRVs to provide reinsurance that has received what is called “underwriting accounting treatment,” thereby allowing the SPRV sponsor to gain credit for reinsurance. In other cases, recovery under a catastrophe bond may not be indemnity based and may rely on a financial model of the insured claims of the insurance company rather than on the actual claims of the company. In these cases, there is a risk that the modeled claims will not equal the insurance company’s actual claims, and the financial model may produce a recovery that is less than or greater than the company’s incurred claims. Current accounting guidance requires that the contract must indemnify the company against loss or liability associated with insurance risk in order to qualify for reinsurance accounting. However, NAIC is currently reconsidering the appropriate statutory accounting treatment of nonindemnity-based insurance, which would include risk-linked securities. Both exchange-traded instruments and over-the-counter instruments can be used to hedge underwriting results (i.e., to offset risk). 
The triggering event on a risk-linked contract must be closely related to the insurance risks being hedged so that the payoff is expected to be consistent with the expected claims, even though some basis risk may still exist. This correlation is known as “hedge effectiveness.” NAIC is currently considering how hedge effectiveness should be measured. Should NAIC adopt a hedge-effectiveness measure, statutory insurance accounting standards could be changed so that a fair value of the contract could be calculated and recognized as an offset to insurance losses, hence allowing a credit to the insurer similar to that granted for reinsurance. If nonindemnity-based risk-linked securities are accepted by insurance regulators as an effective hedge of underwriting results, such acceptance could make these contracts more appealing to insurance companies by providing treatment similar to that afforded traditional reinsurance. Nevertheless, it is important yet difficult for U.S. insurance regulators to develop an effective measure to account for risk reduction for nonindemnity-based coverage so that insurance company reporting on both risk evaluation and capital treatment properly reflects the risk retained. Appendix IV contains a discussion of credit for reinsurance accounting treatment and the balance sheet implications of such treatment. An SPE is created solely to carry out an activity or series of transactions directly related to a specific purpose. The use of an SPE (or more specifically an SPRV) in a catastrophe bond securitization transaction involves a number of complex financial accounting issues in the United States. Current FASB guidance generally provides that the sponsor of an SPE report all assets and liabilities of the SPE in its financial statements, unless all of the following criteria are met: 1. The independent third-party owners’ investment in the SPE is at least 3 percent of the SPE’s total debt and equity or total assets. 2. 
The independent third-party owner(s) has a controlling financial interest in the SPE (generally meaning that the owner holds more than 50 percent of the voting interest of the SPE). 3. Independent third-party owners must possess the substantive risks and rewards of their investment in the SPE (generally meaning that the owner’s investment and potential return are “at risk” and not guaranteed by another party). In response to issues arising from Enron’s use of SPEs, FASB is currently considering a new approach to accounting for SPEs. The new FASB interpretation would require the primary beneficiary of an SPE to consolidate (list assets and liabilities of) the SPE in its financial statements, unless the SPE has “economic substance” sufficient not to be consolidated; that is, the SPE would have to have the ability to fund or finance its operations without assistance from or reliance on any other party involved in the SPE. In turn, the SPE would have that ability if it had independent third-party owners who have substantive voting equity investment at risk, exposure to variable returns, and the ability to make decisions and manage the SPE’s activities. The proposed interpretation presumes that substantive equity investment in an SPE should be at least 10 percent of the SPE’s total assets throughout the life of the SPE. Therefore, according to information provided by FASB, many existing SPEs would probably be consolidated on the sponsors’ financial statements under the new requirement. The potential revision for equity requirements is intended to improve transparency in capital markets. According to rating agency officials, the current 3 percent independent equity requirements in recent catastrophe bond transactions have been met by issuing preferred stock. Our work did not determine the extent to which the 3 percent independent equity requirement is currently being met by the insurance industry. 
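The two equity thresholds discussed—the 3 percent level under current guidance and the 10 percent presumption in the proposed interpretation—amount to a simple ratio test, sketched here with hypothetical figures.

```python
# Sketch of the independent-equity ratio test; the SPRV size and equity
# amounts are hypothetical.

def meets_equity_threshold(independent_equity, total_assets, threshold=0.03):
    """True if independent third-party equity is at least the given share
    of the SPE's total assets (3 percent under current guidance)."""
    return independent_equity / total_assets >= threshold

# A hypothetical $200 million SPRV with $7 million of independent preferred stock:
meets_equity_threshold(7e6, 200e6)                  # passes the current 3 percent test
meets_equity_threshold(7e6, 200e6, threshold=0.10)  # fails the proposed 10 percent presumption
```

An SPRV structured to clear the 3 percent level would thus need more than three times as much outside equity to avoid consolidation under the proposed presumption, which is the cost concern raised by bond market representatives below.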
Bond market representatives told us that the proposed FASB equity requirements also have the potential to create a substantial hurdle to structuring catastrophe bond SPEs because few investors would be willing to purchase preferred shares, given the difficulty of understanding the risks. These representatives argue that risk-linked securitizations are different from other securitizations using SPEs because the insurer does not control the funds held by the SPE and therefore should not be subject to the new 10 percent equity investment requirement. The proposed new FASB interpretation also considers who bears the largest potential risks of the SPE when determining whether to consolidate with the primary beneficiary. If the primary beneficiary would bear the largest dollar loss should the SPE fail, then consolidation with the primary beneficiary would be required. According to one FASB representative, one issue that needs to be considered is whether the insurer or the investors should be responsible for reporting or consolidating the assets and liabilities of the SPE in financial statements, depending on who bears the largest potential risks of the SPE. If an insurer must consolidate the assets and liabilities of the SPE onto its own balance sheet, the insurer will also lose part of the benefit of the reinsurance contract that it enters into with the SPE. While the proposed guidance is intended to improve financial transparency in capital markets, it could also increase the cost of issuing catastrophe bonds and make them less attractive to sponsors. If the proposed rule were implemented, sponsors might turn to risk-linked securities that do not require an SPE, such as catastrophe options. NAIC is concerned that offshore SPRVs reduce economic efficiency and limit the oversight ability of insurance regulators. 
To further encourage the use of onshore SPRVs, NAIC’s working group on securitization has interacted with a group of insurance industry representatives that is considering how to structure a legislative proposal to make onshore SPRVs tax-exempt entities. The SPRVs have been established in offshore tax haven jurisdictions, where the SPRV itself is not subject to any income or other tax; the SPRVs also usually operate in a manner intended to help ensure that they avoid U.S. taxation by conducting most activities outside of the United States. Taxation of the U.S. holders of SPRV-issued securities depends upon whether the securities are characterized as debt or equity. This characterization in turn depends upon a number of factors, including the likelihood of loss of principal, the relative degree of subordination of the instrument in the SPRV’s capital structure, and the accounting treatment of the instrument. Although almost all SPRVs have been established offshore, there has been interest in facilitating the creation of onshore transactions because it is argued that onshore SPRVs would lessen transactional costs and afford regulators greater scrutiny of the SPRVs’ activities. NAIC has already approved a model state insurance law that allows for the creation of an onshore SPRV. Under the model law, an onshore SPRV would be an entity domiciled in and organized under state law for a limited purpose. Insurance regulators’ scope of authority would be limited for the SPRVs, which would be required to be minimally capitalized, and the domiciliary state’s laws on insolvency would apply to the SPRV. However, it is likely that the onshore SPRV would be subject to federal income taxation, making the transaction more expensive. 
Currently, the industry representatives are considering using a structure that would receive tax treatment similar to the treatment received by an issuer of asset- or mortgage-backed securities. Issuers of asset-backed securities are generally not subject to tax on the income from underlying assets as it passes through the issuer to the investors in the securities. It would not be economical for an SPE to issue an asset-backed security if the SPE incurred material tax costs on the payments collected and paid over to the investors as taxable income. Securitizations address the problem of taxes in one of two ways: First, if an asset-backed security is considered debt for tax purposes, deductions are allowed for the interest expense, and the tax burden is shifted to the investors. Second, if the securities are not classified as debt, tax is avoided by treating the SPE as a pass-through entity with income allocated and taxed to its owners. The current proposal by the industry representatives would create a structure similar to a Real Estate Mortgage Investment Conduit (REMIC) or a Financial Asset Securitization Investment Trust (FASIT). REMICs and FASITs are pools of real property mortgages or debt instruments that issue multiple classes, or tranches, of securities that allocate financial payments among investors. The REMIC and FASIT legislation adopts two approaches to avoiding an issuer tax: It treats the issuer as a pass-through entity and classifies regular interests as debt for purposes of allowing an interest deduction to the issuer. The proposal would mimic REMICs and FASITs by providing pass-through treatment for the onshore SPRV and ensuring that the regular payments in the SPRV are classified as debt. To the extent that domestic SPRVs gained business at the expense of taxable entities, the federal government could experience tax revenue losses. The statutory and regulatory requirements used to implement any such legislation would also affect tax revenue. 
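The economics behind pass-through treatment can be illustrated with a deliberately simplified, hypothetical comparison; the 35 percent rate and dollar figures below are invented, and deductions and investor-level taxes are ignored.

```python
# Hypothetical illustration of entity-level tax versus pass-through treatment.
# The tax rate and amounts are invented; investor-level taxes are ignored.

def investor_receipts(collected_payments, entity_tax_rate):
    """Cash reaching investors after any entity-level tax on the issuer."""
    return collected_payments * (1 - entity_tax_rate)

taxable_issuer = investor_receipts(10_000_000, 0.35)  # about $6.5 million reaches investors
pass_through = investor_receipts(10_000_000, 0.0)     # the full $10 million reaches investors
```

The gap between the two figures is why, as noted above, it would not be economical for an SPE to issue such securities if it incurred material tax costs on the payments it collects and passes on.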
Expanded use of catastrophe bonds might occur with favorable implementing requirements, but such legislative actions may also create pressure from other industry sectors for similar tax treatment. Also, some elements of the insurance industry believe that any consideration of changes to the tax treatment of domestic SPRVs would have to take into account the taxation of domestic reinsurance companies. Domestic reinsurance companies are taxed under the special rules of Subchapter L of the Internal Revenue Code. Under these rules, all insurance companies are taxed as corporations. Premiums earned by a domestic reinsurance company, after deducting premiums paid for retrocessional insurance coverage, are taxable. Investment income earned by the reinsurer is also taxable. A ceding commission paid by a reinsurer to an insurer to cover costs, including marketing and sale of the premium, is taxable to the ceding insurance company. However, many reinsurers are either incorporated offshore or are affiliated with companies created offshore to take advantage of reduced levels of taxation. Payments to an offshore reinsurer may be subject to an excise tax. In addition, because of the potential for abuses, the Secretary of the Treasury has special statutory authority to reallocate deductions, assets, and income between unrelated parties when a reinsurance transaction has a significant tax avoidance effect. RAA officials expressed concerns about the impact of NAIC’s model act creating an onshore SPRV. RAA objects to both the special regulatory treatment in the model act and the tax advantages proposed for the onshore SPRV. RAA argues that the NAIC model act creates a new class of reinsurer that will operate under regulatory and tax advantages not afforded to existing U.S. licensed and taxed reinsurance companies. 
RAA maintains that the SPRV will act as a reinsurer and yet not be subject to insurance regulation, thus endangering solvency regulation and creating an uneven playing field for reinsurers. Catastrophe bonds have not attracted a wide range of investors beyond institutional investors. Investor participation in risk-linked securities is limited in part because the risks of these securities are difficult to assess. Investment bank representatives and investment advisors we interviewed noted that catastrophe bonds have thus far been issued only to sophisticated institutional investors and a small number of large investment fund managers for inclusion in bond portfolios that include noninvestment-grade bonds. Most catastrophe bonds carry noninvestment-grade bond ratings from the rating agencies, but a low rating by itself has not been a barrier to active investor interest in other types of bonds, such as corporate bonds. The investment fund managers told us that catastrophe bonds comprise 3 percent or less of the portfolios in which they are included. On the one hand, the managers like the diversification aspects of catastrophe bonds because the risks are generally uncorrelated with the credit risks of other parts of the bond portfolio. On the other hand, managers stated that they have concerns about the limited liquidity and track record of catastrophe bonds as well as the lack of in-house expertise to understand the perils, indexes, and other features of the bonds. As requested, we explored the potential for individual investors to purchase shares in mutual funds that purchase catastrophe bonds for inclusion with other securities in a mixed asset fund. We analyzed the SEC rules governing catastrophe bond issuance and mutual fund composition and confirmed with SEC that these rules and regulations do not preclude mutual funds from purchasing catastrophe bonds. 
One of the investment advisors we interviewed told us that his firm included a small amount of catastrophe bonds in mutual funds sold to the public. However, a mutual fund industry association official told us that the mutual fund companies that the association surveyed—including three of the largest—have not included catastrophe bonds in funds available to individual investors because the companies lack the capacity to evaluate the risks. The mutual fund industry official also raised the issue of whether the risk associated with risk-linked securities would be appropriate or suitable for investments by a broad range of investors, including moderate-income investors. We received written comments on a draft of this report from NAIC, RAA, and BMA. We also obtained technical comments from Treasury, SEC, CFTC, NAIC, RAA, and BMA that have been incorporated where appropriate. NAIC commented that it supports developing alternative sources of reinsurance capacity, the securitizing of catastrophic risk within the United States, and subjecting SPRVs to U.S. insurance regulation. As stated in our report, a group of insurance industry representatives interacting with NAIC’s working group on securitization is considering how to structure a legislative proposal to make the onshore SPRV a tax-exempt entity. Our report also indicates that such legislation also could result in tax revenue losses and other potential costs. NAIC stated that SPRVs, however, would be subject to onshore supervision by U.S. regulators, but it is not clear to us how risk-linked securities would actually be regulated once brought onshore. RAA commented that our report provides an excellent summary on the use of risk-linked securities in providing coverage for catastrophes. However, RAA took exception to (1) our characterization of reinsurance industry capacity and (2) our description of risk-linked securities as an alternative to reinsurance. 
RAA noted that in recent occurrences of major catastrophic events in the United States, insurers and reinsurers had sufficient capital to meet their obligations and added that most of the California and Florida market was underwritten by insurers that relied very little, if at all, on reinsurance capacity. First, we note that while the reinsurance industry has been able to meet its obligations from recent events with existing capacity, the industry’s capacity must be considered along with issues related to (1) the price and availability of catastrophic reinsurance in high-risk areas and (2) its ability to handle multiple, sequential catastrophes. Some insurers who self-reinsure might do so partially because they believe that the price of reinsurance to cover their exposure to catastrophic events is not attractive. Second, RAA asked that we characterize risk-linked securities as a supplement to reinsurance rather than as an alternative because of the relatively small amount of reinsurance coverage currently provided through risk-linked securities. We agree, and our report states that risk-linked securities add to or supplement reinsurance capacity, but we also note that sponsors of catastrophe bonds view these securities as alternatives to traditional reinsurance when they are more cost-effective. BMA stated that our report was accurate and well-researched and commented on several policy issues raised in the report. Their letter raised several concerns with our discussion of tax treatment, accounting treatment, and investor interest in risk-linked securities. First, BMA disagreed with concerns cited in our report that pass-through tax treatment for risk-linked securities could result in (1) tax revenue losses and (2) regulatory and tax advantages that are not afforded to existing U.S.-licensed and taxed reinsurance companies. 
BMA commented that because a large percentage of entities that provide reinsurance coverage is based outside of the United States, including all reinsurance companies established since September 11, 2001, the tax impact would not be dramatic. In addition, BMA noted that any potential loss of U.S. tax revenue must be weighed against the policy benefits associated with creating additional private-sector capacity to absorb and distribute insurance risk. We agree that many reinsurance entities are not U.S.-based, but the potential tax revenue losses would depend on a number of factors, including business lost by taxable entities and the regulatory requirements used to implement such legislation. We also agree that many considerations must be weighed in the policy decision to grant special tax treatment for onshore SPRVs, including potential tax revenue losses and the extent to which an uneven playing field is created for domestic reinsurance companies. Second, BMA commented that our description of FASB’s SPE consolidation proposal was not based on the final exposure draft and that they interpret the proposal to allow SPRVs to apply only a variable interests approach and not satisfy a particular outside equity threshold. Our draft report discussion of the FASB proposal was based on the final exposure draft. While we did not evaluate BMA’s interpretation of the FASB proposal, we included their position in our report. Finally, BMA commented that our discussion of reasons for the lack of broader investor participation in risk-linked securities was incomplete and somewhat inaccurate. They noted that several mutual funds have purchased risk-linked securities as part of their overall portfolios, that mutual fund managers are well-equipped to evaluate the risk associated with these securities, and that lack of broader investor participation may be due to limited issuance. 
We agree that some mutual funds have purchased risk-linked securities and that lack of broader participation may be attributed to some degree to limited issuance of risk-linked securities. However, information we obtained indicates that some of the largest mutual fund companies did not include risk-linked securities in their mutual fund portfolios mainly because of their unusual and unfamiliar risk characteristics. Unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Ranking Minority Member of the House Committee on Financial Services and the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Ways and Means. We also will make copies available to others upon request. In addition, this report will be available for no charge on GAO’s Internet home page at http://www.gao.gov. Please contact Bill Shear, Assistant Director, or me at (202) 512-8678 if you or your staff have any questions concerning this report. Key contributors to this work were Rachel DeMarcus, Lynda Downing, Patrick Dynes, Christine Kuduk, and Barbara Roesmann. You asked us to report on the potential for risk-linked securities to cover catastrophic risks arising from natural events. As agreed with your office, our objectives were to (1) describe catastrophe risk and how insurance and capital markets provide for insurance against such risks; (2) describe how risk-linked securities, particularly catastrophe bonds, are structured; and (3) analyze how key regulatory, accounting, tax, and investor issues might affect the use of risk-linked securities. 
Even though we did not have audit or access-to-records authority with the private-sector entities, we obtained extensive documentary and testimonial evidence from a large number of entities, including insurance and reinsurance companies, investment banks, institutional investors, rating agencies, firms that develop models to analyze catastrophic risks, regulators, and academic experts. However, we did not verify the accuracy of data provided by these entities. Some entities we met with voluntarily provided information they considered to be proprietary, and therefore we did not report details from such information. In other cases, companies decided not to provide proprietary information, and this limited our inquiry. For example, we did not obtain any reinsurance contracts representing either traditional reinsurance or reinsurance provided through issuance of risk-linked securities. To describe catastrophe risk and how insurance and capital markets provide for insurance against such risks, we examined a variety of documents, including books on insurance and reinsurance; academic articles and essays; and analyses done by the Insurance Information Institute, the Insurance Services Office, modeling firms, and the Congressional Budget Office. We also interviewed officials from insurance companies, reinsurance companies, the California Earthquake Authority (CEA), the Florida Hurricane Catastrophe Fund (FHCF), modeling firms, and university finance departments and schools. To describe how risk-linked securities, particularly catastrophe bonds, are structured, we examined catastrophe bond-offering circulars, investment bank documents, reinsurance company analyses, rating agency reports, academic studies, futures exchange documents, and analyses prepared by the American Academy of Actuaries. 
We also met with officials of investment banks, insurance companies, reinsurance companies, rating agencies, modeling firms, a futures exchange, investment advisors, and the American Academy of Actuaries. To analyze how key regulatory, accounting, tax, and investor issues might affect the use of risk-linked securities, we examined a variety of documents, including books on insurance accounting and taxation, the Financial Accounting Standards Board’s (FASB) proposed consolidation principles for special-purpose entities, accounting firm publications, the National Association of Insurance Commissioners’ (NAIC) Statutory Accounting Principles, and the proceedings of NAIC’s Working Group on Securitization. We met with officials from many organizations, including NAIC’s Working Group on Securitization, the Bond Market Association (BMA), the Reinsurance Association of America, the Investment Company Institute—a mutual fund company association—and FASB. We also met with officials from the Securities and Exchange Commission (SEC), the Commodity Futures Trading Commission (CFTC), and the Department of the Treasury (Treasury). We faced a number of limitations in our work. We did not verify the accuracy of data provided by the various entities we contacted. While we obtained publicly available data on U.S. reinsurance prices, we could not obtain information to assess the reliability of the price data or the methodology used to construct the reported price index. We obtained offering statements for some catastrophe bond offers. However, we could not determine whether the offering statements were representative of the universe of catastrophe bond offers, and we relied on summary information on the various offers provided to us by bond rating agencies. 
We also faced limitations in identifying the specific financing arrangements made to provide independent capital investments to special purpose reinsurance vehicles (SPRVs) used to avoid consolidation on the sponsor’s balance sheet. In addition, without access to reinsurance contracts, we could not determine the extent to which insurance and reinsurance companies received credit for reinsurance, including those companies that relied, in part, on risk-linked securities to transfer catastrophe risk. Although we identified factors that industry and capital markets experts believe might cause the use of risk-linked securities to expand or contract, it was not within the scope of our work to forecast increased or reduced future use of these securities—either under current accounting, regulatory, and tax policies or under changed policies. It also was not within the scope of our work to take a position on whether the increased use of risk-linked securities is beneficial or detrimental. We conducted our work between October 2001 and August 2002 in Washington, D.C.; Chicago, Ill.; New York, N.Y.; and various locations in California and Florida, in accordance with generally accepted government auditing standards. Catastrophe options were offered by the Chicago Board of Trade (CBOT) beginning in 1995. These options contracts were among the first attempts to market natural disaster-related securities. Catastrophe options offered the advantage of standardized contracts with low transaction costs traded over an exchange. Specifically, the purchaser of a catastrophe option paid the seller a premium, and the seller provided the purchaser with a cash payment if an index measuring insurance industry catastrophe losses exceeded a certain level. If the catastrophe loss index remained below a specified level for the prescribed time period, the option expired worthless, and the seller kept the premium. 
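The payoff just described can be written as a simple function; the strike, payment-per-point, and premium values below are hypothetical.

```python
# Sketch of a catastrophe call option's payoff to the purchaser, as described
# above. Strike, payment-per-point, and premium values are hypothetical.

def buyer_net_payoff(index_value, strike, payment_per_point, premium):
    """The buyer receives a cash payment if the catastrophe loss index exceeds
    the strike; otherwise the option expires worthless and the seller keeps
    the premium."""
    payout = max(0.0, index_value - strike) * payment_per_point
    return payout - premium

buyer_net_payoff(120, 100, 200.0, 1_500.0)  # index above the strike: buyer nets 2,500.0
buyer_net_payoff(80, 100, 200.0, 1_500.0)   # expires worthless: buyer loses the 1,500.0 premium
```

Because the payment depends on the industry index rather than the buyer's own claims, the hedge carries the basis risk for purchasers discussed below.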
The option might have been purchased by an insurance company that wanted to hedge its catastrophe risk and might have been sold by firms that would do well in the event of a catastrophe—for example, homebuilders—or by investors looking for a chance to diversify outside of traditional securities markets. Catastrophe option contracts were revised several times and covered exposures on national, regional, and state bases. On the one hand, because the payouts on the contracts were based on an index of insurance industry catastrophe losses, the transactions did not expose the investor to moral hazard or adverse selection risk. The indexes used were the Property Claim Services (PCS) catastrophe loss indexes. On the other hand, the contracts created basis risk for purchasers—the differences in the claim patterns between an individual insurer’s portfolio and the industry index. The options were designed to offer minimal credit risk because the CBOT clearinghouse guaranteed the transactions. However, low trading volumes on options also raised questions about liquidity risk. Trading in CBOT catastrophe options ceased in 1999 due to lower-than-expected demand; CBOT delisted catastrophe options in 2000. The insurance markets in California and Florida illustrate the difficulties that the catastrophe insurance industry has faced nationally. Because California and Florida are markets with high catastrophe risk, these states have developed programs to increase insurer capacity in these markets. The 1994 Northridge earthquake raised serious questions about whether insurers could pay earthquake claims for any major earthquake. In 1994, insurers representing about 93 percent of the homeowners insurance market in California severely restricted or refused to write new homeowner policies because the insurers grew concerned that another earthquake would exhaust their resources. Florida experienced a similar insurance crisis after Hurricane Andrew in 1992.
In response, Florida created two organizations, discussed later in this report, to provide primary insurance coverage and additional reinsurance capacity. In 1996, the California legislature established CEA as a privately funded and publicly managed entity to help residents protect themselves against earthquake loss. CEA sells earthquake insurance to homeowners, including condominium owners and renters. Insurers doing business in California must offer earthquake insurance in their homeowners insurance policies, whether a CEA policy or their own. The basic CEA policy carries a deductible of 15 percent on the home’s insured value, provides up to $5,000 to replace contents and personal possessions, and up to $1,500 for emergency living expenses. In 2001, the average policy for a house cost $560, but costs were several times higher in areas with high seismic risk. While companies must offer earthquake insurance, there is no state requirement that consumers purchase earthquake insurance or that mortgage lenders require it. About 16 percent of California residences had earthquake insurance at the end of 2001, and CEA insured 65 percent of those with earthquake insurance. As of January 2002, CEA had more than 814,000 policies and a claims paying capacity of more than $7 billion against an exposure from all policies of about $175 billion. CEA’s claims paying capacity consisted of layers of capital, insurance company assessments, reinsurance, and a line of credit. Recent external and internal reviews—conducted by the California State Auditor, CEA staff, and others—of CEA’s finances have focused on its claims paying capacity. The common concern of these reviews has been the heavy dependence on the reinsurance market—some 40 percent of CEA’s $7.2 billion claims paying capacity. Reviewers have recommended that some of CEA’s claims paying capacity be converted to catastrophe bonds. Such a conversion would make CEA the largest catastrophe bond issuer in the world.
As shown in figure 6, CEA is currently exploring catastrophe bond placements on two layers for $400 million and $338 million. Recently the CEA’s Governing Board decided not to support CEA issuance of catastrophe bonds because catastrophe bonds are done in offshore tax havens. A CEA official told us that the Governing Board would revisit the issue when catastrophe bonds can be done onshore. Following Hurricane Andrew in 1992, there was a property insurance crisis, and the Florida state legislature created two organizations to provide coverage and additional capacity—the Florida Residential Joint Underwriting Association (JUA) and the FHCF. JUA provides residential coverage in specifically designated areas that are most vulnerable to windstorm damage. Qualified recipients are property owners who could not obtain coverage from private insurers after Hurricane Andrew. The JUA had 68,000 policyholders and an $11 billion exposure as of January 2001. Rates charged by the JUA in each county must be at least as high as the highest rate charged by the 20 largest private insurance companies in Florida. The JUA’s capacity to pay claims was $1.9 billion as of January 2001; claims would be paid by drawing down its surplus, private reinsurance, assessments of members, pre-event notes, a line of credit, and reimbursements from the state’s catastrophe fund. In March 2002, the Florida legislature approved a plan to merge JUA with the Florida Windstorm Underwriting Association (FWUA), thereby forming an organization called the Citizen’s Property Insurance Corporation. The FHCF was created as a source of reinsurance capacity to supplement what was available from private sources. The FHCF is run by Florida and was set up to encourage insurers to stay in the Florida marketplace in the aftermath of Hurricane Andrew, when reinsurance became more difficult to obtain. The FHCF reimburses insurers for a portion of their claims from future severe hurricanes. 
Unlike California, where catastrophe coverage is voluntary, Florida homeowners’ policies must include hurricane coverage. The FHCF is the world’s largest hurricane reinsurer, and Florida’s two residential pools (JUA and FWUA) and private insurers depend on it. Participation by the state’s insurers is mandatory, but insurers may choose different levels of coverage (45 percent, 75 percent, or 90 percent) above a high retention or deductible level for the participating insurers. The fund is financed by (1) premiums paid by about 260 property insurers doing business in the state, assessed on the basis of their exposure to hurricane loss, and (2) bonding secured by emergency assessments on other insurers. If the FHCF cash balance is not sufficient to reimburse covered losses, it can issue tax-exempt revenue bonds, which are financed by an emergency assessment of all property-casualty insurers excluding workers’ compensation writers. Premiums paid relative to coverage purchased are significantly below those in the private sector. The FHCF’s capacity is currently $11 billion against an exposure of over $1 trillion. The $11 billion capacity comprises approximately $4.9 billion in cash and $6.1 billion in borrowing capacity. FHCF is also exempt from federal income tax. Although no major claims have occurred since Hurricane Andrew, the FHCF is designed to handle a $16.3 billion ground-up residential property loss, which would include its $11 billion current capacity limit along with an aggregate insurance industry retention of $3.8 billion and an aggregate copayment by insurers of about $1.5 billion. Florida has not announced plans to use risk-linked securities to address capacity issues. Over the term of insurance policies, premiums that an insurance company collects are expected to pay for any insured claims and operational expenses of the insurer while providing the insurance company with a profit.
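The FHCF capacity figures reported above can be checked with simple arithmetic; the sketch below uses the dollar amounts cited in this section, expressed in millions to keep the sums exact.

```python
# FHCF claims-paying capacity, as reported in this section
# (amounts in millions of dollars so the arithmetic stays exact).
cash = 4_900                  # approximately $4.9 billion in cash
borrowing = 6_100             # approximately $6.1 billion in borrowing capacity
capacity = cash + borrowing
assert capacity == 11_000     # the reported $11 billion capacity

# The fund is designed to handle a $16.3 billion ground-up residential
# property loss: its capacity plus the insurance industry's aggregate
# retention and aggregate copayment.
industry_retention = 3_800
industry_copayment = 1_500
ground_up_loss = capacity + industry_retention + industry_copayment
assert ground_up_loss == 16_300
```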
The amount of projected claims that a single insurance policy may incur is estimated on the basis of the law of averages. An insurance company can obtain indemnification against claims associated with the insurance policies it has issued by entering into a reinsurance contract with another insurance company, referred to as the reinsurer. The original insurer, referred to as the ceding company, pays an amount to the reinsurer, and the reinsurer agrees to reimburse the ceding company for a specified portion of the claims paid under the reinsured policy. Reinsurance contracts can be structured in many different ways. Reinsurance transactions over the years have increased in complexity and sophistication. Reinsurance accounting practices are influenced not only by state insurance departments through NAIC, but also by SEC and FASB. If an insurer or reinsurer engages in international insurance, both government regulatory requirements and accounting techniques will vary widely among countries. Statutory Accounting Principles promulgated by NAIC allow an insurance company that obtains reinsurance to reflect the transfer of risk for reinsurance on the financial statements that it files with state regulators under certain conditions. The regulatory requirements for allowing credit for reinsurance are designed to ensure that a true transfer of risk has occurred and any recoveries from reinsurance are collectible. By obtaining reinsurance, ceding companies are able to write more policies and obtain premium income while transferring a portion of the liability risk to the reinsurer. Under many reinsurance contracts, a commission is paid by the reinsurer to the ceding company to offset the ceding company’s initial acquisition cost, premium taxes and fees, assessments, and general overhead. 
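The balance-sheet mechanics of a cession with a ceding commission can be sketched numerically. The helper function below is illustrative (not an accounting prescription) and uses the $10 million premium and 20 percent ceding commission figures discussed in this report.

```python
def cede_with_commission(premiums_ceded, commission_rate):
    """Return (net payment to the reinsurer, ceding commission) for a
    cession with a negotiated ceding-commission rate."""
    commission = premiums_ceded * commission_rate
    net_payment = premiums_ceded - commission
    return net_payment, commission

# $10 million of premiums ceded with a 20 percent ceding commission.
net_payment, commission = cede_with_commission(10_000_000, 0.20)
assert net_payment == 8_000_000   # cash actually paid to the reinsurer
assert commission == 2_000_000    # recorded by the ceding company as income

# Balance-sheet effect for the ceding company: assets fall by the $8 million
# paid, the unearned-premium liability falls by the full $10 million ceded,
# so equity rises by the $2 million commission.
equity_change = 10_000_000 - net_payment
assert equity_change == commission
```

The reinsurer books the mirror image of these entries, assuming the $10 million liability in exchange for the $8 million net premium.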
For example, if an insurer would like to receive reinsurance for $10 million and negotiates a 20 percent ceding commission, then the insurer will be required to pay the reinsurer $8 million ($10 million premiums ceded, less $2 million ceding commission income). The effect of this transaction is to reduce the ceding company’s assets by the $8 million paid for reinsurance, while reducing the company’s liability for unearned premiums by the $10 million in liabilities transferred to the reinsurer. The $2 million is recorded by the ceding company as commission income. This type of transaction results in an economic benefit for the ceding company because the ceding commission increases equity. The reinsurer has assumed a $10 million liability and would basically report a mirror entry that would have the opposite effects on its financial statements. Figure 7 shows the effects of the reinsurance transaction on both the ceding insurance company and reinsurance company’s balance sheets and is intended to show how one transaction increases and decreases assets and liabilities. Reinsurance contracts do not relieve the ceding insurer from its obligation to policyholders. Failure of reinsurers to honor their obligations could result in losses to the ceding insurer. An insurer may also obtain risk reduction from an SPRV that issues an indemnity-based, risk-linked security; the recovery by the insurer would be similar to a traditional reinsurance transaction. However, if an insurer chooses to obtain risk reduction from sponsoring a nonindemnity-based, risk-linked security issued through an SPRV, the recovery could differ from the recovery provided by traditional reinsurance. Even though the insurer is reducing its risk, the accounting treatment would not allow a reduction of liability for the premiums.

Ms. Davi M. D’Agostino
Director, Financial Institutions and Community Investment
United States General Accounting Office

Dear Ms. D’Agostino:

Thank you for giving the NAIC the opportunity to comment on the report “Catastrophe Insurance Risks: the Role of Risk-Linked Securities and Factors Affecting Their Use”. The National Association of Insurance Commissioners (NAIC) is a voluntary organization of the chief insurance regulatory officials of the 50 states, the District of Columbia and four U.S. territories. The association’s overriding objective is to assist state insurance regulators in protecting consumers and helping maintain the financial stability of the insurance industry by offering financial, actuarial, legal, computer, research, market conduct and economic expertise. The NAIC formed a working group on Insurance Securitization in 1998 to “investigate whether there needs to be a regulatory response to continuing developments in insurance securitization, including the use of non-U.S. special purpose vehicles and to prepare educational material for regulators.” As a result of its deliberations, the NAIC has taken the position that U.S. insurance regulators should encourage the development of alternative sources of capacity such as insurance securitizations and risk linked securities as long as such developments are commensurate with the overriding goal of the NAIC membership of consumer protection. As such, the NAIC believes that one goal should be to encourage and facilitate securitizations within the United States. If transactions that are currently performed offshore were brought back to the United States, they would be subject to on-shore supervision by U.S. regulators. Both the NAIC’s Special Purpose Reinsurance Vehicle Model Act and the Protected Cell Company Model Act would require that at least one U.S. insurance commissioner would review each transaction in depth and set the appropriate standards. In addition, an NAIC member chairs the International Association of Insurance Supervisors’ Subgroup on Insurance Securitization and fully agrees with these views.
At present, off-shore insurance securitizations are not subject to U.S. regulation, and the NAIC members are concerned about the appropriate use of Special Purpose Vehicles. The recent events at Enron have demonstrated how inappropriate use of special purpose vehicles can endanger solvency. The NAIC membership believes that, properly used and structured, Special Purpose Reinsurance Vehicles may provide extra capacity, more competition, and may reduce the overall costs of insurance for the public. The NAIC membership therefore believes that on-shore SPRVs, regulated by U.S. insurance regulators, would be preferable to the current situation where most securitizations are conducted off-shore. Again, we thank you for the opportunity to review and comment on the report.

The following are GAO’s comments on the Reinsurance Association of America’s letter dated September 9, 2002.

1. In appendix III of the draft report we had already noted that the Florida Hurricane Catastrophe Fund provides reinsurance to supplement that available from private sources. We added a footnote on page 15 to note that reinsurance is also available from private sources for property and casualty insurance companies doing business in Florida.

2. We agree and have added a footnote on page 29 to state that no catastrophe bond contracts have been triggered by an actual event.

3. We agree and have added a footnote on page 14 on the creation of the Bermuda reinsurance market and its role in introducing new capacity into the marketplace after a major event.

4. This issue is covered on pages 24 through 26.

5. Bankruptcy remoteness is among the reasons that the special purpose entities are established, whether domestically or offshore.

The following are GAO’s comments on the Bond Market Association’s letter dated September 10, 2002.
|
Because of population growth, resulting real estate development, and rising real estate values in hazard-prone areas, the nation is increasingly exposed to much higher property-casualty losses--both insured and uninsured--from natural catastrophes than in the past. In the 1990s, a series of natural disasters (1) raised questions about the adequacy of the insurance industry's financial capacity to cover large catastrophes without limiting coverage or substantially raising premiums and (2) called attention to ways of raising additional sources of capital to help cover catastrophic risk. Catastrophe risk includes exposure to losses from natural disasters, such as hurricanes, earthquakes, and tornadoes, which are infrequent events that can cause substantial financial loss but are difficult to reliably predict. The characteristics of natural disasters prompt most insurers to limit the amount and type of catastrophic risk they hold. Risk-linked securities that can be used to cover risk from natural catastrophes employ many structures and include catastrophe bonds and catastrophe options. GAO identified and analyzed several issues that might affect the use of risk-linked securities. First, the National Association of Insurance Commissioners and insurance industry representatives are considering revisions in the regulatory accounting treatment of risk transfer obtained from nonindemnity-based coverage that would allow credit to the insurer similar to that now afforded additional reinsurance. Such a revision has the potential to facilitate the use of risk-linked securities. Second, the Financial Accounting Standards Board is proposing a new U.S. Generally Accepted Accounting Principles interpretation, which would increase independent capital investment requirements that allow the sponsor to treat special purpose reinsurance vehicles (SPRV) and similar entities as independent entities and report SPRV assets and liabilities separately.
Third, "pass-through" tax treatment--which eliminates taxation at the SPRV level--with favorable implementing requirements could facilitate expanded use of catastrophe bonds. Finally, catastrophe bonds, most of which are noninvestment-grade instruments, have not been sold to a wide range of investors beyond institutional investors.
|
NHTSA’s mission is to prevent motor vehicle crashes and reduce injuries, fatalities, and economic losses associated with these crashes. To carry out this mission, NHTSA conducts a range of safety-related activities, including: providing guidance and other assistance to states to help address traffic safety issues, such as drunk driving and distracted driving; setting vehicle safety standards; investigating possible safety defects and taking steps to help ensure that products meet safety standards and are not defective (through recalls if necessary); and collecting and analyzing data on crashes. NHTSA also develops uniform guidelines for states’ highway safety programs. In the past, these guidelines were referred to as standards, and if a state failed to implement these standards, DOT could withhold a percentage of federal-aid highway funds apportioned to the state. As shown in figure 1, this authority changed in 1976 when legislation limited NHTSA’s authority to withhold apportioned funds. Since that change, states have been able to choose whether or not to follow the guidelines in developing their highway safety programs. NHTSA’s guideline on state motor vehicle inspection programs, included in its Uniform Guidelines for State Highway Safety Programs, recommends that states should have a program for periodic inspection of all registered vehicles to reduce the number of vehicles with existing or potential conditions that may contribute to crashes or increase the severity of crashes that do occur, and should require the owner to correct such conditions. We found that 16 states operate periodic motor vehicle inspection programs. See figure 2 below. These states develop the specific rules that govern their programs. For example, 11 of the 16 states with inspection programs require an annual vehicle safety inspection, three states require a biennial inspection, and two states require time frames other than annual or biennial.
Further, some states allow certain vehicles to be exempted from the safety inspections, such as newer model vehicles (up to 5 years from the model year) or vehicles at least 25 years old and registered as historical vehicles. Some states couple the safety inspection with emissions inspections. Vehicle safety inspection programs are administered by the state motor vehicle administration, department of transportation, or law enforcement agency, and in all of the states, except Delaware, state-licensed private inspection stations perform the inspections. The number of these private inspection stations per state ranges from around 295 (Rhode Island) to 17,000 (Pennsylvania). With a few exceptions, states do not limit the number of private inspection stations that may participate in the safety inspection program; it is typically a market-driven process. However, states require these inspection stations to obtain certification or licenses from the state. Delaware operates four state-run safety inspection sites. The fees charged to vehicle owners for safety inspections are mostly set by the state, though five states allow a market-driven fee which is set by individual inspection stations. State officials we spoke with provided information on the fees collected from drivers at the time of inspection. These fees ranged from $0 (Delaware) to $55 (Rhode Island, but this includes an emissions inspection). In addition to the guideline for states on periodic motor vehicle inspection, NHTSA has issued Vehicle In Use Inspection Standards, which set inspection criteria for several vehicle systems. These standards include system standards for brakes (hydraulic, vacuum, air, electric and service brakes), steering, suspension, tires, and wheel assemblies. For example, the standards specify that tread on a tire shall not be less than two thirty-seconds (2/32”) of an inch deep and provide an inspection procedure for examining the tire for this depth.
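The tread-depth criterion cited above is a simple threshold test. The sketch below is illustrative only: the function name and the convention of reporting measurements in 32nds of an inch are assumptions for the example, not part of the NHTSA standard's text.

```python
from fractions import Fraction

# Minimum tread depth cited from the Vehicle In Use Inspection Standards:
# tread shall not be less than 2/32 of an inch deep.
MIN_TREAD_DEPTH = Fraction(2, 32)

def tread_passes(measured_depth_32nds):
    """Pass/fail check for a tread-depth measurement given in 32nds of an inch."""
    return Fraction(measured_depth_32nds, 32) >= MIN_TREAD_DEPTH

assert tread_passes(4)       # 4/32" of tread remaining: passes
assert tread_passes(2)       # exactly 2/32" is not less than the minimum: passes
assert not tread_passes(1)   # 1/32": fails the inspection criterion
```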
These minimum standards apply to all states that choose to implement a vehicle safety inspection program. However, states with programs include more vehicle systems in their inspections than are specified in the standards. For systems not covered by these standards, each state determines what will constitute a passed or failed component. Examples of other systems generally incorporated into state vehicle safety inspections are lighting (such as headlights, brake lights, and turn signals), seatbelts, horns, windshields, wiper blades, and the vehicle’s undercarriage. According to officials in 15 states with existing vehicle safety inspection programs whom we interviewed, these programs help improve the condition of vehicles; these officials point to data on the number of failed inspections as evidence of the safety benefit of these programs. Officials whom we interviewed from all 15 states said their programs help identify vehicles with safety problems and remove these unsafe vehicles from the roadways or compel owners to make repairs that otherwise might not be performed. Most of these states (12 of 15) collect data on the number of vehicles that fail inspection—the failure rate—and officials from 9 of these states cited their failure rate data to demonstrate the effectiveness of their programs. For example, Pennsylvania officials provided 2014 data showing that more than 529,000 vehicles (about 20 percent of the state’s 2.7 million registered vehicles) underwent repairs in order to pass inspection after initially failing. Virginia officials told us they believed that their state’s roadways were safer because their program identified safety problems in over 1.4 million—or 19 percent—of the state’s 7.5 million vehicles in 2014. According to Virginia officials, 700,000 of those vehicles were rejected for brake-related issues such as worn, contaminated, or defective linings or drums, disc pads, or disc rotors.
Safety problems most frequently found in other states in 2014 included: problems with glass, which resulted in 47,172 failed inspections in Utah; malfunctioning brake lights, which resulted in more than 13,000 failed inspections in Delaware; and tire deficiencies, which resulted in almost 6,000 failed inspections in Rhode Island. Additionally, officials in three states said that vehicle safety inspections are valuable because the average age of passenger vehicles is increasing and, in some areas, weather conditions and roadway treatments such as salt may contribute to vehicle deterioration. For example, Rhode Island officials stated that their inspection program is necessary in part because the state’s snow and icy weather requires road treatments that can corrode a vehicle’s chassis, steel brake lines, suspension, steering linkages, and ball joints. Further, these officials said that their inspection program is important because vehicles are staying in service longer—with some cars accruing more than 300,000 miles— exposing vehicle systems to more use and risk of developing safety issues. DOT data show that the average age of passenger vehicles has consistently increased from 1995 to 2013, from an average age of 8.4 to 11.4 years. Similarly, Vermont and West Virginia officials told us that their states’ snow and associated road treatments, coupled with rough terrain and poor roadways, increase vehicle deterioration. They said that their programs mitigate seasonal weather challenges by reducing the number of unsafe cars in use. Despite the consensus among the state inspection program officials we interviewed that these programs improve vehicle condition, research remains inconclusive about the effect of safety inspection programs on crash rates. There is little recent empirical research on the relationship between vehicle safety inspection programs and whether these programs reduce crash rates. 
What is available has generally been unable to establish any causal relationship. Since GAO last conducted a review on vehicle safety inspection programs in 1990, there have been three econometric studies conducted examining the relationship between vehicle inspections and crashes in the U.S. and three studies examining these programs in other countries. Among the three studies of U.S. vehicle inspection programs, none were able to establish a statistically significant effect of safety inspection programs on crashes involving either fatalities or injuries. Specifically, the studies examined crash rates in all 50 states and did not find statistically significant differences in crash rates in states with inspection programs compared to those without. International studies have also not been able to establish a link between safety inspection programs and crash rates involving either fatalities or injuries. For example, only one study suggested that safety inspections potentially reduce the likelihood of crashes, but noted the magnitude of the reduction could not be clearly established. See appendix III for more information on each of the studies. While our literature review did not yield any studies establishing that vehicle safety inspections reduce crashes, this does not necessarily demonstrate that inspections do not have such an effect. Nationwide studies involving crashes related to vehicle component failure are hindered, in part, by a lack of nationwide crash data. There is no comprehensive database for all police-reported crashes in the United States. NHTSA maintains two data sources that capture some vehicle crash incidents related to component failure.
NHTSA’s Fatality Analysis Reporting System (FARS) is a census of all fatal traffic crashes in the United States that provides uniformly coded, national data on police-reported fatalities, and contains information on crashes in which vehicle component failure was noted, but is limited to crashes involving fatalities. NASS-GES (the National Automotive Sampling System General Estimates System) is a nationally representative sample of police-reported motor vehicle traffic crashes, which is also uniformly coded and contains information on crashes in which vehicle component failure was noted in the police report. However, the sample is not set up to be representative at the state level; therefore, it cannot be used to compare states with and without safety inspection programs. Some researchers have used FARS in their analyses in order to perform state-by-state comparisons, but detecting the effect of inspection programs on crash rates is difficult because few crashes involve fatalities, and relatively few of those fatal crashes are noted in police reports as having vehicle component failure as a potential contributing factor. According to our analysis of NHTSA’s NASS-GES crash data from 2009 through 2013, crashes with noted vehicle component failure constituted around 2 percent of all crashes nationwide. We also found that the three most common failures were related to (1) tires, (2) brakes, and (3) steering. These categories make up the majority of failures reported, with the next biggest category being “other.” (See fig. 3.) These components are inspected as part of all state inspection programs. In addition to looking at NASS-GES data, we attempted to examine crash rates before and after the elimination of safety inspection programs in four states and D.C., but were able to get sufficient crash data for only two of these states, New Jersey and Oklahoma. In both cases, crashes involving vehicle component failure were generally between 2 and 3 percent of all crashes and varied little from year to year, even after the elimination of the inspection programs. We also calculated the crash rate—controlling for vehicle miles traveled—and found that the rate did not significantly change for either state. However, this analysis does not provide sufficient evidence to conclude that inspection programs did not have an effect on crash rates because additional factors—such as implementation or increased enforcement of traffic safety laws—could influence crash rates. The number of crashes related to vehicle component failure may also be generally underreported. Some literature and safety advocate organizations we spoke with noted that police officers filling out accident reports often do not have the time and resources to conduct a thorough vehicle check to determine if a vehicle component failure contributed to the crash. Other factors, such as driver behavior, may be more easily ascertained. For a 2008 NHTSA crash causation survey, researchers conducted thorough investigations of over 5,000 crashes over a 2-year period (2005–2007) to determine factors that contributed to the crashes. While this study did not identify vehicle component failure as necessarily the cause of the accident, vehicle component failures were found to be present in 6.8 percent of crashes. The crash causation survey utilized a more comprehensive mechanical examination of the vehicle(s) involved in crashes than the police accident reports used as the data collection instrument for the NASS-GES crash data. The results of the crash causation survey suggest that the percentage of crashes related to vehicle component failure is higher than the estimates produced by the NASS-GES because of the more detailed analysis of the vehicles involved in the crashes. States with vehicle safety inspection programs generally do not directly track the costs of managing and overseeing such programs.
Officials from 8 of the 15 states with vehicle safety inspection programs we interviewed told us they do not track the cost of their vehicle inspection program. Officials from several of these states explained that costs for the inspection program cannot be broken out, because the costs for operating the inspection program are co-mingled with other programs or activities. For example, in New York, North Carolina, and Vermont, officials told us the staff who oversee the safety inspection programs also perform oversight of the emissions testing program, motorcycles or heavy-duty vehicles inspections, or have other state DOT duties. Consequently, the administrative costs for programs and activities were co-mingled. Similarly, officials from seven states reported tracking their program costs, but several of them also acknowledged some cost estimates included costs from other programs, since inspection program staff and overhead may be multi-tasked for other related programs. Funding for vehicle safety inspection programs comes from general state funding or through fees related to safety inspections. States typically receive some of the fee charged to drivers for safety inspections, while the remainder is retained by the inspection station. As explained by state officials, generally the amount that goes to the state is between $0 and $5 per inspection, though some states receive greater amounts with the most being $33.25. In some cases, states generate revenue by selling inspection stickers to the stations that conduct the inspection; these stickers are used to indicate that a vehicle has passed the inspection. States may also collect fees at the time of vehicle registration. These revenue sources may go to the state’s general fund, to other funds or departments (such as a highway maintenance fund), or to the larger programmatic department (state patrol or department of transportation), before being allocated to the inspection program. 
No state reported using federal funds to support its inspection program. NHTSA officials also said that no state had ever applied to use federal funding for a safety inspection program. Officials in the 15 states we spoke with primarily cited oversight and paper-based data systems as challenges they have faced when operating their vehicle safety inspection programs. Eleven of 15 states cited oversight efforts as a challenge. Oversight efforts involve addressing or preventing fraudulent behavior and ensuring that private inspection stations perform inspections in compliance with program requirements. To conduct oversight, states with private inspection stations generally perform some combination of routine, random, and covert audits. Because the inspection station is a private entity, states do not have direct control over how inspections are performed. For example, one state official said it can be a challenge to ensure that stations do not perform unneeded repairs for profit, while officials in a second state said it was a challenge to ensure that stations do not intentionally pass vehicles that should have failed the inspection. Officials in a third state explained that it is challenging to ensure the thoroughness and quality of inspections because doing so is a labor-intensive process. Similarly, officials in four of the five states we spoke with that had eliminated programs told us that oversight efforts were also a challenge in operating their programs. For example, officials in one state told us that inspection stations were able to make more money by providing other automotive services and believed the safety inspections were not as profitable. Consequently, some inspection station mechanics issued inspection stickers without properly conducting inspections. In addition, some states cited challenges with inadequate staffing resources for oversight efforts.
For example, officials from four states mentioned that they had relatively few state auditors to oversee their safety inspection programs. According to officials in one state, oversight can be a particular problem because private inspection stations can span thousands of miles, and it can be difficult to retain qualified state personnel if state wages are relatively low. Four of 15 states cited their paper-based data systems as a challenge. Paper-based inspection data systems can be inefficient and, according to some state officials, can limit states’ ability to monitor their programs. Generally, in a paper-based data system, private inspection stations record inspection results on paper forms rather than in an electronic database. Officials in one state said they would like an electronic database because inspection station results would be more quickly shared with the state, resulting in better program monitoring. These officials said they would first need to ensure that the benefits of an electronic database outweigh the costs and that it is a viable solution before requiring inspection stations to use it. Officials in another state physically scan and enter paper-based data received from inspection stations into the state’s database, a process that they said is time consuming. To help manage the state’s data-entry workflow, officials limit the number of safety inspections that inspection stations may conduct in a single day. These officials said that the lack of Internet access at some of the inspection stations in the state made it difficult to require the use of an electronic inspection database. Other states with paper-based systems do not collect statewide inspection data, preventing the state from analyzing data and determining, for example, the number of vehicles that fail inspections in a given year.
Officials from one of these states cited a lack of funds as a major impediment to creating an electronic data system, and an official from the other state told us they were preparing a request for proposals to develop an electronic database. Other Challenges: State officials mentioned additional challenges, including state legislatures’ attempts to eliminate or alter programs (two states) and customer service challenges or general public irritation with the program (two states). For example, officials in two states told us they either relaxed or eliminated some non-safety-related standards (such as using certain tools to check headlight aim) or exempted newer-model vehicles from safety inspections as a compromise with state legislatures to continue their programs. With regard to customer service challenges, officials in two states told us it was challenging to deal with customers who complained when their vehicle failed the inspection, had to be re-inspected, or when they endured long wait times. Literature that we reviewed and other stakeholders whom we interviewed, including representatives from safety groups, vehicle manufacturer industry groups, and DOT officials, also cited challenges that states face in operating their programs. Four studies cited oversight challenges. For example, a 1999 study noted that inspectors can either intentionally or unintentionally fail to report safety problems, sometimes to minimize the level of trouble to customers and increase the number of inspections performed. A 2008 state study found that one of the major criticisms of safety inspection programs is the difficulty that one state had in ensuring the quality and uniformity of inspections. The study stated that a thorough inspection, if performed to state regulations, should take between 15 and 30 minutes, according to program managers and industry representatives.
However, according to the study, safety inspections in this state were taking 5 minutes on average, raising questions about whether consumers’ vehicles were receiving thorough inspections. In addition, four stakeholders told us that state legislatures’ attempts to eliminate states’ programs either were or may be a challenge for states. Also, three stakeholders told us that public frustration, stemming from what the public perceives as unneeded repairs or from the personal inconvenience of having to get vehicles inspected, either was or may be a challenge for states. Some states have taken action to address their challenges, including implementing more stringent program rules, preparing manpower studies, and developing electronic database systems. Officials in one state told us that in 2012, they implemented stricter program rules for inspection stations to follow in an attempt to reduce fraudulent behavior (specifically, the issuing of stickers for vehicles that should have failed the inspection). In addition, officials in a second state said they recently added requirements that inspection station mechanics use fingerprint scanners for proper identification before performing inspections. To address challenges with staffing resources, officials in a third state told us they completed a manpower study to better identify the resources needed to operate their program. An official in another state told us that state officials were developing a request for proposals to create an electronic database system to replace its paper-based system. While some states have tried various ways to address their program challenges, other states have eliminated their vehicle safety inspection programs altogether. Since we last reported on vehicle safety inspections in 1990, five states and the District of Columbia have dropped their programs, citing reasons such as a lack of evidence of the programs’ effectiveness or the opportunity to save financial resources.
For example:

- In 2001, an Oklahoma Senate press release stated there was no evidence that vehicle safety inspection programs resulted in decreased highway accidents or injuries statewide and that eliminating the program would save Oklahomans $12 million.
- In 2009, the District of Columbia eliminated its safety inspection program primarily because there were no available data to show that the program was beneficial, according to a District official. For example, a District official told us that an analysis of crash data before the program was eliminated showed that the majority of vehicle accidents resulted from driver behavior, not from vehicular mechanical failure.
- In 2010, when New Jersey eliminated its program, the New Jersey Motor Vehicle Commission Chief Administrator announced that, given the lack of conclusive data on program effectiveness and the fiscal crisis at the time, New Jersey could not justify the program’s expense, and that dropping the program would yield an estimated annual savings of $17 million.

Officials in all 15 states with inspection programs that we spoke with told us that additional guidance and information from NHTSA would help in operating their programs. The majority of state officials (11 of 15) would like more guidance in the area of new vehicle safety technologies in order to determine how and whether new technologies should be incorporated into their inspection programs. The example most frequently cited by state officials was light-emitting diode (LED) brake lights. LED brake lights have multiple light-emitting diodes that contribute to the visibility of the light. See figure 4 below for a diagram of an LED light. The number of LEDs in a light can vary, depending on the vehicle manufacturer or model. According to state officials, they do not know how many diodes, if any, could malfunction before the light is considered unsafe, making it difficult for them to set pass or fail criteria for LED lights.
Since brake lighting is critical to alert other drivers to changing conditions, it is important for states with inspection programs to have criteria to judge whether lights are working sufficiently well. Officials in three states also noted that such criteria are important because failing a car on the basis of individual diodes being out can result in a costly repair for consumers, ranging from a few hundred to several thousand dollars, depending on the vehicle. State officials provided a range of criteria they have chosen to use for LED brake lights: 50 percent of the diodes must function to pass inspection (5 states), 70 percent must function (1 state), 100 percent must function (3 states), and not yet specifically addressed in the inspection program (1 state). (See Federal Motor Vehicle Safety Standard No. 108: Lamps, reflective devices, and associated equipment, 49 C.F.R. § 571.108.) NHTSA could sponsor research in the area of LED brake lighting, as it has done in the past, that might be helpful to states. Further, officials in two states said that they are concerned about how their programs may be affected by new autonomous vehicle technologies. Officials did not state specific concerns but said that with new advanced vehicle technologies coming on the market, it is not clear how or what they should be inspecting. We have previously reported that automobile manufacturers have begun to equip some newly manufactured vehicles with sensor-based crash avoidance and autonomous technologies intended to prevent accidents. Officials from these two states noted that such technologies may add a new layer to their inspection programs if the state decides the technologies need to be included in inspections. State officials in eight states with safety programs we interviewed also said that additional information from NHTSA on new safety technologies required by the agency’s safety standards for vehicle manufacturers would help them in operating their inspection programs.
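The divergent state criteria above amount to different minimum-functioning-diode thresholds. The sketch below illustrates how such a pass/fail rule works; the threshold values are those states reported, but the function itself is a hypothetical illustration, not any state's or NHTSA's actual criterion.

```python
def led_brake_light_passes(working_diodes: int, total_diodes: int,
                           min_fraction: float) -> bool:
    """Pass/fail decision for an LED brake lamp under a state's
    criterion: at least min_fraction of the diodes must function.
    min_fraction is a policy parameter (0.5, 0.7, or 1.0 among the
    states interviewed); this helper is illustrative only."""
    if total_diodes <= 0:
        raise ValueError("total_diodes must be positive")
    return working_diodes / total_diodes >= min_fraction

# The same lamp (7 of 10 diodes working) passes under a 50 percent or
# 70 percent criterion but fails under a 100 percent criterion.
for threshold in (0.5, 0.7, 1.0):
    print(threshold, led_brake_light_passes(7, 10, threshold))
```

The example shows why officials consider criteria consequential: an identical lamp can pass in one state and fail, triggering a costly repair, in a neighboring one.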
These state officials told us they generally track new vehicle safety standards implemented by NHTSA, but it is not always clear to program officials whether or how new standards might be incorporated into their inspection programs. Two recent vehicle standards cited by state officials were the requirements for tire-pressure monitoring systems (three state officials) and back-up cameras (two state officials). Specifically, these states would like guidance on whether they should check that these technologies are functioning correctly in vehicles that were manufactured with them. Officials in one state told us that their state required the tire-pressure monitoring system to work and then eliminated that requirement because the system often malfunctioned and the inspectors could readily check whether the tires were properly inflated and holding air. A 2013 study contracted by NHTSA to gather information for updating inspection standards found that “State directors welcomed the suggestion that, when NHTSA issues a new regulation, the rulemaking be accompanied by guidance on how to inspect a vehicle to ensure that the required equipment is still functioning.” However, the last update to the standards was in 1979; thus, technologies that have been developed since that time, such as anti-lock brake systems, are not included. NHTSA officials told us that the determination of whether or how to include new vehicle safety technologies in inspection programs should be made by the states. Further, NHTSA conducts research that could be useful to states with inspection programs, but state officials may not be aware of this information.
For example, in April 2015, NHTSA issued the results of a defect investigation on brake lines, which recommended that consumers who drive vehicles from model year 2007 and earlier and live in cold-weather states have a qualified mechanic inspect brake lines and other components under the vehicle at least twice a year, a frequency greater than even the strictest state inspection requirement. Although the recommendation was not directed at state inspection officials, this information could help state inspection officials identify such problems during their inspections. However, NHTSA did not disseminate this information directly to states with inspection programs. According to officials from the American Association of Motor Vehicle Administrators (AAMVA), the national group representing the motor vehicle and law enforcement agencies that administer safety inspection programs in states, they would share this type of information from NHTSA with their members. However, the AAMVA officials were not aware of this study. According to NHTSA officials, NHTSA issues press releases to the media and to stakeholders on a regular distribution list, but AAMVA is not currently on this distribution list. According to NHTSA officials, no NHTSA staff are designated to answer questions related to state inspection programs or to disseminate relevant information to program officials, because agency resources are currently focused on areas that have a greater impact on crash rates, such as driver behavior. NHTSA officials also noted that current evidence on vehicle safety inspection programs does not warrant a more prescriptive approach and that state officials should make determinations on what is most effective for their individual programs. Considering the variation among state programs and state needs, it seems appropriate for states to determine much of their vehicle safety inspection programs’ structure.
However, state vehicle safety inspection program officials sometimes have questions about incorporating new technologies into their programs. Given that NHTSA has a guideline recommending that states implement vehicle inspection programs and that the agency’s mission includes assisting states with traffic safety programs, it is reasonable that state officials would look to NHTSA for guidance when these questions arise. While NHTSA does not dedicate staff to vehicle inspection issues, the agency has a broad range of vehicle technical experts in various parts of the organization who are knowledgeable about related issues. For example, NHTSA officials said the agency currently has 20 engineers who work on Federal Motor Vehicle Safety Standards that are relevant to vehicle inspection guidelines, along with 10 support professionals, such as economists and lawyers. Although NHTSA could update or issue additional regulations, Executive Order 13563 directs agencies to identify and assess available alternatives to regulation, including providing information upon which choices can be made. Establishing a communication channel, such as by designating a point of contact, could enable information transfer between knowledgeable NHTSA staff and state vehicle inspection program officials, and could help state inspection program officials operate their programs more effectively. Once established, such a channel would not necessarily require extensive NHTSA resources. For example, NHTSA could leverage the communication channel that AAMVA currently has with states, or set up a web-based forum through which state officials can ask questions, receive information from NHTSA, and share information with other states on how they are addressing new vehicle technologies and standards in their programs.
While the benefits and costs of state vehicle inspection programs are difficult to quantify, state program officials we spoke to are confident that their programs improve vehicle safety, despite the challenges they face in operating the programs. However, some state officials told us they sometimes have questions about new technologies and other issues related to vehicle safety and have not been able to get clear answers from NHTSA. With no recent federal guidance, state officials have implemented different criteria or chosen not to include new technologies in their inspection programs, potentially reducing the safety benefits of their inspection programs. Further, NHTSA’s work in the areas of Federal Motor Vehicle Safety Standards and defect investigations touches on vehicle component and safety information that could be useful to state vehicle safety inspection program officials, but this information is not being provided directly to these officials or to the national group that represents them. NHTSA’s decision not to devote significant resources to state vehicle inspection programs is consistent with research showing that vehicle component failures are a relatively minor contributor to traffic crashes. However, establishing a communication channel to answer questions from state officials and convey information could assist states in improving their vehicle safety inspection programs. To minimize the resources needed to establish and maintain a communication channel, NHTSA could, for example, create a web-based forum to share information and respond to questions, and collaborate with AAMVA to disseminate information to state officials.
To improve assistance to states in regard to the periodic motor vehicle inspection guideline, the Secretary of Transportation should direct the Administrator of NHTSA to establish and maintain a communication channel with states to convey relevant information related to vehicle inspections and respond to questions from state safety inspection program officials. We provided a draft of this report to DOT for review and comment. DOT provided written comments, which are reprinted in appendix V. In its written comments, DOT stated that NHTSA agreed with our recommendation, and supports our conclusion that establishing a communication channel with state vehicle safety inspection program officials would be beneficial. We are sending copies of this report to the appropriate congressional committees, and the Secretary of Transportation. This report will also be available at no charge on the GAO website http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. We conducted a review of state motor vehicle inspection programs and National Highway Traffic Safety Administration’s role in assisting these programs. This report assesses: 1) what is known about the safety benefits and costs of operating state vehicle safety inspection programs, 2) any challenges states have faced in operating these programs, and 3) any actions NHTSA could take to assist states with these programs. To identify what is known about the costs and safety benefits of state vehicle inspection programs, we conducted a literature search for studies that analyzed relationships between safety inspections and outcomes, such as crash rates, vehicle component failures, and vehicle fleet age. 
We limited our literature search to those articles and reports published since 1990—the last time GAO conducted a comprehensive literature review. We identified existing studies from peer-reviewed journals, government reports, trade publications, and conference papers based on searches of various databases, such as ProQuest, Academic OneFile, and Transportation Research International Documentation. Search parameters included studies across the United States and in other countries. We also conducted interviews with organizations that assist states with traffic safety efforts and asked them to recommend additional research. The literature review parameters and interviews resulted in 185 abstracts or studies. Of these, we determined that 29 studies appeared to be relevant—eliminating, for example, studies that focused on emissions inspections. We assessed the relevance and methodological quality of the selected studies by performing an initial review of the findings (here we eliminated any studies based on data from before 1980), and then performed an independent assessment of the study’s methodology. After these reviews, we determined that 6 studies published from 1992 through 2013 were sufficiently reliable for the research objective on the safety benefits and costs of operating state vehicle safety inspection programs (see appendix III) and 4 studies were sufficiently reliable for the research objective on any challenges that states face in operating these programs. To determine what is known about safety benefits of state vehicle inspections, we also analyzed crash data. Because of known data limitations raised in the studies we reviewed during the literature search, we attempted to compare crash rates related to vehicle component failure before and after program elimination in states that eliminated their inspection program since 1990. 
Six states fit this criterion: South Carolina (1995), Arkansas (1998), Oklahoma (2001), Washington, D.C. (2009), New Jersey (2010), and Mississippi (2015). Because Mississippi dropped its program during the course of our assessment, we did not do a before-and-after comparison for that state. For the other five states, we attempted to collect data on the number of crashes recorded in the state and the number of crashes recorded with vehicle component failures 5 years before and 5 years after program elimination. We were only able to obtain these data for two of the five states: Oklahoma and New Jersey. For Oklahoma, we were able to obtain data for 1995-2013. We focused on the 5 years before and after 2001, when the program was eliminated, to see if there was a difference in trend. Because New Jersey eliminated its program in 2010, we were not able to get 5 years of crash data after the program was eliminated. For New Jersey, we reviewed data from 2005 to 2013. We also analyzed national-level crash data from NHTSA’s National Automotive Sampling System General Estimates System (NASS-GES) for the years 2009-2013. NASS-GES consists of data collected from an annual sample of about 50,000 police accident reports and is statistically weighted to be nationally representative of all police-reported crashes that occur in the United States each year. We analyzed these data to determine the estimated number of total crashes with vehicle factors nationwide as well as the specific vehicle component failures that were reported, such as issues with brakes, tires, and steering. We express our confidence in the precision of estimates as 95 percent confidence intervals. This is the interval that would contain the actual population values for 95 percent of the NASS-GES samples that NHTSA could have drawn. Because of the sample design used to collect the NASS-GES data, we are limited to reporting trends at the national level and could not use these data to examine individual state trends.
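The 95 percent confidence intervals described above can be computed with the usual normal approximation for survey-weighted estimates. The sketch below assumes a point estimate and its standard error are already in hand; the numbers are illustrative, not actual NASS-GES values.

```python
def ci_95(estimate: float, std_error: float) -> tuple[float, float]:
    """95 percent confidence interval for a weighted survey estimate
    using the normal approximation: estimate +/- 1.96 * SE."""
    half_width = 1.96 * std_error
    return (estimate - half_width, estimate + half_width)

# Illustrative: an estimated 250,000 crashes with a standard error
# of 20,000 yields an interval of (210,800, 289,200).
low, high = ci_95(250_000, 20_000)
print(low, high)  # 210800.0 289200.0
```

Under the interpretation in the report, about 95 percent of intervals constructed this way across repeated samples would contain the true population value.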
For each of these data sets, we interviewed relevant officials and analyzed the data for possible errors. We determined that these data were sufficiently reliable for the purposes of estimating the number of reported crashes that occur with vehicle component failures. To determine challenges states faced in operating their inspection programs and what actions, if any, NHTSA could take to assist states with their vehicle safety inspection programs, we reviewed federal and state policy and program documents related to inspection programs. We reviewed federal statutes, regulations, guidelines, and guidance documents, state laws authorizing safety inspection programs, state program reports, state officials’ testimony before their state legislators and state inspection guidance and manuals. We observed safety inspections in Delaware at a state-run inspection station and in Virginia at a privately owned and operated inspection station. In selecting these sites we worked with state officials to identify an inspection station where we could view an actual inspection take place. We conducted structured interviews with officials in 15 of the 16 states that currently have a safety inspection program. We attempted multiple times to speak with the one remaining state—New Hampshire—but were unsuccessful. We also interviewed state officials in five of six jurisdictions (four states and the District of Columbia) that eliminated their programs since 1990. South Carolina eliminated its program in 1995 and did not have any officials knowledgeable about the program. We also interviewed NHTSA officials, researchers at Carnegie Mellon University, and representatives from the American Association of Motor Vehicle Administrators, safety groups (Center for Auto Safety and Public Citizen), and automotive industry groups (Automotive Service Association, Auto Care Association, and Motor & Equipment Manufacturers Association). Article Sutter, David and Poitras, Marc (2002). 
The Political Economy of Automobile Safety Inspections. Public Choice, 133 (3-4), 367-387.
Methodology: Regression analysis using 1981-1993 panel data of 50 states.
Conclusions: Unable to establish a statistically significant effect of vehicle inspection programs on fatality or injury rates.

Merrell, David, Poitras, Marc, and Sutter, Daniel (1999). The Effectiveness of Vehicle Safety Inspections: An Analysis Using Panel Data. Southern Economic Journal, 65 (3), 571-583.
Methodology: Regression analysis using 1981-1993 panel data of 50 states.
Conclusions: Unable to establish a statistically significant effect of vehicle inspection programs on fatality or injury rates.

Holdstock, J., Hagarty, D., & Zalinger, D. (1994). Review of a mandatory vehicle inspection program. Project report.
Methodology: Regression analysis using 1990-1991 data for the 50 states, the District of Columbia, and 10 Canadian provinces.
Conclusions: Unable to establish a statistically significant effect of vehicle inspection programs on fatality or injury rates.

Keall, M. D., & Newstead, S. (2013). An evaluation of costs and benefits of a vehicle periodic inspection scheme with six-monthly inspections compared to annual inspections. Accident Analysis & Prevention, 58, 81-87.
Methodology: Regression analysis using merged New Zealand crash data (2004-2009), licensing data (2003-2008), and inspection data (2003-2009).
Conclusions: Going from annual to biannual inspections may reduce the likelihood of crashes (8 percent) and the prevalence of vehicle defects (13.5 percent), but the wide confidence interval for the drop in crash rate (0.4-15 percent) indicated considerable statistical uncertainty.

Christensen, Peter and Elvik, Rune (2007). Effects on Accidents of Periodic Motor Vehicle Inspection in Norway. Accident Analysis and Prevention, 39, 47-52.
Methodology: Observational study using insurance data and 1998-2002 inspection data in Norway.
Conclusions: Inspections improved the technical condition of inspected cars but did not have a statistically significant effect on crash rates. The study’s findings suggested that following inspections, the accident rate of inspected cars did not decline but rather showed a weak tendency to increase.

Fosser, Stein (1992). An Experimental Evaluation of the Effects of Periodic Motor Vehicle Inspection on Accident Rates. Accident Analysis and Prevention, 24 (6), 599-612.
Methodology: Experimental design over 4 years (1986-1990) in Norway.
Conclusions: Inspection improved the technical condition of inspected cars, but the differences found in technical condition had no influence on accident rates.

Appendix IV: Estimated Number of Crashes Listed with Vehicle Component Failure, 2009-2013, with Lower and Upper Bound 95 Percent Confidence Intervals (CI)

In addition to the contact named above, Sara Vermillion (Assistant Director), Carl Barden, Namita Bhatia Sabharwal, Timothy Bober, Melissa Bodeau, Jennifer Clayborne, Leia Dickerson, Amanda Miller, Sara Ann Moessbauer, Josh Ormond, Cheryl Peterson, Oliver Richard, Stephen Sanford, Amy Suntoke, Friendly Vang-Johnson, Michelle Weathers, and Jade Winfree made key contributions to this report.
In 2013, an estimated 5.7 million vehicle crashes resulted in approximately 32,700 fatalities and over 2.3 million injuries. One of NHTSA's guidelines to help states optimize the effectiveness of highway safety programs recommends that each state have a program to periodically inspect all registered vehicles to reduce the number of vehicles with conditions that may contribute to crashes or increase the severity of crashes. GAO was asked to review these state programs and NHTSA's assistance to states. This report assesses: 1) what is known about the safety benefits and costs of operating state vehicle safety inspection programs, 2) challenges that states have faced in operating these programs, and 3) actions NHTSA could take to assist states with these programs. GAO analyzed NHTSA 2009-2013 data and state data for crash trends related to vehicle component failure; reviewed studies that analyzed relationships between safety inspections and outcomes; and interviewed officials in 15 states that have inspection programs. GAO also interviewed officials in 5 states that eliminated their programs since 1990, NHTSA officials, and representatives from safety groups and automotive industry groups. According to officials GAO interviewed from 15 state vehicle safety inspection programs, these programs enhance vehicle safety; however, the benefits and costs of such programs are difficult to quantify. State officials told GAO that inspections help identify vehicles with safety problems and result in the repair or removal of unsafe vehicles from the roads. For example, Pennsylvania state data show that in 2014, more than 529,000 vehicles (about 20 percent of vehicles in the state) failed inspection and then underwent repairs to pass. Nationwide, however, estimates derived from data collected by the Department of Transportation's (DOT) National Highway Traffic Safety Administration (NHTSA) show that vehicle component failure is a factor in about 2 to 7 percent of crashes.
Given this relatively small percentage as well as other factors—such as implementation or increased enforcement of state traffic safety laws—that could influence crash rates, it is difficult to determine the effect of inspection programs based on crash data. Studies GAO reviewed and GAO's analysis of state data examined the effect of inspection programs on crash rates related to vehicle component failure, but showed no clear influence. Finally, many states do not directly track the costs of operating safety inspection programs because costs may be comingled with other inspection programs, such as emissions. State safety inspection program officials GAO interviewed primarily cited the oversight of inspection activities and paper-based data systems as challenges they have faced in operating vehicle safety inspection programs. For example, officials in 11 of the 15 states with programs GAO interviewed cited oversight efforts as a challenge, including ensuring that private inspection stations were conducting inspections consistent with program requirements, and officials in 4 of the 15 states also said that paper-based data systems can hinder oversight efforts. To address challenges, some states have taken actions such as implementing more stringent program rules and exploring the development of electronic data systems. Other states have eliminated their inspection programs altogether. Program officials in all 15 states said that additional information from NHTSA—for example, information related to new vehicle safety technologies—would help in operating their programs. However, there is no designated channel for communication between NHTSA and program officials. Several state officials noted that they would like more information on new technologies such as light-emitting diode (LED) brake lights. State officials also said that it is not clear whether or how to inspect new safety technologies, such as tire pressure monitoring systems, required by NHTSA for new vehicles. 
Without such information, states have implemented differing inspection pass-fail criteria or chosen not to include new technologies in their inspections, potentially reducing the safety benefit of their programs. NHTSA officials told GAO they have adopted a hands-off approach to state vehicle inspection programs because the agency devotes its resources primarily to areas that contribute more heavily to crashes, such as driver behavior. However, consistent with NHTSA's mission to assist states in implementing traffic safety programs, improving communication with state officials on vehicle safety issues could help these officials operate their inspection programs. DOT should establish a communication channel with states to convey relevant information to state safety inspection officials and respond to their questions. DOT officials reviewed this report and agreed with GAO's recommendation.
|
Demand for GAO’s analysis and advice remains strong across the Congress. During the past 3 years, GAO has received requests or mandated work from all of the standing committees of the House and the Senate and over 90 percent of their subcommittees. In fiscal year 2007, GAO received over 1,200 requests for studies. This demand reflects both the high quality of work the Congress has come to expect from GAO and the difficult challenges facing the Congress, for which it believes objective information and professional advice from GAO are instrumental. Not only has demand for our work continued to be strong, but it is also steadily increasing. The total number of requests in fiscal year 2007 was up 14 percent from the preceding year. This trend has accelerated in fiscal year 2008: requests rose 26 percent in the first quarter and are up 20 percent at the mid-point of this fiscal year from the comparable periods in 2007. As a harbinger of future congressional demand, potential mandates for GAO work included in proposed legislation as of February 2008 totaled over 600, an 86 percent increase from a similar period in the 109th Congress. 
The following examples illustrate this demand: over 160 new mandates for GAO reviews were embedded in law, including the Consolidated Appropriations Act of 2008, the Defense Appropriations Act of 2008, and 2008 legislation implementing the 9/11 Commission recommendations; new recurring responsibilities were given to GAO under the Honest Leadership and Open Government Act of 2007 to report annually on lobbyists' compliance with registration and reporting requirements; and expanded bid protest provisions were applied to GAO that (1) allow federal employees to file protests concerning competitive sourcing decisions (A-76), (2) establish exclusive bid protest jurisdiction at GAO over issuance of task and delivery orders valued at over $10 million, and (3) provide GAO bid protest jurisdiction over contracts awarded by the Transportation Security Administration. Further evidence of GAO’s help in providing important advice to the Congress is found in the increased number of GAO appearances at hearings on topics of national significance and keen interest (see table 1). In fiscal year 2007 GAO testified at 276 hearings, 36 more than in fiscal year 2006. The fiscal year 2007 figure was an all-time high for GAO on a per capita basis and among the highest levels of requests for GAO input in the last 25 years. This increased tempo of GAO appearances at congressional hearings has continued, with GAO already having appeared at 140 hearings this fiscal year, as of April 4th. Our FTE level in fiscal year 2008 is 3,100—the lowest level ever for GAO. We are proud of the results we deliver to the Congress and our nation at this level, but with a slightly less than 5 percent increase in our FTEs, to 3,251, we could better meet increased congressional requests for GAO assistance. While this increase would not bring GAO back to the 3,275 FTE level of 10 years ago, it would allow us to respond to the increased workload facing the Congress. GAO staff are stretched in striving to meet Congress’s increasing needs. 
People are operating at a pace that cannot be sustained over the long run. I am greatly concerned that if we try to provide more services with the existing level of resources, the high quality of our work could be diminished in the future. But I will not allow this to occur, as it would be in neither the Congress's nor GAO's interest. One consequence of our demand-versus-supply situation is the growing list of congressional requests that we are not able to promptly staff. While we continue to work with congressional committees to identify their areas of highest priority, we remain unable to staff important requests. This limits our ability to provide timely advice to congressional committees dealing with certain issues that they have slated for oversight, including: safety concerns, such as incorporating behavior-based security programs into TSA's aviation passenger screening process, updating our 2006 study of FDA's post-market drug safety system, and reviewing state investigations of nursing home complaints; operational improvements, such as the effectiveness of Border Security checkpoints in identifying illegal aliens, technical and programmatic challenges in DOD's space radar programs, oversight of federally funded highway and transit projects, and the impact of the 2005 Bankruptcy Abuse Prevention and Consumer Protection Act; and opportunities to increase revenues or stop wasteful spending, including reducing potential overstatements of charitable deductions and curbing potential overpayments and contractor abuses in food assistance programs. Our fiscal year 2009 budget request seeks to better position us to maintain our high level of support for the Congress and better meet increasing requests for help. This request would help replenish our staffing levels at a time when almost 20 percent of all GAO staff will be eligible for retirement. 
Accordingly, our fiscal year 2009 budget request seeks funds to ensure that we have the increased staff capacity to effectively support the Congress’s agenda, cover pay and uncontrollable inflationary cost increases, and undertake critical investments, such as technology improvements. GAO is requesting budget authority of $545.5 million to support a staff level of 3,251 FTEs needed to serve the Congress. This is a fiscally prudent request of 7.5 percent over our fiscal year 2008 funding level, as illustrated in table 2. Our request includes about $538.1 million in direct appropriations and authority to use about $7.4 million in offsetting collections. This request also reflects a reduction of about $6 million in nonrecurring fiscal year 2008 costs. Our request includes funds needed to increase our staffing level by less than 5 percent to help us provide more timely responses to congressional requests for studies; enhance employee recruitment, retention, and development programs, which increase our competitiveness for a talented workforce; recognize dedicated contributions of our hardworking staff through awards and recognition programs; address critical human capital components, such as knowledge capacity building, succession planning, and staff skills and competencies; pursue critical structural and infrastructure maintenance and improvements; restore program funding levels to regain our lost purchasing power; and undertake critical initiatives to increase our productivity. Key elements of our proposed budget increase are outlined as follows.

Pay and inflationary cost increases. We are requesting funds to cover anticipated pay and inflationary cost increases resulting primarily from annual across-the-board and performance-based increases and annualization of prior fiscal year costs. These costs also include uncontrollable, inflationary increases imposed by vendors as the cost of doing business. 
GAO generally loses about 10 percent of its workforce annually to retirements and attrition. This annual loss places GAO under continual pressure to replace staff capacity and renew institutional memory. In fiscal year 2007, we were able to replace only about half of our staff loss. In fiscal year 2008, we plan to replace only staff departures. Our proposed fiscal year 2009 staffing level of 3,251 FTEs would restore our staff capacity through a modest FTE increase, which would allow us to initiate congressional requests in a timelier manner and begin reducing the backlog of pending requests.

Critical technology and infrastructure improvements. We are requesting funds to undertake critical investments that would allow us to implement technology improvements; streamline and reengineer work processes to enhance the productivity and effectiveness of our staff; make essential investments that have been deferred year after year but cannot continue to be delayed; and implement responses to changing federal conditions.

Human capital initiatives and additional legislative authorities. GAO is working with the appropriate authorization and oversight committees to make reforms that are designed to benefit our employees and to provide a means to continue to attract, retain, and reward a top-flight workforce, as well as help us improve our operations and increase administrative efficiencies. Among the requested provisions, GAO supports the adoption of a “floor guarantee” for future annual pay adjustments similar to the agreement governing 2008 payment adjustments reached with the GAO Employees Organization, IFPTE. The floor guarantee reasonably balances our commitment to performance-based pay with an appropriate degree of predictability and equity for all GAO employees. 
At the invitation of the House federal workforce subcommittee, we also have engaged in fruitful discussions about a reasonable and practical approach should the Congress decide to include a legislative provision to compensate GAO employees who did not receive the full base pay increases of 2.6 percent in 2006 and 2.4 percent in 2007. We appreciate their willingness to provide us with the necessary legal authorities to address this issue and look forward to working together with you and our oversight committee to obtain the necessary funding to cover these payments. The budget authority to cover the future impact of these payments is not reflected in this budget request. As you know, on September 19, 2007, our Band I and Band II Analysts, Auditors, Specialists, and Investigators voted to be represented by the GAO Employees Organization, IFPTE, for the purpose of bargaining with GAO management on various terms and conditions of employment. GAO management is committed to working constructively with employee union representatives to forge a positive labor-management relationship. Since September, GAO management has taken a variety of steps to ensure it is following applicable labor relations laws and has the resources in place to work effectively and productively in this new union environment. Our efforts have included delivering specialized labor-management relations training; establishing a new Workforce Relations Center to provide employee and labor relations advice and services; hiring a Workforce Relations Center director, who also serves as our chief negotiator in collective bargaining deliberations; and postponing work on several initiatives regarding our current performance and pay programs. In addition, we routinely notify union representatives of meetings that may qualify as formal discussions, so that a representative of the IFPTE can attend the meeting. 
We also regularly provide the IFPTE with information about projects involving changes to terms and conditions of employment over which the union has the right to bargain. We are pleased that GAO and the IFPTE reached a prompt agreement on 2008 pay adjustments. The agreement was overwhelmingly ratified by bargaining unit members on February 14, 2008, and we have applied the agreed-upon approach to the 2008 adjustments to all GAO staff, with the exception of the SES and Senior Level staff, regardless of whether they are represented by the union. In fiscal year 2007, we addressed many difficult issues confronting the nation, including the conflict in Iraq, domestic disaster relief and recovery, national security, and criteria for assessing lead in drinking water. For example, GAO has continued its oversight on issues directly related to the Iraq war and reconstruction, issuing 20 products in fiscal year 2007 alone—including 11 testimonies to congressional committees. These products covered timely issues such as the status of Iraqi government actions, the accountability of U.S.-funded equipment, and various contracting and security challenges. GAO’s work spans the security, political, economic, and reconstruction prongs of the U.S. national strategy in Iraq. Highlights of the outcomes of GAO work are outlined below. See appendix II for a detailed summary of GAO’s annual measures and targets. Additional information on our performance results can be found in Performance and Accountability Highlights Fiscal Year 2007 at www.gao.gov. GAO’s work in fiscal year 2007 generated $45.9 billion in financial benefits. 
These financial benefits, which resulted primarily from actions agencies and the Congress took in response to our recommendations, included about $21.1 billion resulting from changes to laws or regulations, $16.3 billion resulting from improvements to core business processes, and $8.5 billion resulting from agency actions based on our recommendations to improve public services. Many of the benefits that result from our work cannot be measured in dollar terms. During fiscal year 2007, we recorded a total of 1,354 other improvements in government resulting from GAO work. For example, in 646 instances federal agencies improved services to the public, in 634 other cases agencies improved core business processes or governmentwide reforms were advanced, and in 74 instances information we provided to the Congress resulted in statutory or regulatory changes. These actions spanned the full spectrum of national issues, from strengthened screening procedures for all VA health care practitioners to improved information security at the Securities and Exchange Commission. See table 4 for additional examples. In January 2007, we also issued our High-Risk Series: An Update, which identifies federal areas and programs at risk of fraud, waste, abuse, and mismanagement and those in need of broad-based transformations. Issued to coincide with the start of each new Congress, our high-risk list focuses on major government programs and operations that need urgent attention. Overall, this program has served to help resolve a range of serious weaknesses that involve substantial resources and provide critical services to the public. GAO added the 2010 Census as a high-risk area in March 2008. GAO’s achievements are of great service to the Congress and American taxpayers. With your support, we will be able to continue to provide the high level of performance that has come to be expected of GAO. Madam Chair, this concludes my statement. 
At this time, I would be pleased to respond to questions.

GAO exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. GAO's strategic plan framework calls on the agency to provide timely, quality service to the Congress and the federal government to: address current and emerging challenges to the well-being and financial security of the American people related to viable communities, natural resource use and environmental protection, and physical infrastructure; respond to changing security threats and the challenges of global interdependence involving the advancement of U.S. interests and global market forces; and help transform the federal government's role and how it does business to meet 21st century challenges by assessing key management challenges and program risks and the fiscal position and financing of the government. GAO also seeks to maximize its value by being a model federal agency and a world-class professional services organization.

Our employee feedback survey asks staff how often the following occurred in the last 2 months: (1) my job made good use of my skills, (2) GAO provided me with opportunities to do challenging work, and (3) in general, I was utilized effectively.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The budget authority GAO is requesting for fiscal year 2009--$545.5 million--represents a prudent 7.5 percent increase to support the Congress as it confronts a growing array of difficult challenges. GAO will continue to reward the confidence you place in us by providing a strong return on this investment. In fiscal year 2007, for example, in addition to delivering hundreds of reports and briefings to aid congressional oversight and decisionmaking, our work yielded: financial benefits, such as increased collection of delinquent taxes and civil fines, totaling $45.9 billion--a return of $94 for every dollar invested in GAO; over 1,300 other improvements in government operations spanning the full spectrum of national issues, ranging from helping Congress create a center to better locate children after disasters, to strengthening computer security over sensitive government records and assets, to encouraging more transparency over nursing home fire safety, to strengthening screening procedures for VA health care practitioners; and expert testimony at 276 congressional hearings to help Congress address a variety of issues of broad national concern, such as the conflict in Iraq and efforts to ensure drug and food safety. These financial benefits, which resulted primarily from actions agencies and the Congress took in response to our recommendations, included about $21.1 billion resulting from changes to laws or regulations, $16.3 billion resulting from improvements to core business processes, and $8.5 billion resulting from agency actions based on our recommendations to improve public services.
|
Review groups identified significant problems after the media reports concerning Walter Reed. Initial efforts to respond to these problems were primarily coordinated through the Senior Oversight Committee, and DOD and VA undertook additional efforts to respond to these problems. Following the revelations at Walter Reed, several review groups noted significant problems that may arise during servicemembers’ recovery from wounds, illnesses, and injuries. Some of these problems involve the provision of appropriate medical care, while others involve the acquisition of needed DOD and VA benefits. In 2007, one of the review groups, the President’s Commission on Care for America’s Returning Wounded Warriors—commonly referred to as the Dole-Shalala Commission—noted that recovering servicemembers depend on the effective and efficient provision of medical services and benefits across the recovery care continuum, which is separated into three phases: recovery, when wounded, ill, and injured servicemembers are stabilized and receive acute inpatient medical treatment at an MTF, VAMC, or private medical facility; rehabilitation, when recovering servicemembers with complex trauma, such as missing limbs, receive medical and rehabilitative care; and reintegration, when servicemembers either return to active duty or to the civilian community as veterans. A recovering servicemember or veteran may not experience the recovery care continuum as a linear process, and may move back and forth across the continuum over time, depending on his or her medical needs. For example, a servicemember who has transitioned to the rehabilitation phase may go back to the recovery phase if there is a need to return to an MTF to obtain acute medical care, such as a surgical procedure. DOD and VA took a number of steps to address the problems identified by the review groups that investigated the issues raised by the Walter Reed media reports. 
As an initial step, the departments established the Senior Oversight Committee to coordinate and oversee DOD’s and VA’s efforts to jointly resolve these problems. Through this committee, DOD and VA created programs and initiatives to assist recovering servicemembers and veterans as they navigate the recovery care continuum. Key efforts included the establishment of the integrated disability evaluation system (IDES), the Federal Recovery Coordination Program (FRCP), the Recovery Coordination Program (RCP), and the Interagency Program Office. (See fig. 1.) Senior Oversight Committee. The Senior Oversight Committee was responsible for ensuring that the recommendations—which totaled more than 600 from the various review groups—were properly reviewed, coordinated, implemented, and resourced. Supporting the Senior Oversight Committee was an Overarching Integrated Product Team, the membership of which included the Assistant Secretaries of Defense, the military departments’ Assistant Secretaries for Manpower and Reserve Affairs, and various senior officials from DOD and VA. This team coordinated, integrated, and synchronized the work of the eight “Lines of Action” (LOA) that focused on specific issues, including case management, disability evaluation systems, and data sharing between DOD and VA. (See fig. 2.) Each LOA included representation from DOD, including each military service, and VA. They performed the bulk of the work to address the issues and recommendations of the various review groups, including establishing plans, setting and tracking milestones, and identifying and enacting early and short-term solutions. More specifically, the LOAs were as follows: LOA 1—Disability Evaluation: Responsible for addressing efforts to reform the DOD and VA disability evaluation systems. LOA 2—Traumatic Brain Injury (TBI)/Post Traumatic Stress Disorder (PTSD): Responsible for addressing issues related to TBI/PTSD. 
LOA 3—Case Management: Responsible for addressing issues related to the care, management, and transition of recovering servicemembers from recovery to rehabilitation and reintegration. LOA 4—DOD/VA Data Sharing: Responsible for addressing issues regarding the electronic exchange of DOD and VA health records. LOA 5—Facilities: Responsible for addressing issues relating to military and VA medical facilities. LOA 6—”Clean Sheet” Review: Developed recommendations to improve care and benefits without the constraints of existing laws, regulations, organizational roles, personnel constraints, or budgets. LOA 7—Legislation and Public Affairs: Responsible for addressing legal and other issues for policy development. LOA 8—Personnel, Pay, and Financial Support: Responsible for addressing compensation and benefit issues. Some of the key efforts initiated out of the LOAs included the establishment of an integrated disability evaluation system, care coordination programs, and steps towards the electronic exchange of DOD and VA health records—a responsibility that was later assumed by the Interagency Program Office. DOD/VA Integrated Disability Evaluation System. Through LOA 1, DOD and VA jointly began to develop and pilot IDES to improve the disability evaluation process by eliminating duplication in DOD’s and VA’s separate evaluation systems and expediting the receipt of VA benefits. Specifically, IDES merges DOD’s and VA’s separate medical exams for servicemembers into a single exam process; consolidates DOD’s and VA’s separate disability rating decisions into a single VA rating decision; and provides staff to perform outreach and nonclinical case management and explain VA results and processes to servicemembers. By October 2011, DOD and VA had fully deployed IDES at 139 MTFs in the United States and several other countries. Care Coordination Programs. 
LOA 3 took the lead role in addressing problems with uncoordinated case management for recovering servicemembers and veterans through the establishment of two care coordination programs—the FRCP and the RCP. The FRCP was based on a recommendation from the Dole-Shalala Commission that a single individual—a recovery coordinator—would work with existing DOD and VA case managers to ensure that servicemembers had the resources needed for their care. LOA 3 designed the FRCP to assist “severely” wounded, ill, and injured OEF and OIF servicemembers, veterans, and their families with access to care, services, and benefits. This population includes servicemembers and veterans who suffer from traumatic brain injuries, amputations, burns, spinal cord injuries, visual impairment, and PTSD. The program uses federal recovery coordinators to monitor and coordinate clinical services, including facilitating and coordinating medical appointments, and nonclinical services, such as providing assistance with obtaining financial benefits or special accommodations, needed by program enrollees and their families. Federal recovery coordinators, who are senior-level registered nurses and licensed clinical social workers, were intended to serve as the single point of contact among all of the case managers of DOD, VA, and other governmental and nongovernmental programs that provide services directly to servicemembers and veterans. Although the FRCP was designed as a joint program, it is administered by VA, and the federal recovery coordinators are VA employees. LOA 3 subsequently developed the RCP in response to a requirement in the NDAA 2008. The RCP is a DOD-specific program that uses recovery care coordinators to coordinate nonclinical services and resources for “seriously” wounded, ill, and injured servicemembers who may return to active duty, unlike those categorized as “severely” wounded, ill, and injured, who are not likely to return to duty and would be served by the FRCP. 
The military services were responsible for separately implementing the RCP through each of their existing wounded warrior programs as a means of providing care coordination services to program enrollees. Electronic Sharing of Health Records. LOA 4 was focused on addressing issues related to the electronic exchange of DOD and VA health records. However, this effort was superseded by the NDAA 2008, which required the establishment of the Interagency Program Office to serve as a single point of accountability for both departments in the development and implementation of interoperable electronic health records. Although DOD and VA retained the responsibility for the development and management of the information technology systems, the Interagency Program Office was responsible for ensuring the implementation of an electronic health records system or capabilities that allowed for the complete sharing of health care information for the provision of clinical care. In October 2011, the Interagency Program Office also became accountable for DOD’s and VA’s work on developing an integrated electronic health records system that both departments would use for their beneficiaries. In addition to the Senior Oversight Committee’s efforts, DOD, its military services, and VA developed or modified a number of programs and initiatives to assist recovering servicemembers and veterans in navigating the recovery care continuum. Military Services’ Wounded Warrior Programs. The military services’ wounded warrior programs were established to assist recovering servicemembers during their recovery, rehabilitation, and initial reintegration back to active duty or to civilian life. Most of these programs provide nonclinical case management services to the recovering servicemembers; that is, they help to resolve issues related to finances, benefits and compensation, administrative and personnel paperwork, housing, and transportation. 
In addition, the wounded warrior programs serve as the central point of access to other types of services or resources that support recovering servicemembers, such as clinical case management, care coordination, and career, education, and readiness services. (See table 1.) If a wounded warrior program does not directly provide a service or resource, it can facilitate servicemembers’ access to that service or resource. Although the wounded warrior programs were intended mainly to provide services to recovering servicemembers, all but one of the programs continue to assist individuals after they have transitioned to veteran status. VA Transition Programs. VA’s Liaison for Healthcare Program and its OEF/OIF/OND Care Management Program assist recovering servicemembers with transitioning from DOD’s to VA’s health care system. As of August 2012, the Liaison for Healthcare Program employed 33 liaisons at 18 MTFs nationwide. After a DOD or VA treatment team determines that a recovering servicemember is medically ready to transition to a VAMC, a VA liaison facilitates the transfer from an MTF to the VAMC closest to the servicemember’s home or to the most appropriate location for the specialized services his or her medical condition requires. VA liaisons follow recovering servicemembers as they enter the VA health care system, ensuring that their first VA appointments are scheduled. Thereafter, the VA OEF/OIF/OND Care Management Program team assigned to each recovering individual coordinates the individual’s care at the VAMC and provides ongoing follow-up. Each VAMC has an OEF/OIF/OND Care Management Program team in place to coordinate patient care activities. 
Recovering servicemembers’ access to case management and care coordination programs has been impeded by two main factors—(1) the limited ability to identify and refer those servicemembers who could benefit from enrollment in the programs along with officials’ reluctance to refer them, and (2) variations in eligibility criteria among the military services’ wounded warrior programs, resulting in access disparities for similarly situated recovering servicemembers. We found that referrals may be lacking or delayed (1) from military service unit commanders to wounded warrior programs; (2) from wounded warrior programs to the FRCP; and (3) for certain groups of servicemembers, such as those with “invisible injuries” as well as members of the National Guard and Reserve. Referral to the military services’ wounded warrior programs. The military services’ wounded warrior programs primarily use referrals to identify recovering servicemembers that might be eligible for enrollment. However, we found that the methods for referral, which include casualty reports and direct referrals, are imprecise, such that all servicemembers who could benefit from being enrolled in these programs are not necessarily identified and referred. Officials from three wounded warrior programs told us that casualty reports are the primary method for receiving referrals. Casualty reports are initial alerts to military personnel, including wounded warrior program officials, that a servicemember has been injured. These reports can be initiated by unit commands or other military personnel as a method of referral to the wounded warrior programs. However, wounded warrior program officials from four wounded warrior programs told us that casualty reports are not created after every injury or may be created late in a servicemember’s recovery. 
In particular, some of these officials said that military service unit command staff may delay or not create casualty reports for servicemembers not injured in combat, such as for injuries that occur stateside or while on leave, because servicemembers’ units may not find out about such incidents immediately. We found that referrals by unit command staff are most likely because they have the most knowledge about servicemembers’ conditions, injuries, and treatment locations. However, some servicemembers told us that they experienced delays in their recovery as a result of staying in their units and not being referred to a wounded warrior program earlier. For example, a recovering servicemember told us that despite having been recently discharged from a hospital for arm injuries, he was required to operate a floor buffing machine in his unit, which was difficult for him as a result of his injuries. He did not receive rehabilitative treatment for his injuries until he was assigned to a wounded warrior program. Furthermore, we found that most of the military services’ wounded warrior programs do not always track the number of referrals to their programs, including data on whether or not servicemembers referred to the programs were actually enrolled. (See table 14 in app. I for additional information about referral data.) Without this information, it is not clear whether all those who could benefit from a wounded warrior program are being enrolled. Referral to the FRCP. In addition to problems with referrals to wounded warrior programs, wounded warrior program officials sometimes delay or fail to make referrals of potentially eligible servicemembers to the FRCP, which coordinates care across the departments and throughout the recovery care continuum. As we have previously reported, the FRCP relies predominantly on referrals from other sources, including wounded warrior program officials and clinical treatment teams, because it does not have a systematic way to identify potential enrollees. 
Referrals to the FRCP are important because federal recovery coordinators are intended to provide continuity of care throughout servicemembers’ recovery, starting with their initial treatment at an MTF and throughout the recovery care continuum. They can also assist in facilitating recovering servicemembers’ access to VA services and benefits while servicemembers are still on active duty, according to VA officials. However, we found that officials from wounded warrior programs view the jointly created and established FRCP as a VA program and, therefore, delay their referrals until it is certain that the servicemember will become a veteran. Referrals for certain servicemember populations. We found that certain servicemember populations may be at greater risk for not being identified for DOD and VA case management and care coordination programs. Specifically, according to wounded warrior program officials, servicemembers who have undiagnosed, “invisible” wounds, such as PTSD and TBI, may be at greater risk of not being referred to a wounded warrior program or the FRCP until it becomes apparent that the servicemember cannot be deployed. For example, a servicemember told us that although he was experiencing anxiety every time he put on his uniform, it was not until he had a severe anxiety attack, as a result of his PTSD, that he was hospitalized and then referred to a wounded warrior program. According to officials representing military advocacy organizations, National Guard and Reserve servicemembers may be particularly reluctant to identify injuries and illnesses because they are eager to return home and do not want to be delayed at the installation for an evaluation of any conditions they may have. 
However, these officials said that when these servicemembers have been deactivated and problems manifest themselves later on, they may experience difficulties establishing that their injuries or illnesses are a result of their service in the military, which could make it difficult for them to access services and programs provided by DOD and VA. Because of variations in eligibility criteria among the military services’ wounded warrior programs, DOD cannot assure that similarly situated servicemembers have equitable access to these programs, leading to disparities in the level of assistance provided across the military services. (See table 2.) For example, servicemembers can only be eligible for the Air Force Wounded Warrior Program if they have a combat-related injury or illness, whereas servicemembers with combat or non-combat-related injuries or illnesses can be eligible for the Army’s Warrior Transition Units. As a result of these differences in eligibility criteria, recovering servicemembers in one military service may qualify for entry in their wounded warrior program while similarly situated servicemembers in another military service do not have access to their program. Consequently, according to wounded warrior program officials, some recovering servicemembers do not have access to services that would otherwise be available to them, including the RCP and Operation Warfighter. Additionally, because wounded warrior programs facilitate access to other programs and services, including the VA Liaison for Healthcare Program and the Warrior Athlete Reconditioning Program, not being eligible for a particular wounded warrior program could preclude a servicemember from receiving the services of these other programs. 
Military coalition officials who advocate for recovering servicemembers and their families told us the lack of standardization across similar programs, such as the military services’ wounded warrior programs, is one of the main reasons recovering servicemembers “fall through the cracks” or do not get the services that they need when they are navigating the recovery care continuum. DOD is aware of inconsistencies in eligibility criteria among the military services’ wounded warrior programs and the potential for disparities in the provision of services and assistance that may result. However, DOD has not taken action to correct this, despite the identification of this issue as a potential problem for recovering servicemembers by a congressionally mandated DOD task force. Specifically, in its 2011 annual report to congressional committees, the Recovering Warrior Task Force noted that as a result of differences in eligibility criteria among the military services, certain subpopulations of recovering servicemembers may be at a disadvantage. In response to this report, DOD stated that although there are no DOD-wide criteria for entry into wounded warrior programs, the individual military services already have policies in place as a result of the flexibility given to them by DOD. Although IDES provides improved timeliness over the separate DOD and VA disability evaluation systems, processing times have continued to increase since its implementation in November 2007, resulting in frustration and uncertainty for servicemembers going through the process. In a May 2012 hearing, we testified that the average number of days for servicemembers to complete the IDES process and receive VA benefits increased from 283 in fiscal year 2008 to 394 in fiscal year 2011 for active duty cases (compared to the goal of 295 days) and from 297 to 420 for reserve cases (compared to the goal of 305 days).
As we have previously testified, other reasons that could contribute to the increase in IDES processing times include large caseloads and insufficient staff to complete a stage of IDES in a timely manner. Lengthy processing times also leave servicemembers uncertain about the status of their case. For example, a servicemember told us that after going through the IDES process, receiving a rating, and filing an appeal over a year ago, he still did not know the status of his case, negatively affecting his ability to plan for his future. Similarly, a wounded warrior program official also told us that her program has had several servicemembers lose job opportunities because they applied for positions thinking that they would be through the IDES process by a certain date, but when that date was pushed back, the employers rescinded their offers. Wounded warrior program officials from some of the sites we visited told us that extended waiting periods resulting from the disability process also may lead to some recovering servicemembers engaging in negative behavior, including drug use. Wounded warrior program officials told us that after waiting for so long in the wounded warrior barracks due to the lengthy disability process, servicemembers can get depressed, resist or just stop going to medical appointments, and stop working on their recovery. Similarly, the DOD Inspector General has reported that lengthy IDES processing times have contributed to a negative and even counterproductive environment, which was not conducive to servicemembers’ recovery and transition. To prevent these problems, we found that two wounded warrior programs require recovering servicemembers to participate in programs such as the Warrior Athlete Reconditioning Program and Operation Warfighter.
A recovering servicemember told us that soon after being assigned to the wounded warrior program, he was referred to the Warrior Athlete Reconditioning Program, which gave him something to do other than “sitting around.” Another recovering servicemember told us that the Warrior Athlete Reconditioning Program is an effective motivator for recovery. Conversely, servicemembers themselves may take actions that affect their own processing times in IDES and, therefore, their length of stay in a wounded warrior program. We found that some servicemembers may appeal their disability decisions to prolong their own recovery and transition out of the military. According to wounded warrior program officials from some of the sites we visited, some servicemembers resist their transfer out of the wounded warrior program and the military because they want to continue to take advantage of the opportunities and services available to them, including the financial security of a regular paycheck. For example, a wounded warrior program official and a VA official told us that some servicemembers will purposefully miss appointments to delay the IDES process because they feel that they are not ready to leave the program. The departments have not yet developed sufficient capability to electronically share servicemembers’ and veterans’ complete health records, which can delay the receipt of care and benefits for recovering servicemembers and veterans. As we have previously reported, for over a decade DOD and VA have undertaken several efforts to improve the ability of their information technology systems to electronically share health records. For example, the Federal Health Information Exchange, which was started in 2001 and completed in 2004, allows DOD to electronically transfer servicemembers’ health information to VA when they leave active duty.
In addition, the departments’ Bidirectional Health Information Exchange was established in 2004 to allow clinicians in both departments to view limited health information on patients who receive care from both departments. More recently, the departments have undertaken two new joint initiatives, the Virtual Lifetime Electronic Record and an integrated electronic health records system, in an effort to increase electronic health record interoperability and modernize their systems. We found that although DOD and VA care providers were expected to have access to some electronic health record information across the departments, the DOD and VA care providers that we spoke to still did not have the ability to electronically share complete health records for recovering servicemembers who were transferring between DOD’s and VA’s health care systems, and therefore they had to use other methods. For example, wounded warrior program and VA officials told us that they had to resort to copying and faxing recovering servicemembers’ health records to VAMC staff in preparation for a servicemember’s transition from DOD’s to VA’s health care system because there was not an automatic, electronic way to transfer them. In addition to copying and faxing health records, according to VA officials we spoke with, DOD and VA staff may hold a video-teleconference between the transferring MTF and receiving VA health care facilities to exchange information. In addition, wounded warrior program and VA officials who help servicemembers transition from DOD to VA told us that they only share with VA facilities the health records necessary for the treatment of a recovering servicemember’s current condition. As a result, servicemembers’ and veterans’ complete health records are not always shared between departments when transferring facilities, and ultimately, the responsibility to collect and provide a complete health record to the VA facility can fall on the recovering servicemember and veteran. 
A VA official told us that this process can be complicated because DOD separately maintains servicemembers’ inpatient, outpatient, and behavioral health records and does not have a single database that can identify all of the medical facilities where a servicemember received treatment. Further, according to VA and DOD officials, delaying the collection and assembly of a servicemember’s complete medical history until the start of the disability process could result in servicemembers having to be reexamined when they are demobilized, needing to establish that their injuries were connected to their time in the military, thus possibly delaying a servicemember’s or veteran’s receipt of VA benefits. Both departments have needed to create programs and provide staff to assist recovering servicemembers during their transition from a DOD MTF to a VAMC. For example, VA Liaisons and DOD nurse case managers help recovering servicemembers transition from DOD to VA by assembling their health records and sharing them with the VAMC where the servicemember will be receiving treatment. According to DOD and VA staff that assist servicemembers in their transition from one system to another, DOD nurse case managers at installations that do not have VA Liaisons do not always have the same knowledge of VA services and benefits, and may not be informed of the appropriate referral methods or contacts used by VA Liaisons to provide a servicemember with a seamless transition to a VAMC. A DOD official told us that at locations where the VA Liaison program is not available, the transition process for recovering servicemembers from DOD to VA is more difficult. This official understood how to properly transfer servicemembers’ records from the DOD facility to the receiving VA facility only because of past VA experience. The lack of leadership and program oversight has limited DOD’s and VA’s ability to effectively manage programs created to serve recovering servicemembers and veterans. 
Two bodies established to oversee these programs, the Senior Oversight Committee and the Office of Wounded Warrior Care and Transition Policy (WWCTP), lacked consistent leadership attention and oversight capabilities. In addition, DOD does not have a central office that oversees or collects common data on the military services’ wounded warrior programs. The Joint Executive Council was established by law in November 2003 to provide senior leadership for collaboration and resource sharing between DOD and VA. Through a joint strategic planning process, the Joint Executive Council recommends to the Secretaries the strategic direction for the joint coordination and sharing efforts between the two departments and oversees the implementation of those efforts. The Senior Oversight Committee’s early effectiveness was marked by high-level leadership participation without substitution of lower-ranking officials, rapid policy development and quick decision making, and rigorous monitoring to hold the military services and the two departments accountable for needed actions. Sustaining the Senior Oversight Committee’s original momentum over time became difficult, and its waning influence and effectiveness became evident in a number of ways: Starting in December 2008, the Senior Oversight Committee experienced leadership changes, including the departure of its cochairs, the Deputy Secretaries, as well as turnover in some of its key staff. According to a former Senior Oversight Committee executive, the personal commitment and strong relationship between the Deputy Secretaries who initially cochaired the Senior Oversight Committee served as a unifying and confidence building force that was not replicated by subsequent leadership, while leadership turnover in the DOD offices supporting the Senior Oversight Committee negatively impacted its ability to function effectively.
As we have previously reported, the Senior Oversight Committee also began to encounter challenges when DOD “disrupted the unity of command” by changing the organizational structure of the committee and realigning and incorporating the committee’s staff and responsibilities into existing or newly created DOD and VA offices, such as WWCTP. Officials formerly involved with the committee told us that the new staffing arrangement did not adequately support the committee’s efforts, and VA did not provide full-time staff members to support the committee, as it had in the past. Later, in October 2008, VA established the Office of VA/DOD Collaboration Services, and VA supported Senior Oversight Committee efforts, along with broader collaboration efforts, through this separate office. The committee began meeting less frequently. For example, in contrast to weekly meetings held during its initial year of operation, in fiscal year 2011, the committee met less than 11 hours in total. Top DOD leadership no longer consistently attended Senior Oversight Committee meetings. According to a former Senior Oversight Committee official, the second Deputy Secretary of Defense to cochair the committee sent the Deputy Undersecretary of Defense for Personnel and Readiness to represent DOD in his place. The Senior Oversight Committee no longer made relatively quick decisions. According to former Senior Oversight Committee executive and support staff, frequent substitutions by lower-ranking officials at Senior Oversight Committee meetings no longer allowed for quick decision making and transformed Senior Oversight Committee meetings into informational briefings. The Senior Oversight Committee no longer tracked or monitored progress of its policy initiatives or assigned tasks.
According to a former LOA cochair and a cognizant support staff member, by 2011 the Senior Oversight Committee was no longer routinely using a tracking mechanism to hold the departments accountable for completing appointed tasks. Later that year, the Recovering Warrior Task Force reported that the Senior Oversight Committee no longer had a formal mechanism for assessing the status of the committee’s initiatives and goals, leaving no way to determine whether initiatives or goals had been partially or fully implemented or met. In its September 2011 report, the Recovering Warrior Task Force recommended combining the Senior Oversight Committee and Joint Executive Council to improve effectiveness and reduce redundancies as both entities had similar membership and operating structures. In January 2012, the Joint Executive Council cochairs agreed to consolidate the two groups. The Senior Oversight Committee’s working groups for care coordination and the integrated disability evaluation system were realigned within the Joint Executive Council, and a Wounded, Ill, and Injured Council was established under the Joint Executive Council to oversee emerging issues for recovering servicemembers and veterans. Whether the Joint Executive Council can effectively address the issues once managed by the Senior Oversight Committee has yet to be seen. Several DOD and VA officials expressed concern to us about the ability of the Joint Executive Council to focus on rapid, short-term policy decision making rather than the longer-term strategic planning role that it has traditionally played. For example, according to a DOD official, historically, the Joint Executive Council has not been able to drive policy decision making, and therefore, issues that should have been decided by the Joint Executive Council were taken directly to the Secretaries for resolution, raising doubts about the ability of the Joint Executive Council to function effectively. 
A former Senior Oversight Committee executive noted that the Joint Executive Council cochairs are not of equivalent rank, another challenge that may serve as a barrier to the council’s ability to make decisions and drive policy changes. Specifically, the VA cochair is the Deputy Secretary, who has control over all relevant offices within VA, while the DOD cochair is the Deputy Undersecretary of Defense for Personnel and Readiness, whose responsibilities include establishing health and benefit policies affecting recovering servicemembers and directing the military services to comply with such policies, but who lacks authority to enforce the military services’ implementation of these policies. The Recovering Warrior Task Force also cited concerns about the rank of the DOD cochair of the Joint Executive Council, stating that a higher level of leadership is needed to sustain departmental attention on key initiatives such as IDES and electronic health records. Furthermore, as of August 2012, DOD officials told us that the Joint Executive Council is operating under the original procedures that were in place prior to the entities merging. As a result, it is unclear at this time how the Joint Executive Council will provide oversight and accountability for issues once addressed by the Senior Oversight Committee. In 2008, WWCTP became responsible for overseeing the RCP among other programs that provide assistance to recovering servicemembers. However, WWCTP’s ability to oversee the RCP, including its ability to monitor program performance and ensure compliance with DOD policy, is limited by its lack of operational authority, such as budget and tasking authority, over the military services that implement the program. According to WWCTP officials, this lack of operational authority challenges WWCTP’s ability to direct the military services on their implementation of the program.
For example, although WWCTP has been responsible for RCP oversight since 2008, the office was not able to collect basic program data, such as monthly enrollment numbers, on a consistent basis until October 2011. According to a WWCTP official, although WWCTP requested monthly data submissions from the military services, the information was provided on an ad hoc basis; sometimes the services would submit it, and other times they would not. Data-collection efforts still remain a challenge for WWCTP. For example, the Army’s Wounded Warrior Program, which serves as the Army’s care coordination program, agrees to share only partial data with WWCTP, arguing that the Army is only obligated to share data on servicemembers served by WWCTP-contracted personnel. Getting the military services to implement consistent care coordination policies also poses a challenge for WWCTP. WWCTP officials said that while WWCTP can develop policy to guide the military services, the military services may interpret that policy and implement their programs differently. Consequently, some DOD officials assert that the military services have not consistently implemented the RCP in accordance with DOD policy—an observation that is shared by the Recovering Warrior Task Force. For example, DOD policy requires that care coordination be provided to those who are “seriously” and “severely” wounded, ill, and injured, but the Army only provides care coordination to recovering servicemembers who are “severely” wounded, ill, and injured. As a result, some servicemembers who could benefit from having someone coordinate their care and benefits as they navigate the recovery care continuum do not have access to those services. Some WWCTP officials with whom we spoke expressed the view that the military services have been inconsistent in their cooperation with WWCTP, with cooperation being better on issues that represent priorities of top leadership.
Specifically, WWCTP officials told us that top DOD leadership has not been pressured to resolve lingering care coordination issues as much as other more visible issues, such as IDES and electronic medical record interoperability problems confronting the departments. Consequently, WWCTP officials said that the military services cooperate with WWCTP’s efforts to oversee IDES and to monitor whether the military services achieve their goals for timely completion of the IDES process. Although these goals have not consistently been achieved, officials told us that military service cooperation has not been an impediment to overseeing IDES as it has been for overseeing care coordination. Conversely, the military services have not been as inclined to cooperate with WWCTP on its oversight of the RCP relative to these other issues. In addition to limited operational authority over the military services, turnover in leadership and other staffing changes have also limited WWCTP’s ability to provide consistent direction and oversight for the RCP, according to WWCTP officials. Specifically: Three different DOD officials have led WWCTP since its inception in 2008. According to WWCTP staff, each of these officials had different visions and priorities for the office, which led to disruptions in RCP oversight. For example, a major oversight initiative—to collect satisfaction survey data across the RCP—was abandoned when a new official was appointed. In addition, the RCP has been led by three different directors, with the most recent director leaving in June 2012. From September through December 2011, WWCTP’s contracted staffing was temporarily reduced by 70 percent when a contract expired and was not immediately renewed, according to DOD. Staff reductions primarily impacted WWCTP’s ability to oversee the RCP, since many RCP support staff members were lost. For example, according to a WWCTP official, the office was no longer able to make monitoring visits to the RCP program sites.
However, in July 2012 a contract was awarded that allowed WWCTP to engage additional staff to support the RCP, according to a WWCTP official. In June 2012, DOD changed the name of the WWCTP office to the Office of Warrior Care Policy and moved it under the Assistant Secretary of Defense for Health Affairs. According to a DOD official, the change was made as part of a realignment of DOD’s organizational structure in response to statutory requirements. An official in Health Affairs said that the move will be beneficial because it will provide greater access to resources, including human resources and information technology, among others. However, it is too early to determine the full effect of this change. There is currently no central office or authority that oversees or collects common data on the military services’ wounded warrior programs, preventing DOD from both assessing how well the programs are working across the department and leveraging the strengths of each program by sharing proven best practices across the military services. Each of the military service Secretaries created their own wounded warrior programs to meet their military service’s unique needs. Because each service developed its own policy to govern its wounded warrior programs and no central, unified DOD policy exists to govern these programs, no central DOD office—such as WWCTP—may direct how these programs operate. This lack of central oversight over the wounded warrior programs has been one of the main reasons for the large discrepancies between these programs. The 2011 Recovering Warrior Task Force report recommended that the Secretary of Defense enforce the existing policy guidance regarding the Army’s and Marines’ wounded warrior transition units’ entrance criteria. 
However, in its response to this recommendation, DOD supported the military service Secretaries’ discretion in establishing their own policies in this regard, saying that there is no central DOD policy on the establishment of transition units and entrance criteria, and that the policies were established by the Secretaries for their specific populations. While no common data are collected on the performance of wounded warrior programs across the military services, each individual program has initiated internal efforts to collect and analyze performance data. The type and quality of data vary by program, however. For example, the largest of the wounded warrior programs, the Army Warrior Care and Transition Program, has collected wounded warrior program performance survey data on a continuous basis since March 2007 and has developed outcome measures to determine the impact of its services. However, smaller programs, such as the Air Force Wounded Warrior Program and the United States Special Operations Command’s Care Coalition have measured baseline program satisfaction levels, but they do not have additional years of survey data to monitor any changes over time. (See table 3 for information about the types of performance data collected by each of the wounded warrior programs.) Some DOD officials with whom we spoke questioned why common measures have not been developed. For example, a DOD official in charge of wounded warrior care at an MTF suggested developing a measurement tool to determine what aspects of the programs help recovering servicemembers. Another DOD official involved with wounded warrior program performance measurement commented that it is common practice for DOD to share performance measurement practices and standard metrics across the military services. 
In September 2011, citing wide disparity across the military services in their implementation of wounded warrior programs and policies, the Recovering Warrior Task Force made four recommendations for creating common standards to ensure parity in the programs and services provided to recovering servicemembers across DOD. For example, the first recommendation called for a common nomenclature, or consistent definitions, to be used in DOD policy to identify recovering servicemembers who may require and be eligible for assistance. The task force concluded that common definitions are needed to promote consistent levels of care among the military services and would better enable DOD to compare across programs and identify best practices. In its response to the task force, DOD acknowledged that some of these recommendations were valid and that DOD should take actions to address them. However, at the time of the Recovering Warrior Task Force’s 2012 report, these recommendations had not been implemented, and the task force is continuing to follow DOD’s efforts to implement them. Moreover, even if DOD decided to take some actions in this regard, it is unclear who would have responsibility for addressing them, since there is no central oversight office or authority for these programs. In addition to problems with leadership and oversight of care coordination and case management programs, DOD and VA have a longstanding track record of insufficient staffing to address delays in disability determinations and insufficient staffing and control over the budget to oversee the development of systems with improved capabilities for electronically sharing health records. Insufficient staffing across both departments has affected DOD’s and VA’s ability to reduce disability determination delays and meet their IDES timeliness goals.
We raised concerns about staffing in 2010, when we reported that DOD and VA did not sufficiently staff many key positions in the IDES process, including DOD board liaisons, who counsel servicemembers and ensure that documentation submitted for consideration is complete and accurate, and medical evaluation board physicians, who review medical and service records to identify conditions that limit a servicemember’s ability to serve in the military. In 2012, we continued to report evidence of staffing shortages, including high caseloads for DOD board liaisons and VA case managers as well as insufficient numbers of physicians to write narrative summaries needed to complete the medical evaluation board stage of the IDES process in a timely manner. Some recovering servicemembers told us they do not receive sufficient support from their DOD board liaisons, and that there are not enough liaisons to efficiently meet the needs of all the recovering servicemembers going through the IDES process. Delays in the disability determination process are expected to continue. VA anticipates a much larger caseload of all disability and other benefit claims in the near future, not just those claims associated with IDES cases. Specifically, a high-level VA official told us that new laws, such as the Veterans Opportunity to Work Act, will encourage all transitioning servicemembers—not just those going through the IDES process—to claim VA benefits. This official also told us that DOD and VA have a much larger problem to address as a surge of 300,000 servicemembers begins to transition into the VA system as troops return home from Iraq and Afghanistan. Without adequate planning and adequate resources, these servicemembers may experience much longer processing times in the disability benefits systems. DOD and VA are working to address staffing challenges in some of the IDES processes that are most delayed.
We have previously reported that the Army, for example, is in the midst of a major hiring initiative to increase staffing dedicated to its medical evaluation boards, which will include additional DOD board liaison and medical evaluation board physician positions. Additionally, VA officials said that the agency has added staffing to its IDES rating sites to handle the demand for preliminary disability ratings, rating reconsiderations, and final benefit decisions, which has increased the number of preliminary VA ratings completed and slightly improved processing times. But it is too early to tell the extent to which VA’s efforts will continue to improve processing times. The Interagency Program Office was established by law as a single point of accountability for joint DOD and VA efforts to implement fully interoperable electronic health record systems or capabilities, but this office was not given sufficient staffing or budget control by DOD and VA to effectively facilitate the departments’ efforts. According to an Interagency Program Office official, the office was never fully staffed and was challenged by a high degree of turnover in staffing and leadership that served in a temporary or acting capacity. The Interagency Program Office’s initial charter limited its ability to exercise authority over DOD and VA. Specifically, the charter stated that control of the budget, contracts, and technical development remained wholly within the two departments’ program offices. The charter conveyed no authority in these areas to the Interagency Program Office. As a former Interagency Program Office official testified in July 2011, the office lacked the control of budgeting and contracting necessary to achieve its intended purpose, and without this, it could not sufficiently oversee the departments’ efforts and compliance with the requirements in NDAA 2008.
As a result, each department continued to pursue separate strategies, rather than a unified interoperable approach, according to this former official. See Pub. L. No. 110-181, § 1635, 122 Stat. 3, 460-63 (2008). The Interagency Program Office was rechartered in October 2011 and provided an expanded staff and new authorities under the charter, including control over the budget. According to Interagency Program Office officials, when hiring under the new charter is completed, the office will have a staff of 236 personnel, more than seven times the number of staff originally allotted to the office by DOD and VA. In addition, the charter provides the Interagency Program Office with the authority to lead, oversee, and manage budget and contracting for electronic health record sharing efforts. According to Interagency Program Office officials, budget control is the essential component for overseeing progress and ensuring accountability for the departments’ efforts. With the enhanced charter, as well as plans for an expanded staff to oversee the implementation of a single joint electronic health record system, the Interagency Program Office will have more resources to draw upon and support department interoperability initiatives. However, it is still too early to determine whether this investment of resources will be sufficient to meet the office’s goals for 2017. Despite the provision of additional resources, Interagency Program Office officials told us that as of July 2012, the office is staffed at approximately 48 percent and that hiring additional staff in time to meet appointed implementation deadlines remains one of its biggest challenges. According to DOD and VA officials, the departments have identified 54 joint capabilities that will be implemented by the end of fiscal year 2017. Doing so will depend upon achieving cooperation between the departments—which has been elusive for many years—as well as with the military services.
With the creation of the RCP, the FRCP was no longer the single point of contact with respect to servicemembers’ care coordination, and early on, there were concerns and some confusion about how the FRCP and the RCP would align without creating overlapping and duplicative services. Shortly after the RCP was established, DOD sent a report to congressional committees outlining a medical category assignment process that was based on the severity of each servicemember’s medical condition, along with input from the servicemember and his or her unit commander, to determine whether servicemembers would be directed either to the FRCP or to the RCP for care coordination services. In concept, the medical category assignment process would have resulted in wounded, ill, and injured servicemembers being assigned to one of three categories: “mild,” “serious,” or “severe.” Under this approach, the FRCP would provide care coordination services for “severely” wounded, ill, and injured servicemembers and the RCP would serve those who were “seriously” wounded, ill, and injured. (See app. II for additional information on the intended medical category assignment process for DOD and VA care coordination programs.) Despite DOD’s attempt to define the populations served by the FRCP and the RCP, neither the military services’ wounded warrior programs, which implement the RCP, nor VA, which administers the FRCP, implemented DOD’s assignment process. Instead, these programs expanded their enrollment to include both “seriously” and “severely” recovering servicemembers and veterans, which resulted in both programs serving the same populations, thereby setting up the likelihood of overlap and duplication of services. As we have previously reported, this duplication issue is compounded by the numerous other programs that also provide services to recovering servicemembers and veterans and have overlapping roles as well.
It is not uncommon for recovering servicemembers to be enrolled in more than one case management or care coordination program and end up with multiple care coordinators and case managers—each of whom develops a different care plan for the same servicemember. The care plans may even conflict with one another, which could conceivably adversely affect the servicemember’s recovery process. In fact, in the course of previous work, we found instances where inadequate information exchange and poor coordination between these programs resulted not only in duplication of effort and overlap of services, but also in confusion and frustration for servicemembers and their families. In addition, DOD and VA officials acknowledge that the multiplicity of care coordination and case management programs causes confusion even among members of care coordination teams. In October 2011, we recommended that the Secretaries of Defense and Veterans Affairs direct the Senior Oversight Committee to expeditiously develop and implement a plan to strengthen functional integration across all DOD and VA care coordination and case management programs to reduce redundancy and overlap. Although DOD and VA have not yet aligned care coordination policy for the FRCP and RCP, we have found indications that care coordinators and case managers at some locations have been cooperating to some degree and trying to work more closely with one another. In the course of our visits to 11 DOD and VA facilities during this review, we found that care coordinators and case managers in many locations had attempted—with some success—to clarify their roles and to limit the degree of overlap and duplication in the services they provide to recovering servicemembers and veterans. However, such local attempts to improve the degree of cooperation and coordination among the programs are not systemic and depend on individual personalities and circumstances.
They may not be sustainable without agreement by DOD and VA and the alignment of policy governing case management and care coordination programs. Another critical issue on which DOD and VA have disagreed pertains to the stage in a servicemember’s recovery when the FRCP should get involved in the coordination of services. Because the FRCP depends on referrals from other programs as a basis for becoming involved with recovering servicemembers, this can be a significant issue. Currently, neither DOD nor VA policy clearly defines when referrals are to be made; consequently, most wounded warrior programs delay referrals to the FRCP until it becomes clear that the servicemember will be separated from the military. Senior DOD officials stated that wounded warrior program officials justify this practice on the basis that referring a recently wounded servicemember to the FRCP—a VA-operated program—sends a negative message to a recovering servicemember that his or her military career has ended, even though the FRCP was designed as a joint program. Additionally, the belief among the military services that they should “take care of their own” contributes to the reluctance to involve the FRCP. For its part, VA maintains that its point of engagement should be in the early stage of medical treatment to build rapport and trust and to begin coordinating the services needed by severely wounded servicemembers. Despite multiple efforts over the last several years to align their care coordination and case management programs, DOD and VA have failed to implement lasting measures to resolve underlying problems concerning the alignment of roles and responsibilities of the FRCP, RCP, and case management programs. Previous attempts include the following: December 2010. The Senior Oversight Committee directed its case management work group to perform a feasibility study of recommendations on the governance, roles, and mission of DOD and VA care coordination.
However, no action was taken by the committee and care coordination was subsequently removed from the Senior Oversight Committee’s agenda as other issues were given higher priority. March 2011. WWCTP sponsored a joint summit that included officials from VA and the military services to review DOD and VA care coordination issues. Although this collaboration resulted in the development of five recommendations related to care coordination, no agreement was reached by the departments to jointly implement them. A DOD participant told us that VA did not agree with the recommendations, and a VA official involved in the summit concurred, alleging that the recommendations appeared to suggest eliminating overlap and duplication between the FRCP and RCP by ending the FRCP. May 2011. Concerned with overlap and duplication between the DOD and VA care coordination programs, the House Committee on Veterans Affairs, Subcommittee on Health directed the Deputy Secretaries of DOD and VA to provide an analysis of how the FRCP and RCP could be integrated under a “single umbrella” by June 20, 2011. In the absence of such a response, the subcommittee scheduled a congressional hearing and requested that options for addressing this issue be presented. Following the notification of the hearing, the departments developed a joint letter and submitted it to the subcommittee in September 2011. This letter, however, did not identify or outline options for aligning the FRCP and the RCP. In a hearing held by the subcommittee in early October 2011, neither VA nor DOD outlined definitive plans to address this issue. September 2011. The Recovering Warrior Task Force issued the first of four annual reports that included 21 recommendations, including a recommendation that the roles of care coordinators be clarified. In DOD’s official response to congressional committees, the Under Secretary of Defense stated that the department would implement the Recovering Warrior Task Force’s recommendations. 
However, a member of the Recovering Warrior Task Force stated that the task force concluded that in most cases DOD has not made significant changes to its programs to achieve the outcomes intended by the recommendations. In August 2012, the Recovering Warrior Task Force reported that DOD had fully implemented only 2 of the 21 recommendations. However, a DOD official whose office is responsible for coordinating DOD’s responses to the task force’s recommendations stated that DOD is in the process of addressing several more of the 2011 recommendations. October 2011–April 2012. VA declined DOD’s requests to discuss care coordination and case management policy issues during this period, according to DOD and VA senior officials, because VA had established its own task force to conduct an internal review of its care coordination and case management activities, including the FRCP. After completing its initial assessment, VA briefed WWCTP officials on the process it was using to review its care coordination and case management activities, but chose not to discuss realignment of the FRCP and RCP at that time, according to DOD officials who attended this briefing. Instead, the VA Chief of Staff said that he approached the Army’s Warrior Transition Command—which has the largest number of recovering servicemembers—to propose developing guidelines for better integrating the Army’s wounded warrior programs with the FRCP, including identifying when the Army’s wounded warrior programs should refer a recovering servicemember to the FRCP, and replacing multiple care coordination plans with a single, comprehensive planning document. However, a high-level DOD official criticized this initiative as a tactic to minimize central input from the Office of the Secretary of Defense and pointed out that this effort would result in an agreement with only a single military branch.
In contrast, VA’s Chief of Staff told us that VA took this approach in the hope that if an agreement could be reached with the Army, the other military branches would follow suit. More recently, in May 2012, VA and DOD developed a new task force, the VA/DOD Warrior Care and Coordination Task Force, which represents an effort to comprehensively address problems caused by the lack of integration between DOD’s and VA’s care coordination and case management programs. The task force has developed recommendations that are intended to achieve a coordinated, interdepartmental approach to care coordination and case management programs, according to a task force official. On August 10, 2012, the task force presented the following recommendations to the Joint Executive Council for its consideration: (1) establish and charter an interagency governance structure responsible for coordinating VA and DOD policy; (2) establish and charter an interagency care coordination community of practice; (3) align the FRCP to function in a consultant and resource-facilitator role; (4) clarify the lead coordinator role and responsibilities for executing a recovering servicemember’s comprehensive plan; (5) identify the business requirements for technical tools to support the interagency comprehensive plan; and (6) accelerate existing information-sharing efforts for care coordination. The Joint Executive Council provisionally approved the six recommendations, but withheld final approval pending receipt of additional information from the task force, such as an estimate of resources required to implement the recommendations, as well as details of the proposed interagency governance structure. The Joint Executive Council instructed the task force to present the additional information to it in another decision briefing, which was scheduled for September 20, 2012.
Absent final approval from the Joint Executive Council, the task force’s next step was to hold a status briefing for the DOD and VA Secretaries on September 10, 2012, to discuss the task force’s recommended course of action for care coordination. Given the inability of past task forces to effect changes that better align DOD and VA care coordination and case management policies, it is too soon to determine the full effect of the departments’ efforts to manage care coordination services on outcomes for recovering servicemembers and veterans. Although VA and DOD appear to be moving in a positive direction on care coordination, notable barriers remain: There is concern as to whether the Joint Executive Council can effectively lead the effort to realign VA’s and DOD’s care coordination policy. Some high-ranking and knowledgeable DOD officials we talked with expressed concerns that the recently merged Joint Executive Council may not have the capability to effectively monitor the actions taken by DOD and VA to implement the task force’s recommendations. Some officials we talked with viewed the council as taking too long to resolve issues due to both the infrequency of its meetings and the difficulties DOD and VA members have in agreeing with one another. Task force documents indicate that, following approval of its recommended course of action, a detailed plan will be completed by July 2013. VA’s task force cochair stated that some aspects of the planned changes could take years to implement, particularly as they transition existing enrollees of programs affected by significant revisions. For example, VA intends to conduct a case-by-case review of every FRCP enrollee before modifying the FRCP to function in a consultant and resource-facilitator role, according to VA’s task force cochair.
One of the most fundamental challenges to resolving care coordination problems is the issue of obtaining the cooperation of the military services to implement a new approach to care coordination and case management, especially in light of past difficulties of working in concert with DOD and VA programs and policies. DOD and VA leadership officials stated that even if new solutions and policies were to be approved by the departments, changes would be made only if the individual military services implement the new policies as directed by the Secretary of Defense. Several DOD and VA officials identified concurrence and support of the military services as the most difficult element to achieve. Ultimately, the military services’ compliance with the departments’ agreed-upon strategy for care coordination and case management programs will determine how seamlessly recovering servicemembers and veterans will be able to navigate the recovery care continuum. The deficiencies exposed at Walter Reed in 2007 served as a catalyst compelling DOD and VA to address a host of problems that complicate the course of a wounded, ill, and injured servicemember’s recovery, rehabilitation, and return to active duty or civilian life. We believe strongly and have reported already that fixing the long-standing and complex problems highlighted in the wake of the Walter Reed media accounts as expeditiously as possible is critical to ensuring high-quality care for returning servicemembers and veterans. We continue to believe that the departments’ success ultimately depends on sustained attention, systematic oversight, and sufficient resources from both DOD and VA. However, this has not yet occurred, and as a result, after 5 years, recovering servicemembers and veterans are still facing problems as they navigate the recovery care continuum, including access to some of the programs designed to assist them. 
The transition period from DOD’s to VA’s health care system is particularly critical, as servicemembers continue to experience delays in the disability evaluation system and the departments continue to use methods other than a common information technology system to share servicemembers’ health information. Until these problems are resolved, recovering servicemembers and veterans may still face difficulties getting the services they need to maximize their potential when they return to active duty or transition to civilian life. Initially, departmental leadership exhibited focus and commitment—through the Senior Oversight Committee—to addressing problems related to case management and care coordination, disability evaluation systems, and data sharing between DOD and VA. However, over time, waning leadership attention, a failure to oversee critical wounded warrior functions and programs, limited resources, and the inability to achieve a collaborative environment—particularly with care coordination—have impeded the departments’ ability to fully resolve these problems. A key element in resolving current care coordination issues in particular is eliciting the cooperation of the military services, which are responsible for implementing various wounded warrior programs and ensuring that these programs operate as intended—which has sometimes not been the case, as with the RCP. Also, absent clear direction and central oversight and accountability among the military services’ wounded warrior programs, true cooperation and program effectiveness may be in jeopardy. We believe that at the heart of the problem is the need for strong and unwavering leadership to bring about changes that best serve our nation’s recovering servicemembers and veterans. This leadership should be united across both DOD and VA and centered on the individual servicemember’s or veteran’s recovery.
Many task forces—including the VA/DOD Warrior Care and Coordination Task Force and the Recovering Warrior Task Force—have already attempted to bring a spirit of cooperativeness and clear direction and purpose among the different programs providing services to this population. However, to date, these efforts have not fully resolved key issues, and our nation’s recovering servicemembers and veterans continue to face obstacles and challenges, especially as they transition from DOD’s to VA’s health care system. Certainly, the fluidity and focus of the departments’ leadership over the last several years, especially related to care coordination, have added to the challenges of developing consistent policy, effective oversight, and mechanisms to monitor progress and hold programs accountable. The departments have recently taken steps to improve problems related to care coordination, disability evaluations, and the electronic sharing of health records, through concerted efforts to coordinate on policy, increase staffing resources, and provide control over the budget, respectively. However, it is too early to determine the effectiveness of these efforts, and sustained leadership attention will be critical to their success. The need to fully resolve remaining problems is urgent as there will be an increasing demand for services from both DOD and VA as the current conflicts come to an end. If not resolved now, these same problems will persist into the future for recovering servicemembers and veterans. 
To ensure that servicemembers have equitable access to the military services’ wounded warrior programs, including the RCP, and to establish central accountability for these programs, we recommend that the Secretary of Defense establish or designate an office to centrally oversee and monitor the activities of the military services’ wounded warrior programs to include the following:

- Develop consistent eligibility criteria to ensure that similarly situated recovering servicemembers from different military services have uniform access to these programs.
- Direct the military services’ wounded warrior programs to fully comply with the policies governing care coordination and case management programs and any future changes to these policies.
- Develop a common mechanism to systematically monitor the performance of the wounded warrior programs—to include the establishment of common terms and definitions—and report this information on a biannual basis to the Armed Services Committees of the House of Representatives and the Senate.

To ensure that persistent challenges with care coordination, disability evaluation, and the electronic sharing of health records are fully resolved, we recommend that the Secretaries of Defense and Veterans Affairs ensure that these issues receive sustained leadership attention and collaboration at the highest levels with a singular focus on what is best for the individual servicemember or veteran to ensure continuity of care and a seamless transition from DOD to VA.
This should include holding the Joint Executive Council accountable for ensuring that key issues affecting recovering servicemembers and veterans get sufficient consideration, including recommendations made by the Warrior Care and Coordination Task Force and the Recovering Warrior Task Force; developing mechanisms for making joint policy decisions; involving the appropriate decision-makers for timely implementation of policy; and establishing mechanisms to systematically oversee joint initiatives and ensure that outcomes and goals are identified and achieved. DOD and VA reviewed a draft of this report and provided comments, which are reprinted in appendixes III and IV. DOD and VA also provided technical comments, which we incorporated as appropriate. DOD concurred with specific components of our first recommendation regarding the establishment of central accountability for the military services’ wounded warrior programs. In particular, DOD agreed that a single office should have oversight responsibility for the military services’ wounded warrior programs and that these programs should fully comply with the policies governing care coordination and case management programs and any future changes to these policies. However, DOD only partially concurred with other components of our first recommendation—that DOD develop consistent eligibility criteria for enrollment in wounded warrior programs and that DOD establish a common mechanism to systematically monitor the performance of these programs. In its comments, DOD explained that the three military service Secretaries should have the ability to control entrance criteria into their wounded warrior programs and added that it does not believe that differences in eligibility criteria for these programs result in noticeable differences in access to these programs by recovering servicemembers or their families.
DOD did not offer a rationale, however, as to why the military service Secretaries should unilaterally determine eligibility criteria for their wounded warrior programs, other than to suggest that flexibility is important and necessary. Moreover, as we have reported, DOD does not systematically assess or monitor these programs across the department, and as a result, we believe that DOD has no basis to assert that there are no noticeable differences in access to these programs. Overall, we believe that similarly situated wounded, ill, and injured servicemembers should be given the same access to wounded warrior programs and the assistance these programs provide, regardless of their branch of military service. With respect to developing a common mechanism to systematically monitor the performance of the wounded warrior programs, DOD responded that the Interagency Care and Coordination Committee will conduct an inventory of all wounded warrior programs to identify duplication and areas for gaining efficiencies. In commenting on our recommendation to also report its performance information on the wounded warrior programs to the Armed Services Committees on a biannual basis, DOD stated that the department reports progress through the Joint Executive Council’s annual strategic planning report and that any additional reporting would be redundant and of limited value. We disagree. The Joint Executive Council’s strategic planning and annual reports focus on joint efforts between the departments and do not report on the performance of the military services’ wounded warrior programs. Therefore, we do not believe that the performance information on the wounded warrior programs would be redundant or of limited value, given that the department itself is currently unable to systematically determine how well these programs are functioning.
As we reported, one of the key problems hindering a department-wide assessment of these programs is the lack of common terms and definitions used by the military services. Although DOD acknowledges that this is an issue, it asserts that it has instituted some common definitions through the Senior Oversight Committee and through its instruction for the RCP and that it will work towards a common understanding and use of these approved definitions. Although we are aware of efforts to define some terms, on the basis of our work, it does not appear that the military services are using them consistently. Therefore, substantial progress towards a common understanding and use will be critical to the department’s ability to oversee these programs. DOD did not respond directly to our recommendation for developing a common mechanism for performance measurement, which we found is not systematically conducted across the wounded warrior programs. During our collection of performance data from the wounded warrior programs, we found that the programs vary in their ability to report performance outcome measures on the basis of what each program chooses to track. In addition, we found that some of the programs had difficulty reporting basic data, such as enrollment numbers, and only compiled these data following our request—sometimes taking about 5 months to do so. Lastly, our recommendation is consistent with the call of the Interagency Care and Coordination Committee that the military programs develop more useful quantitative and qualitative metrics that would effectively demonstrate their performance. Until DOD takes the necessary steps to assess these programs department-wide, it will never know with certitude whether these programs are meeting the needs of its recovering servicemember population. 
DOD and VA both concurred with our second recommendation that the departments ensure that care coordination, disability evaluation, and electronic health record sharing receive sustained leadership attention and collaboration at the highest levels, with a singular focus on what is best for the individual servicemember or veteran to ensure continuity of care and a seamless transition from DOD to VA. In addition to its comments on our recommendation, VA asserted that the care coordination challenges facing both departments are broader and more complex than issues concerning just the FRCP and RCP and that our overall analysis and conclusions are oversimplified. VA stated that through its recently formed task force, both departments identified over 40 programs that provide some level of coordination or management of care and services across the continuum of care and acknowledged that there is no common operational picture that facilitates collaborative planning or situational awareness. We agree that the care coordination challenges are broader and more complex than the FRCP and RCP. Specifically, in October 2011, we recommended that the departments strengthen functional integration across all care coordination and case management programs to reduce redundancy and overlap. Similarly, our current recommendation is broad and does not focus exclusively on these two programs, as our review also included other programs, such as the military services’ wounded warrior programs, VA’s Liaison for Healthcare Program, and VA’s OEF/OIF/OND Care Management Program. The scope of our review was directed by Congress, which required us to report on the progress of DOD and VA in implementing the programs that they established for the care, management, and transition of wounded, ill, and injured servicemembers.
Our specific discussion of the FRCP and RCP served to illustrate a continued lack of collaboration, until recently, between the departments to better align these programs and better serve recovering servicemembers and veterans. Furthermore, in detailed discussions with us, top-level VA and DOD officials focused on the FRCP and RCP issue as the main sticking point in achieving coordination and cooperation between the two departments with respect to care coordination and case management. We are encouraged that the departments are now taking steps to identify all programs that need better alignment and integration. However, as we have stated, the key to resolving this and other problems is the need for strong and unwavering leadership that is united across both departments and focused on the individual servicemember’s or veteran’s recovery. VA also suggested further clarifications to our report. VA suggested that we clarify that while the VA Liaison for Healthcare Program facilitates the transfer of recovering servicemembers from DOD’s to VA’s health care system, it is a DOD or VA treatment team that determines if the servicemember is medically ready to begin the transition process. VA also suggested that we add that the OEF/OIF/OND Care Management Program screens all returning combat veterans for case management services. We incorporated VA’s suggested changes. VA disagrees with a DOD-attributed statement that the Joint Executive Council historically has not driven policy decision making and that, at times, decisions were taken directly to the DOD and VA Secretaries for resolution. The statement that we attribute to the DOD official relates to the period prior to the integration of the Senior Oversight Committee with the Joint Executive Council. As mentioned in the report, it is too early to ascertain whether the newly merged Joint Executive Council will be able to make decisions and drive policy changes in DOD and VA.
VA provided clarification about how the Joint Executive Council is currently providing oversight and accountability for wounded warrior issues that were once addressed by the Senior Oversight Committee. We recognize the effort that the Joint Executive Council is now making to track wounded warrior issues, including the integrated disability evaluation system and care coordination. However, we have not had the opportunity to review this tracking mechanism now in place to comment on its effectiveness. VA asserts that the size of the overlap between the FRCP and RCP populations is fairly small. Although the number of seriously injured servicemembers may be comparatively small, this situation has been and continues to be a major concern in that these individuals and their families represent a highly vulnerable population. Further, during our review, one high-level DOD official we spoke with characterized the FRCP/RCP overlap as the most difficult policy issue to resolve. While we understand that DOD and VA now intend to harmonize care coordination policies within a broader context of interdepartmental care coordination and case management practice, many of the proposed revisions—including the role to be played by the FRCP—are neither fully developed nor implemented by the separate DOD and VA programs at this time. In our report, we explain that VA declined DOD’s requests to discuss care coordination and case management policy issues—for the better part of 1 year—on the basis that VA was conducting an internal review of its care coordination and case management activities. In its comments, VA stated that the use of the word “decline” is misleading, and suggested that we change our text to state that VA asked DOD to defer collaboration until the internal review was conducted.
Despite VA’s characterization that our statement is misleading, we maintain that this finding was based on remarks made by high-level DOD officials that were subsequently corroborated by senior VA officials. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Secretary of Veterans Affairs, and other interested parties. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Both the Department of Defense (DOD) and the Department of Veterans Affairs (VA) operate care coordination and case management programs designed to assist servicemembers and veterans as they navigate the recovery care continuum, from acute medical treatment and stabilization, through rehabilitation, to reintegration—either back to active duty or to the civilian community as a veteran. This appendix describes selected DOD and VA programs and includes data on enrollment and population characteristics as well as the type of information each program tracks on referrals. Within DOD, each military service has established its own wounded warrior program or a complement of programs to assist wounded, ill, and injured servicemembers during their recovery and rehabilitation, and to help with the transition back to active duty or to civilian life. Wounded warrior programs range in size from the largest, the Army’s Warrior Transition Units and Community-Based Warrior Transition Units, with 18,762 enrollees served in fiscal year 2011, to the smallest, the Navy Safe Harbor Program, with 784 enrollees served in fiscal year 2011. (See table 4 for a list of the DOD wounded warrior programs and enrollment for fiscal year 2011.) 
Programs differ in their organization and function. For example, two of the wounded warrior programs—the Army’s Warrior Transition Units and the Marine Corps Wounded Warrior Regiment—are organized under separate military commands, which means that wounded, ill, and injured servicemembers enrolled in these programs may be removed from their parent units or commands and assigned or attached to a separate unit or regiment that provides command and control as well as administrative support. These servicemembers may be housed in separate barracks while receiving medical care and waiting to transition back to active duty or civilian life. The other wounded warrior programs do not assign or attach servicemembers to a separate command structure, but provide services while recovering servicemembers remain with their parent units. The services provided by the wounded warrior programs also vary. A servicemember may receive either case management or care coordination services or both, depending on how the military service’s wounded warrior program is structured. For example, the Navy Safe Harbor Program only provides care coordination services and does not have a case management component, whereas the Marine Corps Wounded Warrior Regiment provides all servicemembers with both case management and care coordination services. A further distinction is whether or not a program serves veterans as well as servicemembers. For example, the Army Warrior Transition Units do not serve veterans, but eligible veterans are served through the Army Wounded Warrior Program. The remainder of the wounded warrior programs continue to provide support to any enrollee who needs services even after the enrollee has transitioned to veteran status.
The Army’s Warrior Care and Transition Program, which was established in May 2007, consists of two components that support the recovery process for wounded, ill, and injured servicemembers—the Warrior Transition Units and the Army Wounded Warrior Program. The Army operates a number of warrior transition units located at Army installations across the country. Recovering servicemembers who are attached or assigned to a warrior transition unit generally are housed in barracks and receive medical care, rehabilitative services, professional development, and clinical and nonclinical case management services in order to help them in their transition back to active duty or to the civilian community. Army Warrior Transition Units vary in size and functionality, including community-based warrior transition units, which primarily serve Reserve Component servicemembers. In fiscal year 2011, there were a total of 14,906 recovering servicemembers assigned or attached to 29 warrior transition units and 3,856 recovering servicemembers assigned or attached to 10 community-based warrior transition units. (See table 5.) According to Army policy, recovering servicemembers assigned or attached to the units are expected to require 6 months or more of rehabilitative care or require complex medical management. The Army Wounded Warrior Program was established in April 2004 to assist severely wounded, ill, and injured servicemembers, their families, and caregivers. Army Wounded Warrior Program enrollees are assigned an Advocate who provides nonclinical care coordination services, which include assisting enrollees with benefit information, career guidance, finances, and the integrated disability evaluation system (IDES) process.
Recovering servicemembers are eligible for Army Wounded Warrior Program services if they have, or are expected to receive, an Army disability rating of 30 percent or greater in one or more specific categories or a combined rating of 50 percent or greater for conditions that are the result of combat or are combat-related. The most severely wounded, ill, or injured servicemembers who are assigned to warrior transition units are also enrolled in the Army Wounded Warrior Program. The Army Wounded Warrior Program also provides services to veterans. In fiscal year 2011, nearly three-fourths of the population (6,953) were veterans. (See table 6.) The Army Wounded Warrior Program was originally named the Disabled Soldier Support System. Army Wounded Warrior Program officials said that the program does not specifically track whether or when an enrollee transitions to veteran status because it has no impact on enrollees’ eligibility for the program and whether they leave the program. Rather, these data have been derived by the program by counting the number of enrolled servicemembers who received a certificate of release or discharge from active duty within each fiscal year. Enrollees considered to have “left for other reasons” include those who died while enrolled in the Army Wounded Warrior Program. The Navy Safe Harbor Program office was established in 2005. Over time, this office expanded its reach and mission, and in 2008 the program became responsible for nonclinical care coordination and oversight of all severely (and high-risk nonseverely) wounded, ill, and injured Sailors and Coast Guardsmen. Recovering servicemembers enrolled in the program are assigned to nonmedical care managers who are geographically dispersed at major military treatment facilities and Veterans Affairs polytrauma medical centers. The program’s nonmedical care managers assist enrollees with services such as pay and personnel, legal, housing, as well as education and training benefits.
In addition, enrollees obtain support from centrally located experts in transition and benefits assistance, such as a liaison to the Department of Labor and a Navy Staff Judge Advocate. Recovering servicemembers enrolled in the program are enrolled for life and, if desired, receive support from Navy Safe Harbor personnel after they transition to veteran status. (See table 7.) The Air Force Warrior and Survivor Care Program supports wounded, ill, and injured servicemembers through its Air Force Wounded Warrior Program and the Air Force Recovery Care Program. The Air Force Wounded Warrior Program was established in June 2005 to provide nonclinical case management to Airmen, Air National Guard, and Reserve Component servicemembers who have combat-related illnesses or injuries. Each enrolled servicemember is assigned a nonmedical care manager, who serves as an advocate for enrollees to obtain services from agencies and organizations that support the needs of enrolled servicemembers, their families, and caregivers. The Air Force Wounded Warrior Program continues to provide services to enrollees once they transition to veteran status. (See table 8.) The Air Force Recovery Care Program was established in November 2008 to provide nonclinical care coordination services for seriously ill and injured Airmen, Air National Guard, and Reserve Component servicemembers. Each enrolled servicemember is assigned a care coordinator who oversees the coordination of services and assists enrollees with nonclinical needs, such as employment and benefits. These care coordinators also work with enrolled servicemembers to develop their recovery plans and career goals. Enrollees who have combat-related illnesses or injuries are concurrently enrolled in the Air Force Wounded Warrior Program. For example, in fiscal year 2011, almost 300 Air Force Recovery Care Program enrollees were also either tracked or actively assisted by the Air Force Wounded Warrior Program. (See table 9.)
The Marine Corps established the Wounded Warrior Regiment in May 2007 to provide and facilitate assistance to wounded, ill, and injured Marines and their family members throughout the recovery process. The Wounded Warrior Regiment is a single command that oversees nonmedical care for the total Marine force, including Active Duty, Reserve, retired, and veteran Marines. The regiment enrolls Marines regardless of whether they have combat- or non-combat-related conditions. The regiment commands the operation of two wounded warrior battalions and 14 detachments located at 12 principal military treatment facilities and four Veterans Affairs polytrauma medical centers across the United States and overseas. A Marine enrolled in the regiment can either stay with his or her parent unit and be supported by the regiment, or be assigned or attached to one of the regiment’s battalions or detachments, depending on his or her specific needs. Generally, Marines who require more than 90 days of medical treatment or rehabilitation are assigned or attached to a battalion or detachment. The District Injured Support Cells Program is the component of the Wounded Warrior Regiment that provides services to veterans. District Injured Support Coordinators are located at 30 sites across the United States to provide support, including nonmedical care management, to its enrollees. In fiscal year 2011, the District Injured Support Coordinators provided support to 1,488 veterans. (See table 10.) The United States Special Operations Command established the Care Coalition in August 2005 to track, support, and advocate for Special Operations Forces’ wounded, ill, and injured servicemembers regardless of their duty status or whether their conditions are combat-related. (See table 11.) All enrollees are assigned an Advocate and are entitled to advocate services for life.
Advocates assist enrollees with health care and financial benefits and transition processes, and link enrollees with needed government and nongovernment resources. Because the United States Special Operations Command’s Care Coalition serves servicemembers from across the military services, it serves as a liaison with, and complements, the military services’ wounded warrior programs. United States Special Operations Command’s Care Coalition enrollees are often concurrently enrolled in their own military service’s wounded warrior program. However, according to a Care Coalition official, the Care Coalition serves as the lead program for case management and care coordination for dually enrolled servicemembers. VA operates a number of case management and care coordination programs that provide assistance to recovering servicemembers and veterans, including the Operation Enduring Freedom/Operation Iraqi Freedom/Operation New Dawn (OEF/OIF/OND) Care Management Program and the Federal Recovery Coordination Program (FRCP). These two programs assist wounded servicemembers and veterans in navigating the recovery care continuum. The OEF/OIF/OND Care Management Program was established in March 2007 to provide case management to wounded, ill, and injured servicemembers and veterans who screen positive for the need for case management or request case management services. (See table 12.) Each of VA’s 152 Medical Centers (VAMC) has an OEF/OIF/OND Care Management team in place to manage patient care activities and ensure that servicemembers and veterans are receiving patient-centered, integrated care and benefits. Members of the OEF/OIF/OND Care Management team include a Program Manager, Clinical Case Managers, and a Transition Patient Advocate. The FRCP was established in January 2008.
Developed as a joint program by DOD and VA, but administered by VA, the program was designed to provide care coordination services to servicemembers and veterans who were “severely” wounded, ill, and injured after September 11, 2001. (See table 13.) The program uses federal recovery coordinators to monitor and coordinate clinical services, including facilitating and coordinating medical appointments, and nonclinical services, such as providing assistance with obtaining financial benefits or special accommodations, needed by program enrollees and their families. Federal recovery coordinators serve as the single point of contact among all of the case managers of DOD, VA, and other governmental and private case management programs that provide services directly to servicemembers and veterans. DOD and VA case management and care coordination programs primarily identify servicemembers and veterans who may be eligible for enrollment through referrals. Tracking referral information, including the number of those who were referred and enrolled or not enrolled in the program, may indicate whether the programs are identifying those who could benefit from their services. However, fewer than half of the DOD and VA case management and care coordination programs that we reviewed track this type of referral information. (See table 14.) The Senior Oversight Committee intended for the Federal Recovery Coordination Program (FRCP) and the Recovery Coordination Program (RCP) to be complementary programs, specifically identifying which population of wounded, ill, and injured servicemembers would be assigned to the two programs. 
On the basis of work done for the committee, the Department of Defense (DOD) sent a report to congressional committees in 2008 outlining a medical category assignment process based on the severity of each servicemember’s medical condition, along with input from the servicemember and his or her unit commander, to determine whether servicemembers would be directed either to the FRCP or to the RCP for care coordination services. In concept, the medical category assignment process would have resulted in wounded, injured, or ill servicemembers being assigned to one of three categories. Servicemembers designated as Category 1 were those who were found to have a mild injury or illness, who were expected to return to duty in less than 180 days of medical treatment, and who primarily received local outpatient and short-term inpatient treatment and rehabilitation. Servicemembers designated as Category 2 were those with a serious injury or illness, who were unlikely to return to duty in less than 180 days, and who might be medically separated from the military. Servicemembers designated as Category 3 were those with a severe injury or illness, who were highly unlikely to return to duty, and who were most likely to be medically separated from the military. The category designation was intended to be used to determine whether the recovering servicemember was subsequently referred to a care coordination program: Category 1 servicemembers would not be referred to a care coordination program, unless their medical or psychological conditions worsened; Category 2 servicemembers would be referred to the RCP; and Category 3 servicemembers would be referred to the FRCP. (See fig. 3.)
In addition to the contact name above, Bonnie Anderson, Assistant Director; Mark Bird, Assistant Director; Michele Grgich, Assistant Director; Jennie Apter; Frederick Caison; Heather Collins; Dan Concepcion; Melissa Jaynes; Deitra Lee; Mariel Lifshitz; Lisa Motley; Elise Pressma; and Greg Whitney made key contributions to this report. Military Disability System: Improved Monitoring Needed to Better Track and Manage Performance. GAO-12-676. Washington, D.C.: August 28, 2012. Military Disability System: Preliminary Observations on Efforts to Improve Performance. GAO-12-718T. Washington, D.C.: May 23, 2012. More Efficient and Effective Government: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-449T. Washington, D.C.: February 28, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. DOD and VA Health Care: Action Needed to Strengthen Integration across Care Coordination and Case Management Programs. GAO-12-129T. Washington, D.C.: October 6, 2011. VA and DOD Health Care: First Federal Health Care Center Established, but Implementation Concerns Need to Be Addressed. GAO-11-570. Washington, D.C.: July 19, 2011. Federal Recovery Coordination Program: Enrollment, Staffing, and Care Coordination Pose Significant Challenges. GAO-11-572T. Washington, D.C.: May 13, 2011. Information Technology: Department of Veterans Affairs Faces Ongoing Management Challenges. GAO-11-663T. Washington, D.C.: May 11, 2011. Military and Veterans Disability System: Worldwide Deployment of Integrated System Warrants Careful Monitoring. GAO-11-633T. Washington, D.C.: May 4, 2011. DOD and VA Health Care: Federal Recovery Coordination Program Continues to Expand but Faces Significant Challenges. GAO-11-250. Washington, D.C.: March 23, 2011. 
Electronic Health Records: DOD and VA Should Remove Barriers and Improve Efforts to Meet Their Common System Needs. GAO-11-265. Washington, D.C.: February 2, 2011. Military and Veterans Disability System: Pilot Has Achieved Some Goals, but Further Planning and Monitoring Needed. GAO-11-69. Washington, D.C.: December 6, 2010. Military and Veterans Disability System: Preliminary Observations on Evaluation and Planned Expansion of DOD/VA Pilot. GAO-11-191T. Washington, D.C.: November 18, 2010. Electronic Health Records: DOD and VA Interoperability Efforts Are Ongoing; Program Office Needs to Implement Recommended Improvements. GAO-10-332. Washington, D.C.: January 28, 2010. Electronic Health Records: DOD and VA Efforts to Achieve Full Interoperability Are Ongoing; Program Office Management Needs Improvement. GAO-09-775. Washington, D.C.: July 28, 2009. Recovering Servicemembers: DOD and VA Have Jointly Developed the Majority of Required Policies but Challenges Remain. GAO-09-728. Washington, D.C.: July 8, 2009. Recovering Servicemembers: DOD and VA Have Made Progress to Jointly Develop Required Policies but Additional Challenges Remain. GAO-09-540T. Washington, D.C.: April 29, 2009. Army Health Care: Progress Made in Staffing and Monitoring Units that Provide Outpatient Case Management, but Additional Steps Needed. GAO-09-357. Washington, D.C.: April 20, 2009. Electronic Health Records: DOD’s and VA’s Sharing of Information Could Benefit from Improved Management. GAO-09-268. Washington, D.C.: January 28, 2009. Electronic Health Records: DOD and VA Have Increased Their Sharing of Health Information, but More Work Remains. GAO-08-954. Washington, D.C.: July 28, 2008. DOD and VA: Preliminary Observations on Efforts to Improve Care Management and Disability Evaluations for Servicemembers. GAO-08-514T. Washington, D.C.: February 27, 2008. DOD and VA: Preliminary Observations on Efforts to Improve Health Care and Disability Evaluations for Returning Servicemembers. 
GAO-07-1256T. Washington, D.C.: September 26, 2007. DOD and VA Health Care: Challenges Encountered by Injured Servicemembers during Their Recovery Process. GAO-07-589T. Washington, D.C.: March 5, 2007.
The National Defense Authorization Act for Fiscal Year 2008 required DOD and VA to jointly develop and implement policy on the care, management, and transition of recovering servicemembers. It also required GAO to report on DOD's and VA's progress in addressing these requirements. This report specifically examines (1) the extent to which DOD and VA have resolved persistent problems facing recovering servicemembers and veterans as they navigate the recovery care continuum, and (2) the reasons DOD and VA leadership have not been able to fully resolve any remaining problems. To address these objectives, GAO visited 11 DOD and VA medical facilities selected for population size and range of available resources and met with servicemembers and veterans to identify problems they continue to face. GAO also reviewed documents related to specific DOD and VA programs that assist recovering servicemembers and veterans and interviewed the leadership and staff of these programs to determine why problems have not been fully resolved. Deficiencies exposed at Walter Reed Army Medical Center in 2007 served as a catalyst compelling the Departments of Defense (DOD) and Veterans Affairs (VA) to address a host of problems for wounded, ill, and injured servicemembers and veterans as they navigate through the recovery care continuum. This continuum extends from acute medical treatment and stabilization, through rehabilitation to reintegration, either back to active duty or to the civilian community as a veteran. In spite of 5 years of departmental efforts, recovering servicemembers and veterans are still facing problems with this process and may not be getting the services they need. Key departmental efforts included the creation or modification of various care coordination and case management programs, including the military services' wounded warrior programs. 
However, these programs are not always accessible to those who need them due to the inconsistent methods, such as referrals, used to identify potentially eligible servicemembers, as well as inconsistent eligibility criteria across the military services' wounded warrior programs. The departments also jointly established an integrated disability evaluation system to expedite the delivery of benefits to servicemembers. However, processing times for disability determinations under the new system have increased since 2007, resulting in lengthy wait times that limit servicemembers' ability to plan for their future. Finally, despite years of incremental efforts, DOD and VA have yet to develop sufficient capabilities for electronically sharing complete health records, which potentially delays servicemembers' receipt of coordinated care and benefits as they transition from DOD's to VA's health care system. Collectively, a lack of leadership, oversight, resources, and collaboration has contributed to the departments' inability to fully resolve problems facing recovering servicemembers and veterans. Initially, departmental leadership exhibited focus and commitment--through the Senior Oversight Committee--to addressing problems related to case management and care coordination, disability evaluation systems, and data sharing between DOD and VA. However, the committee's oversight waned over time, and in January 2012, it was merged with the VA/DOD Joint Executive Council. Whether this council--which has primarily focused on long-term strategic planning--can effectively address the shorter-term policy focused issues once managed by the Senior Oversight Committee remains to be seen. Furthermore, DOD does not provide central oversight of the military services' wounded warrior programs, preventing it from determining how well these programs are working across the department. 
However, despite these shortcomings, the departments continue to take steps to resolve identified problems, such as increasing the number of staff involved with the electronic sharing of health records and the integrated disability evaluation process. Additionally, while the departments' previous attempts to collaborate on how to resolve case management and care coordination problems have largely been unsuccessful, a joint task force established in May 2012 is focused on resolving long-standing areas of disagreement between VA, DOD, and the military services. However, without more robust oversight and military service compliance, consistent implementation of policies that result in more effective case management and care coordination programs may be unattainable. GAO recommends that DOD provide central oversight of the military services' wounded warrior programs and that DOD and VA sustain high-level leadership attention and collaboration to fully resolve identified problems. DOD partially concurred with the recommendation for central oversight of the wounded warrior programs, citing issues with common eligibility criteria and systematic monitoring. DOD and VA both concurred with the recommendation for sustained leadership attention.
DON’s primary mission is to organize, train, maintain, and equip combat-ready naval forces capable of winning the global war on terrorism and any other armed conflict, deterring aggression by would-be foes, preserving freedom of the seas, and promoting peace and security. To support this mission, DON performs a variety of interrelated and interdependent business functions (e.g., acquisition and financial management), relying heavily on IT systems. In fiscal year 2008, DON’s budget for business systems and associated infrastructure was about $2.7 billion, of which $2.2 billion was allocated to operations and maintenance of existing systems and the remaining $500 million to systems in development and modernization. Of the approximately 3,000 business systems that DOD reports in its current inventory, DON accounts for 904, or about 30 percent, of the total. Navy ERP is one such system investment. In July 2003, the Assistant Secretary of the Navy for Research, Development, and Acquisition established Navy ERP to “converge” four separate pilot programs that were under way at four separate Navy commands. This program is to leverage commercial off-the-shelf software known as an enterprise resource planning product. Such products consist of multiple, integrated functional modules that perform a variety of business-related tasks, such as acquisition and financial management. Table 1 provides a brief description and status of each of the pilots. According to DOD, Navy ERP is to address the Navy’s long-standing problems related to financial transparency and asset visibility. Specifically, the program is intended to standardize the Navy’s acquisition, financial, program management, maintenance, plant and wholesale supply, and workforce management business processes across its dispersed organizational components. When the program is fully implemented, it is to support over 86,000 users.
Navy ERP is being developed in a series of increments using the Systems Applications and Products (SAP) commercial software package, augmented as needed by customized software. SAP consists of multiple, integrated functional modules that perform a variety of business-related tasks, such as finance and acquisition. The first increment, called Template 1, is currently the only funded portion of the program and consists of three releases: 1.0 Financial and Acquisition, 1.1 Wholesale and Retail Supply, and 1.2 Intermediate-Level Maintenance. Release 1.0 is the largest of the three releases in terms of the functional requirements being addressed. Specifically, it is to provide about 56 percent of Template 1 requirements. See table 2 for a description of these releases. DON estimates the life cycle cost for the program’s first increment to be about $2.4 billion, including about $1 billion for acquisition and $1.4 billion for operations and maintenance. The life cycle cost of the entire program has not yet been determined because future increments have not been defined. The program office reported that approximately $400 million was spent from fiscal year 2004 through fiscal year 2007 on the first increment. For fiscal year 2008, about $200 million is planned to be spent. To manage the acquisition and deployment of Navy ERP, DON established a program management office within the Program Executive Office for Executive Information Systems. The program office manages the program’s scope and funding and is responsible for ensuring that the program meets its objectives. To accomplish this, the program office is responsible for key program management areas, such as architectural alignment, economic justification, earned value management, requirements management, and risk management. In addition, various DOD and DON organizations share program oversight and review activities. A listing of key entities and their roles and responsibilities is in table 3.
The first increment of Navy ERP is currently in the production and deployment phase of the defense acquisition system. The defense acquisition system consists of five key program life cycle phases and three related milestone decision points. These five phases and related milestones, along with a summary of key program activities completed during or planned for each phase, are as follows: 1. Concept Refinement: The purpose of this phase is to refine the initial system solution (concept) and create a strategy for acquiring the solution. This phase began in July 2003, at which time DON began to converge the four pilot programs into Navy ERP and developed its first cost estimate in September 2003. This phase of the program was combined with the next phase, thus creating a combined Milestone A/B decision point. 2. Technology Development: The purpose of this phase is to determine the appropriate set of technologies to be integrated into the investment solution by iteratively assessing the viability of the various technologies while simultaneously refining user requirements. During the combined Concept Refinement and Technology Development phase, the program office prepared a concept of operations and operational requirements document; performed an analysis of alternatives, business case analysis, and economic analysis; and established its first Acquisition Program Baseline. It also selected SAP as the commercial off-the-shelf ERP software. The combined phase was completed in August 2004, when the MDA approved Milestone A/B to allow the program to move to the next phase. 3. System Development and Demonstration: The purpose of this phase is to develop a system and demonstrate through developer testing that the system can function in its target environment. This phase was completed in September 2007, when Release 1.0 passed development testing and its deployment to NAVAIR began.
This was 17 months later than the program’s original schedule set in August 2004 but on time according to the revised schedule set in December 2006. In September 2004, the program office awarded a $176 million system integration contract to BearingPoint for full system design, development, and delivery using SAP’s off-the-shelf product and related customized software. In January 2006, the program office (1) reduced the contractor’s scope of work from development and integration of the first increment to only development of the first release and (2) assumed responsibility and accountability for overall system integration. According to the program office, reasons for this change included the need to change the development plan to reflect improvements in the latest SAP product released and the lack of authority by the contractor to adjudicate and reconcile differences among the various Navy user organizations (i.e., Navy commands). In December 2006, the program office revised its Acquisition Program Baseline to reflect an increase of about $461 million in the life cycle cost estimate due, in part, to restructuring the program (e.g., changing the order of the releases, changing the role of system integrator from contractor to the program office) and resolving problems related to, among other things, converting data from legacy systems to run on Navy ERP and establishing interfaces between legacy systems and Navy ERP. In addition, the program office awarded a $151 million contract for Release 1.1 and 1.2 configuration and development to IBM in June 2007. In September 2007, prior to entering the next phase, the program revised its Acquisition Program Baseline again to reflect a $9 million decrease in the life cycle cost estimate and a 5-month increase in its program schedule. Soon after, the MDA approved Milestone C to move to the next phase. 4. 
Production and Deployment: The purpose of this phase is to achieve an operational capability that satisfies the mission needs, as verified through independent operational test and evaluation, and to implement the system at all applicable locations. This phase began in September 2007, focusing first on achieving initial operational capability (IOC) of Release 1.0 at NAVAIR by May 2008. This date is 22 months later than the baseline established for Milestone A/B in August 2004, and 4 months later than the new baseline established in September 2007. According to program documentation, these delays were due, in part, to challenges experienced at NAVAIR in converting data from legacy systems to run on the new system and implementing new business procedures associated with the system. In light of the delays at NAVAIR in achieving IOC, the deployment schedules for the other commands were also revised. Specifically, Release 1.0 is still to be deployed at NAVSUP in October 2008, but Release 1.0 deployment at SPAWAR is now scheduled 18 months later than planned (October 2009), and deployment at NAVSEA general fund and Navy Working Capital Fund is now scheduled to be 12 months later than planned (October 2010 and 2011, respectively). Because of the Release 1.0 delays, Release 1.1 is now planned for deployment at NAVSUP 7 months later than planned (February 2010). Release 1.2 is still scheduled to be released at Regional Maintenance Centers in October 2010. The program office is currently in the process of again re-baselining the program, and DON plans to address any cost overruns through reprogramming of fiscal year 2008 DON funds. It estimates that this phase will be completed with full operational capability (FOC) by August 2013 (26 months later than the baseline established in 2004, and 5 months later than the re-baseline established in September 2007). 5.
Operations and Support: The purpose of this phase is to operationally sustain the system in the most cost-effective manner over its life cycle. In this phase, the program plans to provide centralized support to its users across all system commands. Each deployment site is expected to perform complementary support functions, such as data maintenance. Overall, Increment 1 was originally planned to reach FOC in fiscal year 2011, and its estimated life cycle cost was about $1.87 billion. The estimate was later baselined in August 2004 at about $2.0 billion. In December 2006 and again in September 2007, the program was re-baselined. FOC is now planned for fiscal year 2013, and the estimated life cycle cost is about $2.4 billion (a 31 percent increase over the original estimate). Key activities for each phase are depicted in figure 1, changes in the deployment schedule are depicted in figure 2, and cost estimates are depicted in figure 3. IT acquisition management controls are tried and proven methods, processes, techniques, and activities that organizations define and use to minimize program risks and maximize the chances of a program’s success. Using these controls can result in better outcomes, including cost savings, improved service and product quality, and a better return on investment. For example, two software engineering analyses of nearly 200 systems acquisition projects indicate that teams using systems acquisition controls that reflected best practices produced cost savings of at least 11 percent over similar projects conducted by teams that did not employ the kind of rigor and discipline embedded in these practices. In addition, our research shows that these controls are a significant factor in successful acquisition outcomes, including increasing the likelihood that programs and projects will be executed within cost and schedule estimates.
We and others have identified and promoted the use of a number of IT acquisition management controls associated with acquiring IT systems. See table 4 for a description of several of these activities. We have previously reported that DOD has not effectively managed a number of business system investments. Among other things, our reviews of individual system investments have identified weaknesses in such things as architectural alignment and informed investment decision making, which are also the focus areas of the Fiscal Year 2005 Defense Authorization Act business system provisions. Our reviews have also identified weaknesses in other system acquisition and investment management areas—such as EVM, economic justification, requirements management, and risk management. In July 2007, we reported that the Army’s approach for investing about $5 billion over the next several years in its General Fund Enterprise Business System, Global Combat Support System-Army Field/Tactical, and Logistics Modernization Program did not include alignment with the Army enterprise architecture or use of a portfolio-based business system investment review process. Moreover, we reported that the Army did not have reliable analyses, such as economic analyses, to support its management of these programs. We concluded that, until the Army adopts a business system investment management approach that provides for reviewing groups of systems and making enterprise decisions on how these groups will collectively interoperate to provide a desired capability, it runs the risk of investing significant resources in business systems that do not provide the desired functionality and efficiency. Accordingly, we made recommendations aimed at improving the department’s efforts to achieve total asset visibility and enhancing its efforts to improve its control and accountability over business system investments. The department agreed with our recommendations. 
We also reported that DON had not, among other things, economically justified its ongoing and planned investment in the Naval Tactical Command Support System (NTCSS) and had not invested in NTCSS within the context of a well-defined DOD or DON enterprise architecture. In addition, we reported that DON had not effectively performed key measurement, reporting, budgeting, and oversight activities and had not adequately conducted requirements management and testing activities. We concluded that, without this information, DON could not determine whether NTCSS, as defined, and as being developed, is the right solution to meet its strategic business and technological needs. Accordingly, we recommended that the department develop the analytical basis to determine if continued investment in NTCSS represents prudent use of limited resources and to strengthen management of the program, conditional upon a decision to proceed with further investment in the program. The department largely agreed with our recommendations. In addition, we reported that the Army had not defined and developed its Transportation Coordinators’ Automated Information for Movements System II (TC-AIMS II)—a joint services system with the goal of helping to manage the movement of forces and equipment within the United States and abroad—in the context of a DOD enterprise architecture. In addition, we reported that the Army had not economically justified the program on the basis of reliable estimates of life cycle costs and benefits and had not effectively implemented risk management. As a result, we concluded that the Army did not know if its investment in TC-AIMS II, as planned, was warranted or represented a prudent use of limited DOD resources. Accordingly, we recommended that the department, among other things, develop the analytical basis needed to determine if continued investment in TC-AIMS II represents prudent use of limited defense resources.
In response, the department agreed with our recommendations and has since reduced the program’s scope by canceling future investments. Furthermore, in 2005, we reported that DON had invested approximately $1 billion in the four previously cited ERP pilots without marked improvement in its day-to-day operations. More specifically, we reported that the program office had not implemented an EVM system. We also identified significant challenges and risks as the project moved forward, such as developing and implementing system interfaces, converting data from legacy systems into the ERP system, meeting its estimated completion date of 2011 at an estimated cost of $800 million, and achieving alignment with DOD’s BEA. To address these areas, we made recommendations that DOD improve oversight of Navy ERP, including developing quantitative metrics to evaluate the program. DOD generally agreed with our recommendations. DOD IT-related acquisition policies and guidance, along with other relevant guidance, provide an acquisition management control framework within which to manage business system programs like Navy ERP. Effective implementation of this framework can minimize program risks and better ensure that system investments are defined in a way to optimally support mission operations and performance, as well as deliver promised system capabilities and benefits on time and within budget. To varying degrees of effectiveness, Navy ERP has been managed in accordance with aspects of this framework. However, implementation of key management controls has not been effective. 
Specifically, compliance with DOD’s federated BEA has not been sufficiently demonstrated; investment in the program has been economically justified on the basis of expected life cycle benefits that will likely exceed estimated life cycle costs, although some estimating limitations nevertheless exist; earned value management has not been effectively implemented; an important requirements management activity has been effectively performed; and a risk management process has been defined, but not effectively implemented for all risks. The reasons that program management and oversight officials cited for why these key practices have not been sufficiently executed range from limitations in the applicable DOD guidance and tools to the complexity and challenges of managing and implementing a program of this size. Each of these reasons is described in the applicable sections of this report. By not effectively implementing all the above key IT acquisition management functions, the program is at increased risk of (1) not being defined in a way that best meets corporate mission needs and enhances performance and (2) adding to the more than 2 years in program schedule delays and about $570 million in program cost increases experienced to date. DOD and other guidance recognize the importance of investing in business systems within the context of an enterprise architecture. Moreover, the Fiscal Year 2005 Defense Authorization Act requires that defense business systems be compliant with DOD’s federated BEA. Our research and experience in reviewing federal agencies show that not making investments within the context of a well-defined enterprise architecture often results in systems that are duplicative, are not well integrated, are unnecessarily costly to interface and maintain, and do not optimally support mission outcomes. To its credit, the program office has followed DOD’s BEA compliance guidance. However, this guidance does not adequately provide for addressing all relevant aspects of BEA compliance.
Moreover, DON’s enterprise architecture, which is a major component of DOD’s federated BEA, and key aspects of DOD’s corporate BEA have yet to be sufficiently defined to permit thorough compliance determinations. In addition, current policies and guidance do not require DON investments to comply with its enterprise architecture. This means that the department does not have a sufficient basis for knowing if Navy ERP has been defined to minimize overlap with and duplication of other programs’ functionality and maximize interoperability among related programs. Each of these architecture alignment limitations is discussed here: The program’s compliance assessments did not include all relevant architecture products. In particular, the program did not assess compliance with the BEA’s technical standards profile, which outlines, for example, the standards governing how systems physically communicate with other systems and how they secure data from unauthorized access. This is particularly important because systems like Navy ERP need to share information with other systems and, for these systems to accomplish this effectively and efficiently, they need to employ common standards. A case in point is the relationship between Navy ERP and the Global Combat Support System—Marine Corps (GCSS-MC) program. Specifically, Navy ERP has identified 25 technical standards that are not in the BEA technical standards profile, and GCSS-MC has identified 13 technical standards that are not in the profile. Among these non-BEA standards are program-unique information sharing protocols, which could limit information sharing between Navy ERP and GCSS-MC, and with other systems. In addition, the program office did not assess compliance with the BEA products that describe system-level characteristics.
This is important because doing so would create a body of information about programs that could be used to identify common system components and services that could potentially be shared by the programs, thus avoiding wasteful duplication. For example, our analysis of Navy ERP program documentation shows that it contains system functions related to receiving goods, taking physical inventories, and returning goods, which are also system functions cited by the GCSS-MC program. However, because compliance with the BEA system products was not assessed, the extent to which these functions are potentially duplicative was not considered. Furthermore, the program office did not assess compliance with BEA system products that describe data exchanges among systems. As we previously reported, establishing and using standard system interfaces is a critical enabler to sharing data. For example, Navy ERP program documentation indicates that it is to exchange inventory order and status data with other systems. System interfaces are important for understanding how information is to be exchanged between systems. However, since the program was not assessed for compliance with these products, it does not have the basis for understanding how its approach to exchanging information differs from that of other systems that it is to interface with. Compliance against each of these BEA products was not assessed because DOD’s compliance guidance does not provide for doing so and, according to BTA officials, because some BEA system products are not sufficiently defined. According to these officials, BTA plans to continue to define these products as the BEA evolves. The compliance assessment was not used to identify potential areas of duplication across programs, which DOD has stated is an explicit goal of its federated BEA and associated investment review and decision-making processes. 
More specifically, even though the compliance guidance provides for assessing programs’ compliance with the BEA product that defines DOD operational activities, and Navy ERP was assessed for compliance with this product, the results were not used to identify programs that support the same operational activities and related business processes. Given that the federated BEA is intended to identify and avoid not only duplications within DOD components, but also between components, it is important that such commonality be addressed. For example, BEA compliance assessments for Navy ERP and GCSS-MC, as well as two Air Force programs (Defense Enterprise Accounting and Management System—Air Force and the Air Force Expeditionary Combat Support System) show that each program supports at least six of the same BEA operational activities (e.g., conducting physical inventory, delivering property and services) and three of these four programs support at least 18 additional operational activities (e.g., performing budgeting, managing receipt and acceptance). However, since the potential overlap among these and other programs was not assessed, these programs may be investing in duplicative functionality. Reasons for this were that the compliance guidance does not provide for such analyses to be conducted and programs have not been granted access rights to use this functionality in the compliance tool. The program’s compliance assessment did not address compliance against DON’s enterprise architecture, which is one of the biggest members of the federated BEA. This is particularly important given that DOD’s approach to fully satisfying the architecture requirements of the Fiscal Year 2005 Defense Authorization Act is to develop and use a federated architecture in which component architectures are to provide the additional details needed to supplement the thin layer of corporate policies, rules, and standards included in the corporate BEA. 
As we recently reported, DON’s enterprise architecture is not mature because, among other things, it is missing a sufficient description of its current and future environments in terms of business and information/data. However, certain aspects of an architecture nevertheless exist and, according to DON CIO officials, these aspects will be leveraged in its efforts to develop a complete enterprise architecture. For example, the FORCEnet architecture is intended to document Navy’s technical infrastructure. Therefore, opportunities exist for DON to assess its programs in relation to these architecture products, and to understand where its programs are exposed to risks because products do not exist, are not mature, or are at odds with other Navy programs. According to DOD officials, compliance with the DON architecture was not assessed because DOD compliance policy is limited to compliance with the corporate BEA, and a number of aspects of the DON enterprise architecture have yet to be sufficiently developed. The program’s compliance assessment was not validated by DON or DOD investment oversight and decision-making authorities. More specifically, neither the DOD IRBs nor the DBSMC, nor the BTA in supporting both of these investment oversight and decision-making authorities, reviewed the program’s assessments. According to BTA officials, under DOD’s tiered approach to investment accountability, these entities are not responsible for validating programs’ compliance assessments. Rather, this is a component responsibility, and thus they rely on the military departments and defense agencies to validate the assessments. However, the DON Office of the CIO, which is responsible for precertifying investments as compliant before they are reviewed by the IRB, did not validate any of the program’s compliance assessments. According to Office of the CIO officials, they rely on Functional Area Managers to validate a program’s compliance assessments.
However, no DON policy or guidance exists that describes how the Functional Area Managers should conduct such validations. CIO officials stated that this is because these authorities do not have the resources that they need to validate the assessments, and because a number of aspects of the DON architecture are not yet sufficiently developed. Validation of program assessments is further complicated by the absence of information captured in the assessment tool about what program documentation or other source materials were used by the program office in making its compliance determinations. Specifically, the tool is only configured, and thus was only used, to capture the results of a program’s comparison of program architecture products to BEA products. Thus, it was not used to capture the system products used in making these determinations. The limitations in existing BEA compliance-related policy and guidance, the supporting compliance assessment tool, and the federated BEA put programs like Navy ERP at increased risk of being defined and implemented in a way that does not sufficiently ensure interoperability and avoid duplication and overlap. We recently completed a review examining multiple programs’ compliance with the federated BEA, including Navy ERP, for the Senate Armed Services Committee, Subcommittee on Readiness and Management Support. We addressed the architectural compliance guidance, tool, and validation limitations as part of this review. The investment in Navy ERP has been economically justified on the basis of expected life cycle benefits that far exceed estimated life cycle costs. According to the program’s benefit/cost analysis, Navy ERP will produce about $8.6 billion in estimated benefits for an estimated cost of about $2.4 billion over its 20-year life cycle.
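The return-on-investment arithmetic behind such a benefit/cost analysis can be sketched in a few lines of Python. This is a minimal illustration only: the yearly benefit and cost streams and the 2.8 percent discount rate below are hypothetical assumptions, not figures from the program's economic analysis.

```python
# Illustrative benefit/cost analysis: discount yearly benefit and cost
# streams to present value, then compute net present value and the
# benefit/cost ratio. All figures are hypothetical, not Navy ERP data.

def present_value(stream, rate):
    """Discount a yearly cash-flow stream (year 0 first) at the given rate."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))

# Hypothetical 20-year streams, in millions of dollars.
benefits = [0, 0, 100, 250, 400] + [520] * 15    # savings ramp up after deployment
costs    = [300, 350, 300, 200, 150] + [75] * 15  # development first, then sustainment

rate = 0.028  # assumed real discount rate
npv = present_value(benefits, rate) - present_value(costs, rate)
bcr = present_value(benefits, rate) / present_value(costs, rate)

print(f"NPV: ${npv:,.0f}M, benefit/cost ratio: {bcr:.2f}")
```

Discounting matters in such an analysis because the benefits accrue late, after deployment, while most costs are incurred early, so the ratio of present values is lower than the ratio of the raw totals.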
While these benefit estimates were not subject to any analysis of how uncertainty in assumptions and data could impact the estimates, as called for by relevant guidance, our examination of key uncertainty variables, such as the timing of legacy systems’ retirement, showed that the savings impact would be relatively minor. However, the reliability of the cost estimate is limited because it was derived using several, but not all, key estimating practices. For example, the estimate was not grounded in a historical record of comparable data from similar programs and was not based on a reliable schedule baseline, which are both necessary to having a cost estimate that can be considered credible and accurate. These practices were not employed for various reasons, including DOD’s lack of historical data from similar programs and the lack of an integrated master schedule for the program that includes all releases. Notwithstanding the fact that these limitations could materially increase the $2.4 billion cost estimate, it is nevertheless unlikely that these factors would increase the estimate to a level approaching the program’s benefit expectations. Therefore, we have no reason to believe that Navy ERP will not produce a positive return on investment. Forecasting expected benefits over the life of a program is a key aspect of economically justifying an investment. The Office of Management and Budget (OMB) guidance advocates economically justifying investments on the basis of expected benefits, costs, and risks. Since estimates of benefits can be uncertain because of the imprecision in both the underlying data and modeling assumptions used, the guidance also provides for analyzing and reporting the effects of this uncertainty. By doing this, informed investment decision making can occur through the life of the program, and a baseline can be established against which to compare the accrual of actual benefits from deployed system capabilities. 
The most recent economic analysis, dated August 2007, includes monetized benefit estimates for fiscal years 2004–2023, in three key areas—about $2.7 billion in legacy system cost savings, $3.3 billion in cost savings from inventory reductions, and $2.7 billion in cost savings from labor productivity improvements. Collectively, these benefits total about $8.6 billion. The program office calculated expected benefits in terms of cost savings, which is consistent with established practices and guidance. For example, the program is to result in the retirement of 138 legacy systems (including the 4 pilot systems) between fiscal years 2005 and 2015, and the yearly maintenance costs for a single system are expected to be as high as about $39 million. According to relevant guidance, cost saving estimates should also be analyzed in terms of how uncertainty in assumptions and data could impact them. However, the program office did not perform such uncertainty analysis. According to program officials, uncertainty analysis is not warranted because they have taken and continue to take steps to validate the assumptions and the data, such as using the latest budget data associated with the legacy systems, and monitoring changes to the systems’ retirement dates. While these steps are positive, they do not eliminate the need for uncertainty analysis. Accordingly, we assessed key uncertainty variables, such as the timing of the legacy systems’ retirement, and found that the retirement dates of some of these systems have changed since the estimate was prepared, due to, among other things, schedule delays in the program. While the inherent uncertainty in these dates would reduce expected savings (e.g., only $11 million based on the 134 legacy systems that we examined), the reduction would be small relative to a total benefit estimate of $8.6 billion. 
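A minimal sketch of the kind of uncertainty analysis called for by the guidance, in which an assumption such as a legacy system's retirement date is varied and the expected savings recomputed, might look like the following. The system list, maintenance costs, and retirement-date ranges are hypothetical (the report notes only that yearly maintenance for a single system can run as high as about $39 million).

```python
import random

# Sketch of an uncertainty analysis on legacy-system cost savings: each
# retired system is assumed to save its yearly maintenance cost for every
# year between its (uncertain) retirement date and the end of the horizon.
# The systems, costs, and date ranges below are hypothetical.

HORIZON_END = 2023

# (yearly maintenance cost in $M, earliest retirement, latest retirement)
legacy_systems = [
    (39.0, 2008, 2011),
    (12.5, 2009, 2012),
    (4.2, 2010, 2013),
]

def savings(retirement_years):
    """Total savings if each system retires in the given year."""
    return sum(cost * (HORIZON_END - year)
               for (cost, _, _), year in zip(legacy_systems, retirement_years))

# Point estimate: assume every system retires at its earliest date.
baseline = savings([earliest for _, earliest, _ in legacy_systems])

# Monte Carlo: draw retirement dates uniformly from each system's range
# and average the resulting savings.
random.seed(0)
draws = [savings([random.randint(lo, hi) for _, lo, hi in legacy_systems])
         for _ in range(10_000)]
expected = sum(draws) / len(draws)

print(f"Baseline savings: ${baseline:.0f}M; "
      f"expected savings under uncertainty: ${expected:.0f}M")
```

Because slipped retirement dates can only shorten the savings window, the expected value under uncertainty sits below the point estimate, which is the same direction of effect the examination of the 134 legacy systems found.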
A reliable cost estimate is a key variable in calculating return on investment, and it provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction, and accountability for results. According to OMB, programs must maintain current and well-documented cost estimates, and these estimates must encompass the full life cycle of the program. OMB states that generating reliable cost estimates is a critical function necessary to support OMB’s capital programming process. Without reliable estimates, programs are at increased risk of experiencing cost overruns, missed deadlines, and performance shortfalls. Our research has identified a number of practices that are the basis of effective program cost estimating. We have issued guidance that associates these practices with four characteristics of a reliable cost estimate. Specifically, these four characteristics are as follows: Comprehensive: The cost estimates should include both government and contractor costs over the program’s full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance to retirement. They should also provide an appropriate level of detail to ensure that cost elements are neither omitted nor double counted and include documentation of all cost-influencing ground rules and assumptions. Well-documented: The cost estimates should have clearly defined purposes and be supported by documented descriptions of key program or system characteristics (e.g., relationships with other systems, performance parameters). Additionally, they should capture in writing such things as the source data used and their significance, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference.
Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources. The final cost estimate should be reviewed and accepted by management on the basis of confidence in the estimating process and the estimate produced by the process. Accurate: The cost estimates should provide for results that are unbiased and should not be overly conservative or optimistic (i.e., they should represent most likely costs). In addition, the estimates should be updated regularly to reflect material changes in the program, and steps should be taken to minimize mathematical mistakes and their significance. Among other things, the estimate should be grounded in a historical record of cost estimating and actual experiences on comparable programs. Credible: The cost estimates should discuss any limitations in the analysis performed due to uncertainty or biases surrounding data or assumptions. Further, the estimates’ derivation should provide for varying any major assumptions and recalculating outcomes based on sensitivity analyses, and their associated risks and inherent uncertainty should be disclosed. Also, the estimates should be verified based on cross-checks using other estimating methods and by comparing the results with independent cost estimates. The $2.4 billion life cycle cost estimate for Navy ERP reflects many of the practices associated with a reliable cost estimate, including all practices associated with being comprehensive and well-documented, and several related to being accurate and credible (see table 5). However, several important practices related to accuracy and credibility were not performed. To be reliable, a cost estimate should be comprehensive, well-documented, accurate, and credible.
The cost estimate is comprehensive because it includes both the government and contractor costs specific to development, acquisition (nondevelopment), implementation, and operations and support over the program’s 20-year life cycle. Moreover, the estimate clearly describes how the various subelements are aggregated to produce amounts for each cost category, thereby ensuring that all pertinent costs are included, and no costs are double counted. Finally, cost-influencing ground rules and assumptions, such as the program’s schedule, labor rates, and inflation rates, are documented. The cost estimate is also well-documented in that the purpose of the cost estimate is clearly defined, and the technical baseline includes, among other things, the hardware components and planned performance parameters. Furthermore, the calculations and results used to derive the estimate are documented, including descriptions of the methodologies used and evidence of traceability back to source data (e.g., vendor quotes, salary tables). Also, the cost estimate was reviewed by the Naval Center for Cost Analysis and the Office of the Secretary of Defense, Director for Program Analysis and Evaluation, which adds a level of confidence in the estimating process and the estimate produced. However, the estimate lacks accuracy because not all important practices related to this characteristic were performed. Specifically, while the estimate is grounded in documented assumptions (e.g., hardware refreshment every 5 years) and periodically updated to reflect changes to the program, it is not adequately grounded in historical experience with comparable programs. While the program office did leverage historical cost data from the Navy ERP pilot programs, program officials told us that the level of cost accounting on these programs did not provide sufficient data.
As stated in our guide, estimates should be based on historical records of cost and schedule estimates from comparable programs, and such historical data should be maintained and used for evaluation purposes and future estimates on comparable programs. The importance of doing so is evident by the fact that Navy ERP’s cost estimate has increased by about $570 million since fiscal year 2003, which program officials attributed to, among other things, site implementation costs (e.g., training and converting legacy system data) not included in the original cost estimate, schedule delays, and the lack of historical data from similar ERP programs. This lack of cost data for large-scale ERP programs is, in part, due to DOD not having a standardized cost element structure for these programs that can be used for capturing actual cost data, which is a prerequisite to capturing and maintaining the kind of historical data that can inform cost estimates on similar programs. This means that programs like Navy ERP will not be able to ground their cost estimates in actual costs from comparable programs. According to officials with the Defense Cost and Resource Center, such cost element structures are needed, along with a requirement for programs to report on their costs, but approval and resources have yet to be gained for either these structures or the reporting of their costs. We recently completed work that addressed standardization of DOD’s ERP cost element structure and maintenance of a database for historical ERP cost data for use on ERP programs. Compounding the estimate’s limited accuracy are limitations in its credibility. 
Specifically, while the estimate satisfies some of the key practices for a credible cost estimate (e.g., confirming key cost drivers, performing sensitivity analyses, and having an independent cost estimate prepared by the Naval Center for Cost Analysis that was within 11 percent of the program’s estimate), the program lacks a reliable schedule baseline, which is a key component of a reliable cost estimate because it serves as the basis for future work to be performed. Other factors that limit confidence in the cost estimate’s accuracy are (1) past increases in the program’s cost estimate (as discussed earlier) and (2) trends in EVM data (as discussed later). Taken together, the program’s cost estimate is not sufficiently credible and accurate and thus not reliable. While important cost estimating practices were not implemented, it is nevertheless unlikely that these limitations would materially increase the $2.4 billion cost estimate to a level approaching the program’s $8.6 billion benefit expectations. Measuring and reporting progress against cost and schedule commitments (i.e., baselines) is a vital element of effective program management. EVM provides a proven means for measuring such progress and thereby identifying potential cost overruns and schedule delays early, when their impact can be minimized. To its credit, the program has elected to implement program-level EVM, which is a best practice that has rarely been implemented in the federal government. In doing so, however, basic EVM activities have not been executed. In particular, an integrated baseline review, which is to verify that the program’s cost and schedule are reasonable given the program’s scope of work and associated risks, has not been performed. Moreover, other accepted industry standards have not been sufficiently implemented, and surveillance of EVM implementation by an entity independent of the program office has not occurred. 
Not performing these important practices has contributed to the cost overruns and lengthy schedule delays already experienced on Release 1.0 and will likely result in more. In fact, our analysis of the latest estimate to complete just the budgeted development work for all three releases, which is about $844 million, shows that this estimate will most likely be exceeded by about $152 million. As we previously reported, EVM offers many benefits when done properly. In particular, it allows performance to be measured, and it serves as an early warning system for deviations from plans. It therefore enables a program office to mitigate the risks of cost and schedule overruns. OMB policy recognizes the use of EVM as an important part of program management and decision making. Implementing EVM at the program level rather than just the contract level is considered a best practice, and OMB recently began requiring it to measure how well a program’s approved cost, schedule, and performance goals are being met. According to OMB, integrating government and contractor cost, schedule, and performance status should result in better program execution through more effective management. In addition, integrated EVM data can be used to better justify budget requests. To minimize the risk associated with its decision to transition responsibility for Navy ERP system integration from the contractor to the government and to improve cost and schedule performance, the program office elected in October 2006 to perform EVM at the program level. We support the use of program-level EVM. However, if not implemented effectively, this program-level approach will be of little value. A fundamental aspect of effective EVM is the development of a performance measurement baseline (PMB), which represents the cumulative value of planned work and serves as the baseline against which variances are calculated.
According to relevant best practice guidance, a PMB consists of a complete work breakdown structure, a complete integrated master schedule, and accurate budgets for all planned work. To validate the PMB, an integrated baseline review is performed to obtain stakeholder agreement on the baseline. According to DOD guidance and best practices, such a review should be held within 6 months of a contract award and conducted on an as-needed basis throughout the life of a program to ensure that the baseline reflects (1) all tasks in the statement of work, (2) adequate resources (staff and materials) to complete the tasks, and (3) integration of the tasks into a well-defined schedule. Further, the contract performance reports that are to be used to monitor performance against the PMB should be validated during the integrated baseline review. The program office has satisfied some of the prerequisites for having a reliable PMB, such as developing a work breakdown structure and specifying the contract performance reports that are to be used to monitor performance. However, it has not conducted an integrated baseline review. Specifically, a review was not conducted for Release 1.0, even though the contract was finalized about 30 months ago (January 2006). Also, while the review for Release 1.1 was recently scheduled for August 2008, this is 8 months later than when such a review should be held, according to DOD guidance and best practices. This means that the reasonableness of the program’s scope and schedule relative to the program risks has not been assured and has likely been, and will likely continue to be, a primary contributor to future cost increases and schedule delays. According to program officials, a review was not performed on the first release because development of this release was largely complete by the time the program office established the underlying capabilities needed to perform program-level EVM.
In addition, program officials stated that an integrated baseline review has yet to be performed on the other two releases because their priority has been on deploying and stabilizing the first release. In our view, not assuring the validity of the PMB precludes effective implementation of EVM. Until a review is conducted, DOD will not have reasonable assurance that the program’s scope and schedule are achievable, and thus, additional cost and schedule overruns are likely. In 1996, DOD adopted industry EVM guidance that identifies 32 essential practices organized into five categories: (1) organization; (2) planning, scheduling and budgeting; (3) accounting; (4) analysis and management reports; and (5) revisions and data maintenance. DOD requires that all programs’ implementation of EVM undergo a compliance audit against the 32 industry practices. In addition, DOD policy and guidance state that independent surveillance of EVM should occur over the life of the program to guarantee the validity of the performance data and ensure that EVM is being used effectively to manage cost, schedule, and technical performance. On Navy ERP, compliance with the 32 accepted industry practices has not been verified, and surveillance of EVM by an independent entity has not occurred. Therefore, the program does not have the required basis for ensuring that EVM is being effectively implemented on Navy ERP. According to program officials, surveillance was performed by NAVAIR for Release 1.0. However, NAVAIR officials said that they did not perform such surveillance because they did not receive the Release 1.0 cost performance data needed to do so. Program officials also stated that DON’s Center for Earned Value Management has conducted an initial assessment of their EVM management system, and that they intend to have the Center perform surveillance. However, they did not have a plan for accomplishing this. 
Until compliance with the standards is verified, continuous surveillance occurs, and deviations are addressed, the program will likely continue to experience cost overruns and schedule delays. The success of any program depends in part on having a reliable schedule of when the program’s work activities will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for the systematic execution of a program but also provides the means by which to estimate costs, gauge progress, identify and address potential problems, and promote accountability. Our research has identified nine practices associated with effective schedule estimating. These practices are (1) capturing key activities, (2) sequencing key activities, (3) establishing the duration of key activities, (4) assigning resources to key activities, (5) integrating key activities horizontally and vertically, (6) establishing the critical path for key activities, (7) identifying “float time” between key activities, (8) distributing reserves to high-risk activities, and (9) performing a schedule risk analysis. The program’s estimated schedule was developed using some of these practices, but it did not fully employ several key practices that are fundamental to having a schedule that provides a sufficiently reliable basis for estimating costs, measuring progress, and forecasting slippages. On the positive side, the schedule for the first two releases captures key activities and their durations and is integrated horizontally and vertically, meaning that multiple teams executing different aspects of the program can effectively work to the same master schedule. Moreover, for these two releases, the program has established float time between key activities and distributed schedule reserve to high-risk activities. However, the program has not adequately sequenced and assigned resources to key program activities.
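Of the nine practices above, establishing the critical path can be illustrated with a minimal forward-pass computation over a toy activity network; the activity names, dependencies, and durations below are hypothetical, for illustration only, and do not represent Navy ERP's actual schedule:

```python
# Minimal critical-path computation over a toy activity network.
# Each activity maps to (duration in months, list of predecessors).
# All names and durations here are hypothetical.
activities = {
    "design":  (4, []),
    "build":   (6, ["design"]),
    "convert": (3, ["design"]),       # e.g., legacy data conversion
    "test":    (2, ["build", "convert"]),
    "deploy":  (1, ["test"]),
}

def critical_path(acts):
    """Forward pass to get each activity's earliest finish, then walk
    back through the driving predecessors to recover the critical path."""
    finish = {}

    def earliest_finish(name):
        if name not in finish:
            dur, preds = acts[name]
            finish[name] = dur + max(
                (earliest_finish(p) for p in preds), default=0
            )
        return finish[name]

    end = max(acts, key=earliest_finish)  # activity finishing last
    path = [end]
    # At each step, follow the predecessor that determined the finish time.
    while acts[path[-1]][1]:
        _, preds = acts[path[-1]]
        path.append(max(preds, key=lambda p: finish[p]))
    return list(reversed(path)), finish[end]

path, total = critical_path(activities)
print(" -> ".join(path), f"({total} months)")
```

Any delay to an activity on the returned path delays the whole program, which is why a critical path that spans only one release, rather than all three, cannot show the program-wide impact of a slip.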
Moreover, the estimated schedule for the first increment is not grounded in an integrated master schedule of all the releases, and thus the schedule for this increment does not reflect the program’s critical path of work that must be performed to achieve the target completion date. Also, it does not reflect the results of a schedule risk analysis across all three releases with schedule reserve allocated to high-risk activities because such risks were not examined. See table 6 for the results of our analyses relative to each of the nine practices. According to program documentation, the program office plans to address the logical sequencing of activities (to ensure that it reflects how work is to be performed), but program officials stated that they do not plan to combine all three releases into a single integrated master schedule for the entire first increment of the program because doing so would produce an overly complex and nonexecutable schedule involving as many as 15,000 activities. However, our research of and experience in evaluating major programs’ use of EVM and integrated master schedules show that while large, complex programs necessitate schedules involving thousands of activities, successful programs ensure that their schedules integrate these activities. In our view, not adequately performing these practices does not allow the program to effectively assign resources, identify the critical path, and perform a schedule risk analysis that would allow it to understand, disclose, and compensate for its schedule risks. This means that the program is not well-positioned to understand progress and forecast its impact. To illustrate, the program recently experienced delays in deploying its first release at NAVAIR, which, according to a recent operational test and evaluation report, has significantly affected the schedule’s critical path.
These schedule impacts occurred because resources supporting the deployment at NAVAIR began to shift to the next scheduled deployment site and thus are no longer available to resolve critical issues at NAVAIR. Since the schedule baseline is not integrated across all releases, the impact of this delay on other releases, and thus the program as a whole, cannot be readily determined. Program data show a pattern of actual cost overruns and schedule delays between January 2007 and May 2008. Moreover, our analysis of the data supports a most likely program cost growth of about $152 million to complete all three releases. Differences from the PMB are measured in both cost and schedule variances. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate that activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in forecasting the cost and time needed to complete the program. Based on program-provided data for the first increment over a 17-month period ending May 2008, the program has experienced negative cost variances. Specifically, while these cost variances have fluctuated during this period, they have consistently been negative. (See fig. 4.) Moreover, our analysis shows that the cost to complete just the budgeted development work (also known as the PMB) for all three releases, about $844 million, will be exceeded by between about $102 million and $316 million, with a most likely overrun of about $152 million. In contrast, the program office reports that the overrun at completion will be $55 million but has yet to provide us with documentation supporting this calculation. Moreover, our calculation does not reflect the recent problems discovered during the operational test and evaluation at NAVAIR, and thus the overrun is likely to be higher.
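The variance and estimate-at-completion arithmetic described above can be sketched with the standard EVM formulas; the input figures below are illustrative placeholders (in millions of dollars), not the program's actual monthly data:

```python
# Standard earned value metrics from the four basic EVM inputs
# (all dollar figures in $M; the sample values below are hypothetical).

def evm_metrics(bcws, bcwp, acwp, bac):
    """bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed
    bac:  budget at completion (here, the PMB total)."""
    cv = bcwp - acwp   # cost variance: negative means overrunning cost
    sv = bcwp - bcws   # schedule variance: negative means behind schedule
    cpi = bcwp / acwp  # cost performance index
    spi = bcwp / bcws  # schedule performance index
    # Three common estimate-at-completion (EAC) formulas, from optimistic
    # (remaining work done on budget) to pessimistic (cost and schedule
    # pressure both continue); the CPI-based middle value is often
    # treated as the most likely outcome.
    eac_low = acwp + (bac - bcwp)
    eac_mid = bac / cpi
    eac_high = acwp + (bac - bcwp) / (cpi * spi)
    return cv, sv, eac_low, eac_mid, eac_high

cv, sv, lo, mid, hi = evm_metrics(bcws=500.0, bcwp=476.0, acwp=560.0, bac=844.0)
print(f"CV = {cv:.0f}M, SV = {sv:.0f}M")
print(f"EAC range: {lo:.0f}M / {mid:.0f}M (likely) / {hi:.0f}M")
```

Negative CV and SV values signal cost overruns and schedule slips, and the three EAC formulas bracket an optimistic-to-pessimistic range of likely completion costs, which is the general form of the low/most-likely/high analysis described in this report.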
During this same 17-month period, the program has experienced negative schedule variances and, since January 2008, they have almost doubled each month. Further, as of May 2008, the program had not completed about $24 million in scheduled work. (See fig. 5.) An inability to meet schedule performance is a frequent indication of future cost increases, as more spending is often necessary to resolve schedule delays. Because the program office has not performed important reliability checks, such as EVM validation and integrated baseline reviews, as discussed above, we cannot be certain that the PMB is reliable (i.e., reflects all the work to be done and has identified all the risks). As a result, the overrun that we are forecasting could be higher. By not executing basic EVM practices, the program has experienced, and will likely continue to experience, cost and schedule shortfalls. Until the program office implements these important EVM practices, it will likely not be able to track actual program costs and schedules close to estimates. Well-defined and managed requirements are recognized by DOD guidance as essential, and they can be viewed as a cornerstone of effective system acquisition. One aspect of effective requirements management is requirements traceability. By tracing requirements both backward from system requirements to higher-level business or operational requirements and forward to system design specifications and test plans, the chances of the deployed product satisfying requirements are increased, and the ability to understand the impact of any requirement changes, and thus to make informed decisions about them, is enhanced. The program office is effectively implementing requirements traceability for its 1,733 Release 1.0 system requirements.
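The backward and forward tracing just described can be sketched as a simple link check; the requirement identifiers and link fields below are hypothetical, and this is not DOORS's actual data model or API:

```python
# Hypothetical sketch of bidirectional requirements traceability: each
# system requirement should link backward to a higher-level requirement
# and forward to a design specification and a test case.

requirements = {
    "SYS-101": {"parent": "BUS-7", "design": "DS-14", "test": "TC-33"},
    "SYS-102": {"parent": "BUS-7", "design": "DS-15", "test": None},  # no test
    "SYS-103": {"parent": None, "design": "DS-16", "test": "TC-34"},  # no parent
}

def traceability_gaps(reqs):
    """Return IDs of requirements missing a backward (parent) or
    forward (design spec or test case) link."""
    return sorted(
        rid
        for rid, links in reqs.items()
        if not (links["parent"] and links["design"] and links["test"])
    )

print(traceability_gaps(requirements))
```

A requirement that surfaces in the gap list lacks a link in one direction, which is exactly the kind of break that would prevent assessing the impact of a requirement change.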
To verify this traceability, we randomly selected and analyzed 60 of the 1,733 system requirements and confirmed that 58 of the 60 were traceable both backward to higher-level requirements and forward to design specifications and test results. The remaining 2 had been allocated to the other releases, and thus we also confirmed the program’s ability to maintain traceability between product releases. In doing so, the program utilized a tool called DOORS, which, if implemented properly, allows each requirement to be linked from its most conceptual definition to its most detailed definition, as well as to design specifications and test cases. In effect, the tool maintains the linkages among requirement documents, design documents, and test cases even if requirements change. If DON continues to effectively implement requirements traceability, it will increase the chances that system requirements will be met by the deployed system. Proactively managing program risks is a key acquisition management control and, if defined and implemented properly, it can increase the chances of programs delivering promised capabilities and benefits on time and within budget. To the program office’s credit, it has defined a risk management process that meets relevant guidance. However, it has not effectively implemented the process for all identified risks. As a result, these risks have not been proactively mitigated and have either contributed to cost and schedule shortfalls or could potentially contribute to such shortfalls. DOD acquisition management guidance, as well as other relevant guidance, advocates identifying facts and circumstances that can increase the probability of an acquisition’s failing to meet cost, schedule, and performance commitments and then taking steps to reduce the probability of their occurrence and impact.
In brief, effective risk management consists of (1) establishing a written plan for managing risks; (2) designating responsibility for risk management activities; (3) encouraging program-wide participation in the identification and mitigation of risks; (4) defining and implementing a process that provides for the identification, analysis, and mitigation of risks; and (5) examining the status of identified risks in program milestone reviews. The program office has developed a written plan for managing risks and established a process that together provide for the above-cited risk management practices. Moreover, it has largely followed its plan and process, as the following examples show:

- The program manager has been assigned overall responsibility for managing risks and serves as the chair of the risk management board. Also, a functional team lead (i.e., subject matter expert) is assigned responsibility for analyzing and mitigating each identified risk.

- Program-wide participation in the identification, analysis, and mitigation of risks is encouraged. Specifically, a manager for each release is responsible for providing risk management guidance to the staff, which includes staff identification and analysis of risks. Also, according to the program office’s risk management plan, all program personnel can submit a risk for approval. In addition, stakeholders participate in risk management activities during acquisition milestone reviews.

- The program office has identified and categorized individual risks. As of June 2008, the risk database contained 15 active risks—3 high, 8 medium, and 4 low.

- Program risks are considered during program milestone reviews. For example, during the program’s critical design review, which is a key event of the system development and demonstration phase, key risks regarding implementing new business processes and legacy system changes were discussed.
Furthermore, the program manager receives a monthly risk report that describes the status of program risks. However, the program office has not consistently followed other aspects of its process. In particular, it has not effectively implemented steps for mitigating the risks associated with (1) converting data from NAVAIR’s legacy systems to run on Navy ERP and (2) positioning NAVAIR for adopting the new business processes embedded in Navy ERP. As we have previously reported, it is important for organizations that are to operate and use commercial off-the-shelf software products, such as Navy ERP, to proactively manage and position themselves for the organizational impact of introducing functionality embedded in the commercial products. If they do not, the organization’s performance will suffer. To the program office’s credit, it identified numerous risks associated with data conversion and organizational change management and developed and implemented strategies that were intended to mitigate these risks. However, it closed these risks even though they were never effectively mitigated, as evidenced by the results of the recently completed DON operational test and evaluation. According to the June 2008 operational test and evaluation report for NAVAIR, significant problems relating to both legacy system data conversion and adoption of new business processes were experienced. The report states that these problems have contributed to increases in the costs to operate the system, including unexpected manual effort. It further states that these problems have rendered the deployed version not operationally effective and that deployment of the system to other sites should not occur until the change management process has been analyzed and improved.
It also attributed the realization of the problems to the program office and NAVAIR not having adequately engaged and communicated early with each other to coordinate and resolve differences in organizational perspectives and priorities and provide intensive pre-deployment preparation and training. Program officials acknowledged these shortcomings and attributed them to their limited authority over the commands. In this regard, they have previously surfaced these risks with department oversight and approval authorities, but these authorities did not take actions to ensure that the risks were effectively mitigated. Beyond not effectively mitigating these risks, the program office has not ensured that all risks are captured in the risk inventory. For example, the inventory does not include the risks described in this report that are associated with not having adequately demonstrated the program’s alignment to the federated BEA and not having implemented program-level EVM in a manner that reflects industry practices. This means that these risks are not being disclosed or mitigated. Because not all risks associated with the program have been effectively addressed, these risks can become, and have become, problems that contribute to cost and schedule shortfalls. Until all significant risks are proactively addressed, including ensuring that all associated mitigation steps are implemented and that they accomplish their intended purpose, the program will likely experience further problems at subsequent deployment sites. DOD’s success in delivering large-scale business systems, such as Navy ERP, is in large part determined by the extent to which it employs the kind of rigorous and disciplined IT management controls that are reflected in department policies and related guidance. While implementing these controls does not guarantee a successful program, it does minimize a program’s exposure to risk and thus the likelihood that it will fall short of expectations.
In the case of Navy ERP, living up to expectations is important because the program is large, complex, and critical to addressing the department’s long-standing problems related to financial transparency and asset visibility. The effectiveness with which key IT management controls have been implemented in Navy ERP varies, with one control and several aspects of others being effectively implemented, and others less so. Moreover, those controls that have not been effectively implemented have, in part, contributed to the sizable cost and schedule shortfalls experienced to date on the program. Unless this changes, more shortfalls can be expected. While the program office is primarily responsible for ensuring that effective IT management controls are implemented, other oversight and stakeholder organizations share responsibility. For example, even though the program has not demonstrated its alignment with the federated BEA, it nevertheless followed established DOD architecture compliance guidance and used the related compliance assessment tool in assessing and asserting its compliance. The root cause for not demonstrating compliance is thus not traceable to the program office but rather to, among other things, the limitations of the compliance guidance and tool and the failure of the program’s oversight entities to validate the compliance assessment and assertion. Also, the program’s cost estimate was not informed by the cost experiences of other programs of the same size and scope because DOD does not have a standard ERP cost element structure and has not maintained a historical database of costs from like programs. In contrast, effective implementation of other management controls, such as implementing EVM, requirements traceability, and risk management, is the responsibility of the program office.
All told, addressing the management control weaknesses requires the combined efforts of the various organizations that share responsibility for managing and overseeing the program. By doing so, the department can better assure itself that Navy ERP will optimally support its performance goals and will deliver promised capabilities and benefits on time and within budget. Because we recently completed work that more broadly addresses the above-cited architectural alignment and comparable program cost data limitations, we are not making recommendations in this report for addressing them. To strengthen Navy ERP management control and better provide for the program’s success, we are making the following recommendations: To improve the reliability of Navy ERP benefit estimates and cost estimates, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that future Navy ERP estimates include uncertainty analyses of estimated benefits, reflect the risks associated with not having cost data for comparable ERP programs, and are otherwise derived in full accordance with the other key cost estimating and economic analysis practices discussed in this report. To enhance Navy ERP’s use of EVM, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that (1) an integrated baseline review on the last two releases of the first increment is conducted, (2) compliance against the 32 accepted industry EVM practices is verified, and (3) a plan to have an independent organization perform surveillance of the program’s EVM system is developed and implemented.
To increase the quality of the program’s integrated master schedule, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the schedule (1) includes the logical sequencing of all activities, (2) reflects whether all required resources will be available when needed, (3) defines a critical path that integrates all three releases, (4) allocates reserve for the high-risk activities on the entire program’s critical path, and (5) incorporates the results of a schedule risk analysis for all three releases and recalculates program cost and schedule variances to more accurately determine a most likely cost and schedule overrun. To improve Navy ERP’s management of program risks, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that (1) the plans for mitigating the risks associated with converting data from legacy systems to Navy ERP and positioning the commands for adopting the new business processes embedded in the Navy ERP are re-evaluated in light of the recent experience with NAVAIR and adjusted accordingly, (2) the status and results of these and other mitigation plans’ implementation are periodically reported to program oversight and approval authorities, (3) these authorities ensure that those entities responsible for implementing these strategies are held accountable for doing so, and (4) each of the risks discussed in this report are included in the program’s inventory of active risks and managed accordingly. In written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, DOD stated that it concurred with two of our four recommendations and partially concurred with the remaining two.
Further, it stated that it has taken steps to address some of our recommendations, adding that it is committed to implementing recommendations that contribute to the program’s success. The department’s comments relative to both of the recommendations that it partially concurred with, as well as additional comments, are discussed below. For our recommendation associated with improving the program’s benefit and cost estimates, DOD concurred with two of the recommendation’s three parts, but it did not concur with one part—ensuring that future cost estimates reflect the risk of not having cost data for comparable programs. While acknowledging that the program had limited cost data from comparable programs on which to base its cost estimate, DOD stated that an uncertainty analysis had been applied to the estimate to account for the risk associated with not having such data. The department further stated that actual experience on the program will continue to be used to refine the program’s cost estimating methodology. While we support DOD’s stated commitment to using actual program cost experience in deriving future estimates, we do not agree that the latest estimate accounted for the risk of not having cost data from comparable programs. We examined the uncertainty analysis as part of our review, and found that it did not recognize this risk. Moreover, DOD’s comments offered no new evidence to the contrary. For our recommendation associated with improving the program’s schedule estimating, DOD concurred with four of the recommendation’s five parts, and partially concurred with one part—ensuring that the schedule defines a critical path that integrates all releases. In taking this position, the department stated that a critical path has been established for each release rather than across all three releases, and it attributes this to the size and complexity of the program. We do not take issue with either of these statements, as they are already recognized in our report. 
However, DOD offers no new information in its comments. Further, our report also recognizes that to be successful, large and complex programs that involve thousands of activities need to ensure that their schedules integrate these activities. In this regard, we support the department’s commitment to explore the feasibility of implementing this part of our recommendation. In addition, while stating that it concurred with all parts of our recommendation associated with improving the program’s use of EVM, DOD nevertheless provided additional comments as justification for having not conducted an integrated baseline review on Release 1.0. Specifically, it stated that when it rebaselined this release in December 2006, the release’s development activities were essentially complete and the release was in the latter stages of testing. Further, it stated that the risks associated with the Release 1.0 schedule were assessed 3 months after this rebaselining, and these risks were successfully mitigated. To support this statement, it said that Release 1.0 achieved its “Go-Live” as scheduled at NAVAIR. We do not agree with these comments for several reasons. First, at the time of the rebaselining, about 9 months of scheduled Release 1.0 development remained, and thus the release was far from complete. Moreover, the significance of the amount of work that remained, and still remains today, on Release 1.0 is acknowledged in DOD’s own comment that the scheduled integrated baseline review for Release 1.1 will also include remaining Release 1.0 work. Second, the Release 1.0 contract was awarded in January 2006, and DOD’s own guidance requires that an integrated baseline review be conducted within 6 months of a contract’s award. Third, although DOD states that the program achieved “Go-Live” as scheduled on October 1, 2007, the program achieved initial operational capability 7 months later than established in the December 2006 baseline.
In addition to these comments, the department also described actions under way or planned to address our recommendations. We support the actions described, as they are consistent with the intent of our recommendations. If fully and properly implemented, these actions will go a long way in addressing the management control weaknesses that our recommendations are aimed at correcting. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Congressional Budget Office; the Secretary of Defense; and the Department of Defense Office of the Inspector General. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3439 or hiter@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objective was to determine whether the Department of the Navy is effectively implementing information technology management controls on the Navy Enterprise Resource Planning (Navy ERP) program. To accomplish this, we focused on the first increment of Navy ERP and the following management areas: (1) architectural alignment, (2) economic justification, (3) earned value management (EVM), (4) requirements management, and (5) risk management.
To determine whether Navy ERP was aligned with the Department of Defense’s (DOD) federated business enterprise architecture (BEA), we reviewed the program’s BEA compliance assessments and system architecture products, as well as Versions 4.0, 4.1, and 5.0 of the BEA, and compared them with the BEA compliance requirements described in the Fiscal Year 2005 Defense Authorization Act and DOD’s BEA compliance guidance, and we evaluated the extent to which the compliance assessments addressed all relevant BEA products. We also determined the extent to which the program-level architecture documentation supported the BEA compliance assessments. We obtained documentation, such as the BEA compliance assessments from the Navy ERP and Global Combat Support System—Marine Corps programs, as well as the Air Force’s Defense Enterprise Accounting and Management System and Air Force Expeditionary Combat Support System programs. We then compared these assessments to identify potential redundancies or opportunities for reuse and determined if the compliance assessments examined duplication across programs, and if the tool that supports these assessments is being used to identify such duplication. In doing so, we interviewed program officials and officials from the Department of the Navy, Office of the Chief Information Officer and reviewed recent GAO reports to determine the extent to which the programs were assessed for compliance against the Department of the Navy enterprise architecture. We also interviewed program officials and officials from the Business Transformation Agency and the Department of the Navy, including the logistics Functional Area Manager, and obtained guidance documentation from these officials to determine the extent to which the compliance assessments were subject to oversight or validation. 
To determine whether the program had economically justified its investment in Navy ERP, we reviewed the latest economic analysis to determine the basis for the cost and benefit estimates. This included evaluating the analysis against Office of Management and Budget guidance and GAO’s Cost Assessment Guide. In doing so, we interviewed cognizant program officials, including the program manager and cost analysis team, regarding their respective roles, responsibilities, and actual efforts in developing and/or reviewing the economic analysis. We also interviewed officials at the Office of Program Analysis and Evaluation and Naval Center for Cost Analysis as to their respective roles, responsibilities, and actual efforts in developing and/or reviewing the economic analysis. We did not verify the validity of the source data used to calculate estimated benefits, such as those data used to determine the yearly costs associated with legacy systems planned for retirement. To determine the extent to which the program had effectively implemented EVM, we reviewed relevant documentation, such as contract performance reports, acquisition program baselines, performance measurement baseline, and schedule estimates and compared them with DOD policies and guidance. To identify trends that could affect the program baseline in the future, we assessed cost and schedule performance and, in doing so, we applied earned value analysis techniques to data from contract performance reports. We compared the cost of work completed with the budgeted costs for scheduled work over a 17-month period, from January 2007 to May 2008, to show trends in cost and schedule performance. We also used data from the reports to estimate the likely costs at completion of the program through established earned value formulas. This resulted in three different values, with the middle value being the most likely. 
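The earned value analysis described above (comparing work completed against budgeted and actual costs, and deriving three estimates at completion with the middle value treated as most likely) can be sketched as follows. The formulas used here are the standard EVM estimate-at-completion variants; the report does not spell out GAO's exact equations, and the input figures are invented for illustration.

```python
def evm_metrics(bcws, bcwp, acwp, bac):
    """Standard earned value formulas (assumed; the report does not give
    GAO's exact equations).

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed
    bac:  budget at completion
    """
    cpi = bcwp / acwp  # cost performance index (<1 means over cost)
    spi = bcwp / bcws  # schedule performance index (<1 means behind schedule)
    return {
        "cost_variance": bcwp - acwp,      # negative = cost overrun
        "schedule_variance": bcwp - bcws,  # negative = behind schedule
        "cpi": cpi,
        "spi": spi,
        # Three common estimates at completion (EAC). When CPI and SPI are
        # both below 1, the BAC/CPI value falls in the middle and is often
        # treated as the most likely outcome.
        "eac_optimistic": acwp + (bac - bcwp),  # remaining work stays on budget
        "eac_likely": bac / cpi,                # current cost efficiency continues
        "eac_pessimistic": acwp + (bac - bcwp) / (cpi * spi),
    }

# Illustrative figures only: $500M of work scheduled, $400M of work earned,
# $500M actually spent, against a $1,000M budget at completion.
m = evm_metrics(bcws=500, bcwp=400, acwp=500, bac=1000)
```

With these hypothetical inputs, both indices are 0.8, so all three EAC values exceed the original budget, which is the kind of trend the contract performance report analysis was designed to surface.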
We checked EVM data to see if there were any mathematical errors or inconsistencies that would lead to the data being unreliable. We interviewed cognizant officials from the Naval Air Systems Command and program officials to determine whether the program had conducted an integrated baseline review and whether the EVM system had been validated against industry guidelines, to better understand the anomalies in the EVM data, and to determine what outside surveillance was being done to ensure that industry standards were being met. We also reviewed the program’s schedule estimates and compared them with relevant best practices to determine the extent to which they reflect key estimating practices that are fundamental to having a reliable schedule. In doing so, we interviewed cognizant program officials to discuss their use of best practices in creating the program’s current schedule. To determine the extent to which the program has effectively implemented requirements management, we reviewed relevant program documentation, such as the program management plan and baseline list of requirements. To determine the extent to which the program has maintained traceability backward to high-level business operation requirements and system requirements, and forward to system design specifications and test plans, we randomly selected 60 program requirements and traced them both backward and forward. This sample was designed with a 5 percent tolerable error rate at the 95 percent level of confidence so that, if we found 0 problems in our sample, we could conclude statistically that the error rate was less than 5 percent. In addition, we interviewed program officials involved in the requirements management process to discuss their roles and responsibilities for managing requirements.
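The sampling claim above (zero errors in a random sample of 60 requirements supports concluding the error rate is below 5 percent at the 95 percent confidence level) follows from the exact binomial bound; a minimal sketch of the arithmetic:

```python
def zero_failure_upper_bound(n, confidence=0.95):
    """Exact binomial upper confidence bound on the population error rate
    when a random sample of n items contains zero failures: the smallest p
    for which observing zero failures has probability <= 1 - confidence,
    i.e. (1 - p)**n = 1 - confidence, solved for p."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n)

bound = zero_failure_upper_bound(60)  # about 0.0487, just under 5 percent
```

A sample of 60 is close to the minimum for this design: with noticeably fewer items (for example, 45), a zero-failure result can no longer bound the error rate under 5 percent at this confidence level.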
To determine the extent to which the program implemented risk management, we reviewed relevant risk management documentation, such as the program’s risk management plan and risk database reports demonstrating the status of the program’s major risks and compared the program office’s activities with DOD acquisition management guidance and related industry practices. We also reviewed the program’s mitigation process with respect to key risks to determine the extent to which these risks were effectively managed. In doing so, we interviewed cognizant program officials, such as the program manager and risk manager, to discuss their roles and responsibilities and obtain clarification on the program’s approach to managing risks associated with acquiring and implementing Navy ERP. We conducted this performance audit at DOD offices in the Washington, D.C., metropolitan area and Annapolis, Md., from June 2007 to September 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the individual named above, key contributors to this report were Neelaxi Lakhmani, Assistant Director; Monica Anatalio; Harold Brumm; Neil Doherty; Cheryl Dottermusch; Nancy Glover; Mustafa Hassan; Michael Holland; Ethan Iczkovitz; Anh Le; Josh Leiling; Emily Longcore; Lee McCracken; Madhav Panwar; Karen Richey; Melissa Schermerhorn; Karl Seifert; Sushmita Srikanth; Jonathan Ticehurst; and Adam Vodraska.
|
The Department of Defense (DOD) has long been challenged in implementing key information technology (IT) management controls on its thousands of business system investments. For this and other reasons, GAO has designated DOD's business systems modernization efforts as high-risk. One of the larger business system investments is the Department of the Navy's Enterprise Resource Planning (Navy ERP) program. Initiated in 2003, the program is to standardize the Navy's business processes, such as acquisition and financial management. It is being delivered in increments, the first of which is to cost about $2.4 billion over its useful life and be fully deployed in fiscal year 2013. GAO was asked to determine whether key IT management controls are being implemented on the program. To do this, GAO analyzed, for example, requirements management, economic justification, earned value management, and risk management. DOD has implemented key IT management controls on its Navy ERP program to varying degrees of effectiveness. To its credit, the control associated with managing system requirements is being effectively implemented. In addition, important aspects of other controls have at least been partially implemented, including those associated with economically justifying investment in the program and proactively managing program risks. Nevertheless, other aspects of these controls, as well as the bulk of what is needed to effectively implement earned value management, which is a recognized means for measuring program progress, have not been effectively implemented. Among other things, these control weaknesses have contributed to the more than 2-year schedule delay and the almost $600 million cost overruns already experienced on the program since it began, and they will likely contribute to future delays and overruns if they are not corrected. 
Examples of the weaknesses are: (1) Investment in the program has been economically justified on the basis of expected benefits that far exceed estimated costs ($8.6 billion versus $2.4 billion over a 20-year life cycle). However, important estimating practices, such as using historical cost data from comparable programs and basing the cost estimate on a reliable schedule baseline, were not employed. While these weaknesses are important because they limit the reliability of the estimates, GAO's analysis shows that they would not have altered the estimates to the point of not producing a positive return on investment. (2) Earned value management has not been effectively implemented. To its credit, the program office has elected to implement program-level earned value management. In doing so, however, basic prerequisites for effectively managing earned value have not been executed. In particular, the integrated master schedule was not derived in accordance with key estimating practices, and an integrated baseline review has not been performed on any of the first increment's releases. (3) A defined process for proactively avoiding problems, referred to as risk management, has been established, but risk mitigation strategies have not been effectively implemented for all significant risks, such as those associated with data conversion and organizational change management, as well as the risks associated with the above-cited weaknesses. The reasons that program management and oversight officials cited for these practices not being executed range from the complexity and challenges of managing and implementing a program of this size to limitations in the program office's scope and authority.
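The return-on-investment comparison above ($8.6 billion in expected benefits against $2.4 billion in estimated costs over a 20-year life cycle) can be sanity-checked with a simple net-present-value calculation. The cash-flow profile and 5 percent discount rate below are invented for illustration, not taken from the program's economic analysis:

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical profile (in $ billions): costs front-loaded over the first
# 5 years, benefits realized over the remaining 15 years of the life cycle.
costs = [2.4 / 5] * 5 + [0.0] * 15
benefits = [0.0] * 5 + [8.6 / 15] * 15
net = [b - c for b, c in zip(benefits, costs)]
result = npv(net, rate=0.05)  # positive even after discounting
```

Even with costs incurred up front and benefits deferred, the cited totals leave a positive net present value, which is consistent with GAO's observation that the estimating weaknesses would not by themselves have eliminated the positive return.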
Notwithstanding the effectiveness with which important aspects of several controls have been implemented, the above-cited weaknesses put DOD at risk of investing in a system solution that does not optimally support corporate mission needs and mission performance, and meet schedule and cost commitments.
|
The overall goal of U.S. public diplomacy is to understand, inform, engage, and influence the attitudes and behavior of foreign audiences in ways that support U.S. strategic interests. The State Department leads these efforts, which are guided by the Under Secretary for Public Diplomacy and Public Affairs and include academic and professional exchanges, English language teaching, information programs, and news management. The department’s regional and functional bureaus also contain public diplomacy offices, which report to the relevant assistant secretary. The Under Secretary has direct authority over the three public diplomacy bureaus but does not have line authority over public diplomacy operations in other regional or functional bureaus. In overseas missions, Foreign Service public diplomacy officers (including Public Affairs, Cultural Affairs, Information, Information Resources, and Regional English Language officers) operate under the authority of the chief of mission and report to their regional bureau managers in Washington, D.C. In fiscal year 2005, State dedicated $597 million to public diplomacy and public affairs. According to the department’s performance plan, its investment in public diplomacy continues to increase, particularly for efforts targeting audiences in the Middle East. Exchange programs received $356 million, the majority of fiscal year 2005 funding and a 12.4 percent increase over fiscal year 2004. State’s information programs received roughly $68 million in fiscal year 2005 to fund programs such as the U.S. speakers program, mission Web sites, and American Corners, which are centers that provide information about the United States, hosted in local institutions and staffed by local employees. The remaining public diplomacy funds went to State’s regional bureaus to pay the salaries of locally engaged staff overseas, among other purposes. 
Since the terrorist attacks of September 11, 2001, State has expanded its public diplomacy efforts globally, focusing particularly on countries in the Muslim world considered to be of strategic importance in the war on terrorism. Between 2004 and 2006, total spending on overseas public diplomacy will increase 21 percent, from $519 million to an estimated $629 million. Much of this increase has gone to regions with significant Muslim populations, including South Asia (39 percent), East Asia and the Pacific (28 percent), and the Near East (25 percent). These increases continue the trend we reported in 2003, when we found that the largest relative increases in overseas public diplomacy resources went to regions with large Muslim populations. However, the Bureau of European and Eurasian Affairs continues to receive the largest overall share of overseas public diplomacy resources—roughly 36 percent of the total for all six regional bureaus. In 2003, we noted that authorized officer positions overseas had significantly expanded, with the most notable increases occurring in State’s Near East (27-percent increase) and South Asia (15-percent increase) bureaus. However, current data show that staff numbers have stayed largely the same over the past 3 years, with increases of 3 percent or less. In January 2006, Secretary Rice announced plans to reposition officers as part of her transformational diplomacy initiative. State officials said that the department will initially reposition approximately 75 Foreign Service officers this year from posts in Europe and Washington, D.C., to India, China, and Latin America, as well as to the Muslim world. According to these officials, 28 of the positions to be relocated are public diplomacy positions. Since 2003, we have reported on the lack of strategic elements to guide U.S. public diplomacy efforts. Despite several attempts, the United States still lacks an interagency public diplomacy strategy. 
While State has recently developed a strategic framework for its public diplomacy efforts, it has not issued guidance on how this framework is to be implemented in the field. In addition, posts generally lack a strategic approach to public diplomacy. In 2003, we reported that the United States lacked a governmentwide, interagency public diplomacy strategy, defining the messages and means for communication efforts abroad. We reported that the administration had made a number of aborted attempts to develop a strategy, but to date no public diplomacy strategy has been developed. The lack of such a strategy complicates the task of conveying consistent messages, which increases the risk of making damaging communication mistakes. State officials said that the lack of such a strategy diminishes the efficiency and effectiveness of governmentwide public diplomacy efforts, while several reports concluded that a strategy is needed to synchronize agencies’ target audience assessments, messages, and capabilities. On April 8, 2006, the President established a new Policy Coordination Committee on Public Diplomacy and Strategic Communications. This committee, to be led by the Under Secretary for Public Diplomacy and Public Affairs, is intended to coordinate interagency activities to ensure that: all agencies work together to disseminate the President’s themes and all public diplomacy and strategic communications resources, programs, and activities are effectively coordinated to support those messages; and every agency gives public diplomacy and strategic communications the same level of priority that the President does. According to department officials, one of the committee’s tasks will be to issue a formal interagency public diplomacy strategy. It is not clear when this strategy will be developed. In 2005, the Under Secretary established a strategic framework for U.S. 
public diplomacy efforts, which includes three priority goals: (1) support the President’s Freedom Agenda with a positive image of hope; (2) isolate and marginalize extremists; and (3) promote understanding regarding shared values and common interests between Americans and peoples of different countries, cultures, and faiths. The Under Secretary noted that she intends to achieve these goals using five tactics—engagement, exchanges, education, empowerment, and evaluation—and by using various public diplomacy programs and other means. This framework partially responds to our 2003 recommendation that the department develop and disseminate a strategy to integrate all of State’s public diplomacy efforts and direct them toward achieving common objectives. However, the department has not yet developed written guidance that provides details on how the Under Secretary’s new strategic framework should be implemented in the field. In 2005, we noted that State’s efforts to engage the private sector in pursuit of common public diplomacy objectives had met with mixed success and recommended that the Secretary develop a strategy to guide these efforts. State is currently establishing an office of private sector outreach and is partnering with individuals and the private sector on various projects. The Under Secretary plans to institutionalize this function within the department around key public diplomacy objectives, but it is unclear when this office will be established and whether it will develop a comprehensive strategy to engage the private sector. GAO and others have suggested that State adopt a strategic approach to public diplomacy by modeling and adapting private sector communication practices to suit its purposes (see fig. 1). However, based on our review of mission performance plans and on fieldwork in Nigeria, Pakistan, and Egypt, we found that the posts’ public diplomacy programming generally lacked these important elements of strategic communications planning.
In particular, posts lacked a clear theme or message and did not identify specific target audiences. According to a senior embassy official in Pakistan, the United States has too many competing messages, and the post needs to do a better job of defining and clarifying its message. Posts also failed to develop detailed strategies and tactics to direct available public diplomacy programs and tools toward clear, measurable objectives in the most efficient manner possible. Finally, posts lack detailed, country-level communication plans to coordinate their various activities. Recently, State has begun to help posts improve their strategic communications planning. For example, the department has issued guidance on preparing fiscal year 2008 mission performance plans that calls for more strategic thinking and planning than was required in the past, including identification of specific target audiences, key themes and messages, detailed strategies and tactics, and measurable performance outcomes that can clearly demonstrate the ultimate impact of U.S. public diplomacy efforts. If fully implemented, this guidance should begin to address the shortcomings we found in mission performance plans; however, it will not be implemented for another 2 years, raising significant concerns about what the department intends to do now to address strategic planning shortfalls. Moreover, it is unclear whether this guidance will include all the strategic elements from private sector communication practices. In addition to this guidance, the department is currently developing a sample country-level communication plan and has asked 15 pilot posts to develop specific plans for their host countries. These plans are intended to better focus U.S. efforts to counter ideological support for terrorism, according to State. Part of this process will include the development of a key influencers analysis to help identify target audiences in each country.
State officials said that they expect to have plans for these countries by fall or winter 2006. Public diplomacy efforts in the field face several other challenges, many of which are heightened in the Muslim world. Officials at posts we visited said they lacked sufficient staff and time to conduct public diplomacy tasks, and we found that many public diplomacy positions are filled by officers without the requisite language skills. Furthermore, public diplomacy officers struggle to balance security with public access and outreach to local populations. While several recent reports on public diplomacy have recommended an increase in spending on U.S. public diplomacy programs, several embassy officials stated that, with current staffing levels, they do not have the capacity to effectively utilize increased funds. According to State data, the department had established 834 public diplomacy positions overseas in 2005, but 124, or roughly 15 percent, were vacant. Compounding this challenge is the loss of public diplomacy officers to temporary duty in Iraq, which, according to one State official, has drawn down field officers even further. Staffing shortages may also limit the amount of training public diplomacy officers receive. According to the U.S. Advisory Commission on Public Diplomacy, “the need to fill a post quickly often prevents public diplomacy officers from receiving their full training.” In addition, public diplomacy officers at post are burdened with administrative tasks and thus have less time to conduct public diplomacy outreach activities than previously. One senior State official said that administrative duties, such as budget, personnel, and internal reporting, compete with officers’ public diplomacy responsibilities. Another official in Egypt told us that there was rarely enough time to strategize, plan, or evaluate her programs. These statements echo comments we heard during overseas fieldwork and in a survey for our 2003 report. 
Surveyed officers told us that, while they manage to attend functions within their host country capitals, it was particularly difficult to find time to travel outside the capitals to interact with other communities. This challenge is compounded at posts with short tours of duty, which include many in the Muslim world. According to data provided by State, the average tour length at posts in the Muslim world is about 22 percent shorter than tour lengths elsewhere. Noting the prevalence of one-year tours in the Muslim world, a senior official at State told us that Public Affairs officers who have shorter tours tend to produce less effective work than officers with longer tours. To address these challenges, we recommended in 2003 that the Secretary of State designate more administrative positions to overseas public affairs sections to reduce the administrative burden. Officials at State said that the Management bureau is currently considering options for reducing the administrative burden on posts, including the development of centralized administrative capabilities offshore. State is also repositioning several public diplomacy officers as part of its transformational diplomacy initiative; however, this represents a shift of existing public diplomacy officers and does not increase the overall number of officers in the department. In 2005, 24 percent of language-designated public diplomacy positions were filled by officers without the requisite language proficiency, similar to our findings in 2003. At posts in the Muslim world, this shortfall is even greater, with 30 percent of public diplomacy positions filled by officers without sufficient language skills. This figure is primarily composed of languages that are considered difficult to master, such as Arabic and Persian, but also includes languages considered easier to learn, such as French. 
Security concerns have limited embassy outreach efforts and public access, forcing public diplomacy officers to strike a balance between safety and mission. Shortly after the terrorist attacks of September 11, 2001, then-Secretary of State Colin Powell stated, “Safety is one of our top priorities… but it can’t be at the expense of the mission.” While posts around the world have faced increased threats, security concerns are particularly acute in countries with significant Muslim populations, where the threat level for terrorism is rated as “critical” or “high” in 80 percent of posts (see fig. 2). Security and budgetary concerns have led to the closure of publicly accessible facilities around the world, such as American Centers and Libraries. In Pakistan, for example, all of the American Centers have closed for security reasons; the last facility, in Islamabad, closed in February 2005. These same concerns have prevented the establishment of a U.S. presence elsewhere. As a result, embassies have had to find other venues for public diplomacy programs, and some activities have been moved onto embassy compounds, where precautions designed to improve security have had the ancillary effect of sending the message that the United States is unapproachable and distrustful, according to State officials. Concrete barriers and armed escorts contribute to this perception, as do requirements restricting visitors’ use of cell phones and pagers within the embassy. According to one official in Pakistan, visitors to the embassy’s reference library have fallen to as few as one per day because many visitors feel humiliated by the embassy’s rigorous security procedures. Other public diplomacy programs have had to limit their publicity to reduce the risk of becoming a target. 
A recent joint USAID-State report concluded that “security concerns often require a ‘low profile’ approach during events, programs or other situations, which, in happier times, would have been able to generate considerable good will for the United States.” This constraint is particularly acute in Pakistan, where the embassy has had to reduce certain speaker and exchange programs. State has responded to security concerns and the loss of publicly accessible facilities through a variety of initiatives, including American Corners, which are centers that provide information about the United States, hosted in local institutions and staffed by local employees. According to State data, there are currently approximately 300 American Corners throughout the world, including more than 90 in the Muslim world, with another 75 planned (more than 40 of which will be in the Muslim world). However, two of the posts we visited in October 2005 were having difficulty finding hosts for American Corners, as local institutions fear becoming terrorist targets. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For questions regarding this testimony, please contact Jess T. Ford at (202) 512-4128 or fordj@gao.gov. Individuals making key contributions to this statement include Diana Glod, Assistant Director; Michael ten Kate; Robert Ball; and Joe Carney. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Public opinion polls have shown continued negative sentiments toward the United States in the Muslim world. Public diplomacy activities--led by the State Department (State)--are designed to counter such sentiments by explaining U.S. foreign policy actions, countering misinformation, and advancing mutual understanding between nations. Since 2003, we have issued three reports on U.S. public diplomacy efforts that examined (1) changes in public diplomacy resources since September 11, 2001; (2) strategic planning and coordination of public diplomacy efforts; and (3) the challenges facing these efforts. We have made several recommendations in the last 3 years to the Secretary of State to address strategic planning issues, private sector engagement, and staffing challenges related to public diplomacy. For example, today's report recommends that the Secretary develop written guidance detailing how the department intends to implement its public diplomacy goals as they apply to the Muslim world. State has consistently concurred with our findings and recommendations for improving public diplomacy, and the department, in several cases, is taking appropriate actions. However, the department has not established a timetable for many of these actions. Since the terrorist attacks of September 11, 2001, State has expanded its public diplomacy efforts globally, focusing particularly on countries in the Muslim world considered to be of strategic importance in the war on terrorism. Since 2001, State has increased its public diplomacy resources, particularly in regions with significant Muslim populations. That funding trend has continued more recently, with increases of 25 percent for the Near East and 39 percent for South Asia from 2004 to 2006, though public diplomacy staffing levels have remained largely the same during that period. 
The Secretary of State recently announced plans to reposition some staff to better reflect the department's strategic priorities, including plans to shift 28 public diplomacy officers from posts in Europe and Washington, D.C., to China, India, and Latin America, as well as to the Muslim world. In 2003 and again in 2005, we reported that the government lacked an interagency communication strategy to guide governmentwide public diplomacy activities, and it continues to lack this strategy. We also noted that State did not have a strategy to integrate its diverse public diplomacy activities and that efforts to effectively engage the private sector had met with mixed success. Today, although State has developed a strategic framework to focus its public diplomacy efforts and related tactics to achieve these goals, the department has not issued guidance on how to implement these strategies and tactics. In addition, posts' public diplomacy efforts generally lack important strategic communication elements found in the private sector, which GAO and others have suggested adopting as a means to better communicate with target audiences. These elements include having core messages, segmented target audiences, in-depth research and analysis to monitor and evaluate results, and an integrated communication plan to bring all these elements together. State officials indicate that the department has begun to develop communication plans for 15 pilot posts, but it remains to be seen whether these communication plans will contain all of these strategic elements. Posts throughout the world, and particularly in the Muslim world, face several challenges in implementing their public diplomacy programs, including concerns related to staff numbers and language capabilities and the need to balance security with public outreach. For example, we found that 24 percent of language-designated public diplomacy positions worldwide were filled by officers without the requisite language skills. 
Furthermore, security concerns have limited embassy outreach efforts and public access. State has begun to address many of these challenges, but it is too early to evaluate the effectiveness of many of these efforts.
|
Federal employees have a choice of multiple health plans offered by private health insurance carriers participating in the FEHBP. Mirroring private sector trends, several participating carriers have begun to offer CDHPs. In 2003, the APWU plan became the first CDHP offered to federal employees. OPM administers the FEHBP by contracting with private health insurance carriers to provide health benefits to over 8 million federal employees, retirees, and their dependents. Federal employees enrolled in the FEHBP can select from a number of private insurance plans. In 2005, 19 national plans and more than 200 local plans were offered through the FEHBP. Plans vary in terms of benefit design and premiums. In 2004, nearly 75 percent of those covered under the FEHBP were enrolled in national PPOs; the remainder were in regional or local HMOs. CDHPs are a relatively new health care plan design. While many variants exist on CDHP models, such plans generally include three basic precepts: An insurance plan with a high deductible. Deductibles are about $1,900 on average for an individual plan and about $3,900 for a family plan, compared to about $320 and $680, respectively, on average for a traditional PPO plan. A savings account to pay for services under the deductible. The savings account may encompass different models, the two most prominent being health reimbursement arrangements (HRAs) and health savings accounts (HSAs). Important distinctions exist between HRAs and HSAs. HRAs are funded solely by the employer, are generally not portable once the employee leaves, and may accumulate up to a specified maximum. In contrast, HSAs may include contributions from both the employer and the employee, are portable, and may accumulate without limit. Unused savings account balances from prior years may roll over and accumulate, along with the annual contributions from year to year. 
If the savings account is exhausted, the enrollee pays out of pocket for services until the deductible is met, after which point, the plan pays for services much like a traditional health plan. To avoid the likelihood of enrollees curtailing preventive care services—such as cancer screening tests or immunizations—to preserve their account balances, most of the cost of these services is typically paid for by the plan, regardless of whether or not the enrollee has met the deductible. Decision-support tools. CDHPs may provide enrollees information to help them become actively engaged in making health care purchase decisions, such as the typical fees charged for specific health procedures at participating hospitals, and quality measures for participating health care providers. In addition, plans may provide enrollees online access to their savings account to help them manage their spending. Proponents of CDHPs assert that the savings account and higher deductibles encourage consumers to become more price conscious, and use only necessary health care services to maintain and accumulate balances in their savings accounts. The availability of information on provider fees and quality is also expected to enable consumers to select providers on the basis of price and quality. In addition, the higher deductibles typically result in lower premiums than for a PPO plan with similar benefits, because the enrollee bears a greater share of the initial costs of care. Opponents, however, question the underlying premise of CDHPs—that health care spending is discretionary and will be constrained to any significant extent by the financial incentives offered through a health savings or reimbursement account. They cite, for example, research that indicates that 10 percent of the population accounts for the majority—about 70 percent—of health care spending. 
For such high-cost users, a savings or reimbursement account would likely be quickly exhausted and provide little incentive for enrollees to change health care utilization and purchasing behavior. Some analysts have also reported that decision-support tools such as comparative cost and quality information about providers—important to enable effective consumer participation in health care purchase decisions—are lacking or not widely used. Given the relatively recent introduction of CDHPs, conclusive assessments of their effectiveness at restraining health care utilization and spending have not been made. Analysts believe that enrollment in CDHPs should reach sufficient levels for a sustained period of time before definitive conclusions about the cost and utilization of services can be drawn. Employers are increasingly offering CDHPs to their employees. According to a 2005 annual survey, the share of employers offering such plans coupled with either an HRA or HSA was 4 percent, compared to the 1 percent reported in a separate 2004 annual survey. Many health insurance carriers now offer CDHPs, including Aetna, Anthem/Wellpoint, Blue Cross and Blue Shield plans, CIGNA, Humana, and United HealthCare. The FEHBP has recently begun to offer CDHPs to federal employees. The American Postal Workers Union (APWU) was the first to offer a CDHP in 2003, followed by Aetna and Humana in 2004. In January 2005, several carriers began offering health plans designed to be coupled with the newly authorized HSAs, increasing the number of CDHPs in the FEHBP to 3 national and 13 local plans. OPM expects that additional CDHPs will be offered in 2006. Nevertheless, as of January 2005, these plans collectively insured fewer than 38,000 covered lives, a small share of the more than 8 million employees, retirees, and dependents covered under the FEHBP. Administered by Definity Health Plan, the APWU CDHP is a high-deductible PPO plan coupled with an HRA. 
The deductibles are currently $1,800 for an individual plan and $3,600 for a family plan. For an individual plan, the first $1,200 of the deductible is paid for from the HRA—which is funded every year by the enrollee’s employing federal agency. The remaining $600 of the deductible is considered the member’s responsibility. Unused balances may accumulate and roll over from year to year up to a maximum of $5,000 for an individual plan and $10,000 for a family plan. The member responsibility is paid by the employee, either out of pocket or from accumulated balances in the HRA from prior years. Once the deductible has been met and the HRA is exhausted, the plan generally pays 85 percent of the cost of covered services. The HRA may be used to pay for two types of services: basic expenses, such as doctor visits and hospital charges, and “extra” expenses, such as certain preventive care services that are not covered by the plan. The HRA coverage of extra expenses does not count toward the deductible. For example, if an enrollee exhausts the HRA by spending $1,200 on basic physician office visit expenses, and then spends another $600 out of pocket for extra preventive care services, the enrollee would need to spend another $600 out of pocket on basic expenses before the $1,800 deductible is met and the plan begins paying 85 percent of expenses. The APWU CDHP is a small but fast-growing health plan whose enrollees on average were younger than enrollees in national PPO plans. In addition, the APWU CDHP enrollees were healthier, better educated, and more likely to enroll in an individual plan than enrollees in other new plans and the national PPO plans. Enrollment in the APWU CDHP grew from 4,500 in 2003, its first year of operation, to approximately 7,600 in 2004, an increase of almost 70 percent. In 2005, enrollment grew an additional 25 percent, to approximately 9,500. 
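The deductible accounting described above, in which "extra" expenses paid from the HRA do not count toward the deductible, can be sketched as follows. This is a minimal illustration of the arithmetic in the text's example using the plan's stated individual-plan figures; the function and its structure are our own, not the plan's actual claims-processing logic.

```python
# Minimal sketch of the APWU CDHP individual-plan deductible accounting
# described above. Illustrative only; not the plan's actual claims system.

DEDUCTIBLE = 1800   # individual plan deductible
HRA_FUND = 1200     # annual agency contribution to the HRA

def apply_expenses(expenses):
    """Track the HRA balance and deductible progress for a sequence of
    (amount, kind) expenses, where kind is "basic" or "extra".
    Returns (hra_balance, amount_counted_toward_deductible, out_of_pocket)."""
    hra = HRA_FUND
    toward_deductible = 0
    out_of_pocket = 0
    for amount, kind in expenses:
        paid_from_hra = min(hra, amount)
        hra -= paid_from_hra
        out_of_pocket += amount - paid_from_hra
        if kind == "basic":
            # Only basic expenses count toward the deductible, whether paid
            # from the HRA or out of pocket; "extra" expenses (such as
            # noncovered preventive care) do not.
            toward_deductible += amount
    return hra, min(toward_deductible, DEDUCTIBLE), out_of_pocket

# The example from the text: $1,200 of basic expenses exhausts the HRA,
# $600 of extra preventive care does not count toward the deductible, so
# another $600 of basic expenses is needed before the plan begins paying
# 85 percent of covered costs.
hra, counted, oop = apply_expenses([(1200, "basic"), (600, "extra"), (600, "basic")])
```

Running the example leaves the HRA exhausted, the full $1,800 deductible met, and $1,200 paid out of pocket, matching the scenario in the text.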
Including dependents, total covered lives were estimated to be approximately 10,000, 16,800, and 21,000 in each of the 3 years, respectively. Most APWU CDHP enrollees in 2003 and 2004 migrated from FEHBP national PPO plans—57 percent—and HMO plans—26 percent—while 17 percent were not previously covered by an FEHBP plan. Fewer retirees and elderly people selected the APWU CDHP compared to the national PPO plans, a phenomenon also found among the other new plans. Among the APWU CDHP and other new plans, 11 and 19 percent of enrollees, respectively, were retirees or aged 65 or over, compared to 53 percent for the national PPO enrollees. The distribution of enrollees by age groups was similar for the APWU CDHP and other new plans, while national PPO plans had a smaller share of enrollees in all age groups under 55 and a significantly higher share of enrollees in the over-65 age group. Figure 1 illustrates the share of enrollees in the APWU CDHP, the other new plans, and the national PPO plans within each age group. The average age of APWU CDHP enrollees was comparable to that of enrollees in other new plans, but lower than that of enrollees in the national PPO plans by about 15 years—47 for both the APWU CDHP and the other new plans, compared to 62 for the PPO plans. Excluding the elderly and retirees, the average ages of enrollees in the APWU CDHP, the other new plans, and the national PPO plans were more similar—45, 43, and 47, respectively. (See table 1.) Excluding enrollees over age 65, the proportion of APWU CDHP enrollees who reported on annual satisfaction surveys being in “excellent” or “very good” health status was higher than among the other new plan and PPO plan enrollees. APWU CDHP enrollees also appeared to be better educated than enrollees in other new plans and the PPO plans. The proportion of APWU CDHP enrollees under the age of 65 who reported having a 4-year or higher college degree was higher than among the other new plan and the PPO plan enrollees. 
(See table 2.) Excluding retirees and the elderly, a lower share of APWU CDHP enrollees selected family plans compared to other enrollees. About 55 percent of APWU CDHP enrollees selected family plans, compared to 66 percent and 65 percent of enrollees in other new plans and PPO plans, respectively. APWU CDHP enrollee satisfaction with overall plan performance was higher than that of other new plan enrollees, but lower than that of national PPO plan enrollees. APWU CDHP enrollee satisfaction was generally comparable to that of other new plan and national PPO plan enrollees on four of five specific plan performance measures—access to health care, timeliness of health care, provider communications, and claims processing. APWU CDHP enrollee satisfaction was higher than that of other new plan enrollees but lower than that of national PPO plan enrollees for the remaining specific measure, relating to customer service. In addition, some APWU CDHP enrollees may have more difficulty tracking their health care spending under the APWU CDHP compared to other FEHBP enrollees. On the overall plan performance measure included in annual consumer satisfaction surveys, APWU CDHP enrollees were more satisfied than other new plan enrollees, but less satisfied than national PPO plan enrollees—67 percent versus 53 and 76 percent, respectively. This performance measure is not composed of component scores, nor is it directly related to the scores for the other performance measures. Rather, according to OPM, overall plan performance is a measure of enrollees’ broad assessment of the plan. (See fig. 2.) For four of five specific plan performance measures—access to health care, timeliness of health care, provider communications, and claims processing—APWU CDHP enrollee satisfaction was generally comparable to that of other enrollees. 
APWU CDHP enrollee satisfaction with customer service, though higher than that of other new plan enrollees, was lower than that of the PPO plan enrollees by 7 percentage points—67 percent versus 59 and 74 percent, respectively. (See fig. 3.) Moreover, for three of the components that constitute the customer service performance measure, APWU CDHP enrollees were less satisfied than national PPO plan enrollees. The components are satisfaction with finding or understanding information, satisfaction with getting help when calling customer service, and satisfaction with the health plan paperwork. (See fig. 4.) Our analysis of appeals regarding claim disputes filed with OPM for the APWU CDHP and PPO plans in 2003 and 2004 indicates a higher rate of confusion about certain APWU CDHP features, such as enrollees’ ability to track their account expenditures and their progress toward meeting their deductibles. The average annual rate of appeals per 1,000 enrollees filed with OPM against the APWU CDHP was almost twice as high as the rate for national PPO plans—1.98 and 1.11, respectively. Some health policy researchers have noted that this may be expected as CDHP enrollees gain familiarity with a relatively new plan concept. However, whereas appeals for the PPO plans were distributed among a wider variety of issues, a disproportionate share of the APWU CDHP appeals—over half—related to tracking account expenditures or deductible balances. Possibly contributing to enrollee inability to track their progress toward meeting their deductible, the APWU CDHP brochure contains potentially confusing language about whether expenses for dental and vision services count toward the deductible. APWU CDHP officials told us that in 2005, the HRA may be used to pay for dental and vision services, and that these services would also count toward the member’s deductible. 
However, while one page of the plan brochure explicitly states that these expenses count toward the deductible, another page appears to indicate that such expenses do not count toward the deductible. The lower enrollee satisfaction related to overall plan performance and customer service, and enrollee confusion in tracking their account spending, may relate to the recent introduction of the APWU CDHP. OPM officials said that a higher rate of dissatisfaction and confusion about plan features are traits typically observed among new plans, as enrollees gain familiarity with their benefits and features. According to one health policy analyst, CDHP enrollees are more likely to report problems understanding the plan because CDHPs are a relatively new concept, and plan paperwork and management of the HRA account are new experiences for enrollees. Provider networks appeared to provide APWU CDHP enrollees with similar access to health care providers compared to networks of other FEHBP plans. In 21 states, the APWU CDHP used the same provider networks as other large, national PPO plans participating in the FEHBP—each with over 70,000 enrollees. These 21 states account for approximately 40 percent of the total APWU CDHP enrollment. In 13 of the remaining states, accounting for approximately 22 percent of total plan enrollment, the APWU CDHP used networks that were listed among the 25 most commonly used PPO networks nationwide. In 8 states, accounting for another 22 percent of total plan enrollment, the APWU CDHP used generally large networks that had been in existence for over 10 years. For example, the APWU CDHP network included over 70 percent of the hospitals in one state, and over 90 percent of the hospitals in another state. 
In the remaining 9 states, accounting for approximately 16 percent of total plan enrollment, the APWU CDHP used networks that were either nationally accredited, or were comparable in size to networks used by other FEHBP plans based on counts of hospitals or physicians included in the network. (See table 3.) Provider networks appeared to provide APWU CDHP enrollees with negotiated provider discounts that were comparable to those of another large national FEHBP plan. Across all states, the average hospital inpatient and physician discounts for the APWU CDHP and another national PPO plan differed by no more than 2 percentage points. The actual levels of the hospital and physician discounts in the APWU CDHP and the national PPO plan were comparable to industry standard discounts negotiated by large PPO plans, according to an industry expert we interviewed. We received comments on a draft of this report from OPM (see app. I) and APWU. Both generally concurred with our findings. OPM said that consumer-directed health plans have the potential to lower health insurance costs by allowing health plan members greater choice over their health care spending. Regarding the potential for CDHPs to disproportionately attract healthier enrollees, OPM said that while enrollment in the APWU CDHP is growing, the plan accounted for a small fraction of total FEHBP enrollment and that OPM did not anticipate any harm accruing to other FEHBP enrollees as a result of its enrollment trends. Nevertheless, OPM said it would continue to monitor enrollment trends and take appropriate action to eliminate or minimize any adverse effects. OPM also provided technical comments, which we incorporated in the report as appropriate. APWU acknowledged that the language concerning dental and vision coverage in its plan brochure could have contained greater clarity, and said that in consultation with OPM it has revised the language for the 2006 plan brochure. 
APWU also stated that in spite of the potentially confusing language, the plan credited enrollees’ dental and vision services incurred in 2005 toward the enrollees’ deductible. We made reference to their comment in our report. APWU also requested that we disclose the source of the appeals data we cited in the report because it did not believe its rate of appeals was significantly higher than other national PPO plans. We notified APWU officials that we obtained the appeals data from OPM. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time, we will send copies of this report to OPM and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-7119 or at dickenj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Randy DiRosa, Assistant Director, and Iola D’Souza also made key contributions to this report.
|
Since 2003, the Federal Employees Health Benefits Program (FEHBP) has offered "consumer-directed" health plans (CDHP) to federal employees. A CDHP is a high-deductible health plan coupled with a savings account enrollees use to pay for health care. Unused balances may accumulate for future use, providing enrollees the incentive to purchase health care prudently. However, some have expressed concern that CDHPs may attract younger and healthier enrollees, leaving older, less healthy enrollees to drive up costs in traditional plans. They also question whether enrollees are satisfied with the plans, and have sufficient access to health care providers and discounts on health care services. GAO was asked to study the first FEHBP CDHP, offered by the American Postal Workers Union (APWU). GAO compared the number, characteristics, and satisfaction of APWU enrollees to those of FEHBP enrollees in other recently introduced (new) non-CDHP plans, and national preferred provider organization (PPO) plans. GAO also compared the APWU CDHP provider networks and discounts to those of other FEHBP plans. The APWU CDHP is a small but growing FEHBP health plan whose enrollees were younger than PPO plan enrollees, and healthier and better educated than other new plan and PPO enrollees. The average age of APWU CDHP and other new plan enrollees was the same (47 years), but younger than that of PPO plan enrollees (62 years), largely because fewer retirees and elderly people selected the new plans. Excluding retirees and the elderly, the average age of enrollees was more similar across the plans. A larger share of nonelderly enrollees in the APWU CDHP reported being in "excellent" or "very good" health status compared to the other new plan and PPO plan enrollees--73 percent versus 64 and 58 percent, respectively. 
Similarly, a larger share of nonelderly enrollees in the APWU CDHP reported having a 4-year or higher college degree compared to enrollees in the other new plans and PPO plans--49 percent versus 42 and 36 percent, respectively. Enrollee satisfaction with the APWU CDHP was mixed compared to enrollee satisfaction with the other FEHBP plans. For overall plan performance, APWU enrollees were more satisfied than other new plan enrollees, but less satisfied than PPO plan enrollees. For four of five specific quality measures--access to health care, timeliness of health care, provider communication, and claims processing--APWU enrollees were as satisfied as other enrollees. On the fifth measure, customer service, APWU enrollees were more satisfied than other new plan enrollees, but less satisfied than PPO plan enrollees. In particular, a lower share of APWU enrollees were satisfied with their ability to find or understand written or online plan information, the help provided by customer service, and the amount of paperwork required by the plan. The APWU CDHP provider networks and discounts were comparable to other FEHBP PPO plans. In 21 states, the APWU CDHP used the same networks used by other national PPO plans. In the remaining states, the APWU CDHP networks were among the most commonly used networks nationwide, or were large, nationally accredited, or comparable in size to networks used by other FEHBP plans. Across all states, the average hospital inpatient and physician discounts obtained by the APWU CDHP were within 2 percentage points of the discounts obtained by another large national FEHBP PPO plan. GAO received comments on a draft of this report from the Office of Personnel Management (OPM) and APWU. Both generally concurred with our findings. Regarding the potential for CDHPs to disproportionately attract healthier enrollees, OPM said it would continue to monitor the enrollment trends and take appropriate action to eliminate or minimize any adverse effects.
|
When DHS was created in March 2003, ODP was transferred from the Justice Department’s OJP to DHS’s Directorate of Border and Transportation Security. In March 2004, the Secretary of Homeland Security consolidated ODP with the Office of State and Local Government Coordination to form the Office of State and Local Government Coordination and Preparedness (SLGCP). In addition, other preparedness grant programs from agencies within DHS were transferred to SLGCP. SLGCP, which reports directly to the Secretary, was created to provide a “one-stop shop” for the numerous federal preparedness initiatives applicable to state and local first responders. As shown in figure 1, while SLGCP/ODP has program management and monitoring responsibility for domestic preparedness grants, it relies upon the Justice Department’s Office of the Comptroller (OC) for grant fund distribution and assistance with financial management support, which includes financial monitoring. Within ODP, the Preparedness Programs Division (formerly the State and Local Program Management Division) is specifically tasked with enhancing the capability of state and local emergency responders to prevent, deter, respond to, and recover from terrorist attacks involving the use of chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons. For these purposes, ODP provides grant funds to the 50 states, the District of Columbia, the Commonwealths of Puerto Rico and the Northern Mariana Islands, American Samoa, the Virgin Islands, Guam, and selected urban areas. In addition to this grant funding for specialized equipment and other purposes, ODP provides direct training, exercises, technical assistance, and other counterterrorism expertise. During fiscal years 2002 and 2003, ODP managed 25 grant programs totaling approximately $3.5 billion. 
About $2.98 billion (85 percent) of the total ODP grant funds for both years was for statewide grants—the State Domestic Preparedness Program (SDPP), which is a predecessor grant program to SHSGP, and SHSGP I and II—and grants targeted at selected urban areas (UASI I and II). The SDPP/SHSGP grant funds accounted for about 68 percent ($2.38 billion) and the UASI I and II grant funds about 17 percent ($596 million). Table 1 shows the amounts provided for these and other ODP grants. See appendix II for the SDPP/SHSGP grant funding awarded in fiscal years 2002 and 2003 and the UASI I and II grant funding awarded in fiscal year 2003. The SDPP/SHSGP grant programs expanded from funding equipment, exercises, and administrative activities in fiscal year 2002 to include, in fiscal year 2003, the cost of planning and training. The SDPP generally provided funding for advanced equipment, exercises, and administrative activities. The SHSGP I provided, among other things, funding for specialized equipment, exercises, training, and planning and administrative costs. From a separate appropriation, the SHSGP II supplemented funding available through SHSGP I for basically the same purposes, but included separate funding for critical infrastructure protection. The SDPP/SHSGP grant funds were distributed using a base amount of 0.75 percent of the total allocation to each state, the District of Columbia, and the Commonwealth of Puerto Rico, and 0.25 percent of the total allocation to the Commonwealth of the Northern Mariana Islands, American Samoa, Guam, and the Virgin Islands, with the balance being distributed on a population-share basis. The UASI I grant funds were provided directly to seven selected urban areas to address the unique equipment, training, planning, and exercise needs of large high-threat urban areas and specifically, to assist in building an enhanced and sustainable capacity to prevent, respond to, and recover from threats or acts of terrorism. 
From a separate appropriation, UASI II provided funding through the states (not directly) to 30 selected urban areas for basically the same purposes. The UASI grant funds were awarded on the basis of the following factors: population density, critical infrastructure, and current threat estimates. (See appendix II for the urban areas that received UASI I and II grant funds.) Table 2 shows the funding authority for these grant programs. Over time, ODP has developed and modified its procedures for awarding grants to states, governing how states distribute funds to local jurisdictions, and facilitating reimbursements for states and localities purchasing first responder equipment and services. ODP also developed requirements intended to hold states and localities accountable for how grant expenditures were planned, justified, expended, and tracked. These accountability-related requirements evolved over time. For instance, prior to fiscal year 2004, the states were primarily required to provide information on the specific items they and localities planned to purchase on the basis of ODP’s evolving authorized equipment lists. In fiscal year 2004, to better determine the impact of expenditures on preparedness efforts, ODP began placing more emphasis on results-based reporting of planned and actual grant expenditures. ODP instituted new state and local reporting requirements aimed at ensuring that grant expenditures would align with goals and objectives contained in state and urban area homeland security strategies. ODP also, over time, has stepped up its state grant-monitoring activities. For fiscal years 2002 and 2003, ODP developed procedures and guidelines for awarding SDPP/SHSGP and UASI grants to states that enabled states to distribute grant funds and states and localities to expend funds and seek reimbursement for first responder equipment or services they purchased directly. 
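The SDPP/SHSGP base-plus-population distribution formula described above (0.75 percent of the total to each state, the District of Columbia, and Puerto Rico; 0.25 percent to each of the four other insular areas; the balance allocated by population share) can be sketched as follows. The recipient names and population figures are hypothetical placeholders, not actual allocation data.

```python
# Sketch of the SDPP/SHSGP allocation formula described above: each
# recipient receives a base percentage of the total appropriation, and
# the balance is distributed in proportion to population. Illustrative only.

BASE_STATE = 0.0075      # states, the District of Columbia, Puerto Rico
BASE_TERRITORY = 0.0025  # N. Mariana Islands, American Samoa, Guam, Virgin Islands

def allocate(total, recipients):
    """recipients maps name -> (is_territory, population).
    Returns a dict mapping name -> dollar allocation."""
    # Base share for each recipient, by recipient type.
    base = {name: total * (BASE_TERRITORY if is_terr else BASE_STATE)
            for name, (is_terr, _pop) in recipients.items()}
    # Distribute the remaining balance on a population-share basis.
    balance = total - sum(base.values())
    total_pop = sum(pop for _is_terr, pop in recipients.values())
    return {name: base[name] + balance * pop / total_pop
            for name, (_is_terr, pop) in recipients.items()}

# Hypothetical recipients and populations, for illustration only.
shares = allocate(1_000_000, {
    "State A": (False, 8_000_000),
    "State B": (False, 2_000_000),
    "Territory C": (True, 100_000),
})
```

Because every dollar is either a base share or part of the population-weighted balance, the allocations always sum to the total appropriation, and a recipient's share grows with its population above its guaranteed base.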
After enactment of appropriations for the grant programs, ODP developed and made available program guidelines including the grant application for each grant program. With the exception of UASI I, once a grant application was submitted to and approved by ODP, ODP awarded grant funds directly to each state, which was required to designate a state administrative agency to administer the grant funds. States in turn transferred or subgranted the funds to local jurisdictions or urban areas, with a designated core city and county/counties. For UASI I grants, ODP awarded grant funds directly to selected urban areas (i.e., selected cities). Figure 2 illustrates the main steps involved in the SDPP/SHSGP and UASI II grant cycle. For SDPP/SHSGP grant programs, ODP allowed the states flexibility in deciding how the grant programs were structured and implemented in their states. In general, states were allowed to determine such things as the following: the formula for distributing grant funds to local jurisdictional units; the definition of what constitutes a local jurisdiction eligible to receive funds, such as a multicounty area; the organization or agency that would be designated to manage the grant program; and whether the state or local jurisdictions would purchase grant-funded items for the local jurisdictions. UASI I grantees, for the most part, have had flexibilities similar to those of the states and could, in coordination with members of the Urban Area Working Group, designate contiguous jurisdictions to receive grant funds. For UASI II, while the states subgranted the grant funds to selected urban areas, states retained responsibility for administering the grant program. The core city and county/counties worked with the state administrative agency to define the geographic borders of the urban area and coordinated with the Urban Area Working Group. 
Once the grant funds were awarded to the states and then subgranted to the local jurisdictions or urban areas, certain legal and procurement requirements had to be met, such as a city council needing to approve acceptance of grant awards. Once these requirements were satisfied, states, local jurisdictions, and urban areas could then obligate their funds for first responder equipment, exercises, training, and services. Generally, when a local jurisdiction or urban area directly incurred an expenditure for first responder equipment, it submitted related procurement documents, such as invoices, to the state. The state would then draw down the funds from the Justice Department’s OJP. According to OJP, funds from the U.S. Treasury were usually deposited with the states’ financial institution within 48 hours. The states, in turn, provided the funds to the local jurisdiction or urban area. In addition to the guidelines ODP developed for the grant award, distribution, and reimbursement process, ODP developed separate guidance that required every state to develop a homeland security strategy as a condition of receiving grant funds. Specifically, ODP required states to develop homeland security strategies that would provide a roadmap of where each state should target grant funds for fiscal years 1999 to 2001 (subsequently extended to fiscal year 2003). To assist the states in developing these strategies, state agencies and local jurisdictions were directed to conduct needs assessments on the basis of their own threat and vulnerability assessments. The needs assessments were to include related equipment, training, exercise, technical assistance, and research and development needs. In addition, state and local officials were to identify current and required capabilities of first responders to help determine gaps in capabilities. 
ODP directed the states in fiscal year 2003 to update their homeland security strategies to better reflect post-September 11 realities and to identify progress on the priorities originally outlined in the initial strategies. As with these initial strategies, the updated strategies included goals and objectives the states wanted to achieve to meet homeland security needs, such as upgrading emergency operations centers and command posts. As directed by statute, ODP required completion and approval of these updated strategies as a condition for awarding fiscal year 2004 grant funds. As of July 2004, ODP had approved or conditionally approved all state strategies and awarded all fiscal year 2004 SHSGP funds. Figure 3 shows an overview of the state homeland security assessment and strategy development process in place for fiscal years 1999 through 2003. In conjunction with the development of the states’ updated homeland security strategies, ODP revised its approach to how states and localities reported on grant spending and use. Specifically, ODP took steps to shift the emphasis away from reporting on specific items purchased and toward results-based reporting on the impact of states’ expenditures on preparedness. ODP maintains a list of authorized items that all states and localities were required to use as a guideline for making purchases. This evolving list, comprising hundreds of first responder items, is arranged by category, such as personal protection equipment; explosive device mitigation and remediation equipment; CBRNE search and rescue equipment; interoperable communications equipment; and more. Under this arrangement, states and localities consulted ODP’s authorized equipment list and selected the equipment and quantity they planned to purchase—including such diverse items as personal protection suits for dealing with hazardous materials and contamination, bomb response vehicles, and medical supplies. 
This information is in turn listed on itemized budget detail worksheets that localities submitted to states for their review. Prior to the fiscal year 2004 grant cycle, states submitted the worksheets to ODP. States also compared purchased items against these worksheets when approving reimbursements to localities. According to ODP, this list-based reporting method made it difficult to track the cumulative impact of individual expenditures on the goals and objectives in a state’s and urban area’s homeland security strategy. While the budget detail worksheets reflected the number and cost of specific items that states and localities planned to purchase, neither states nor ODP had a reporting mechanism to specifically assess how well these purchases would, in the aggregate, meet preparedness planning needs or priorities, or the goals and objectives contained in state or urban area homeland security strategies. To help remedy this situation, ODP revised its approach for fiscal year 2004. Rather than being required to submit budget detail worksheets to ODP, states, urban areas, and local jurisdictions were required instead to submit new Initial Strategy Implementation Plans (ISIP). These ISIPs are intended to show how planned grant expenditures for all funds received are linked to one or more larger projects, which in turn support specific goals and objectives in either a state or urban area homeland security strategy. The state administrative agency is responsible for submission of all ISIPs to ODP within 60 days of the state’s grant award. The final submission is to include one ISIP from the state administrative agency if the agency retains a portion of the funding, and one ISIP for every local jurisdiction, state agency, or nongovernmental organization receiving grant funds. ODP said that almost all of the states have submitted their ISIPs. 
In addition to the ISIPs, ODP now requires the states to submit biannual strategy implementation reports showing how the actual expenditure of grant funds at both the state and local levels was linked by projects to the goals and objectives in the state and urban area strategy. According to ODP, this reporting process is intended to better enable states and ODP to track grant expenditures from all funding sources against state and urban area homeland security strategies as well as collect critical project output and performance data. The first biannual strategy implementation reports covering the 6-month period ending December 31, 2004, were due to ODP on January 31, 2005. At the time of our review, it was too early to determine whether the new approach would improve expenditure tracking and performance reporting. While progress has been made in updating state homeland security strategies and planned improvements for reporting and tracking grant- related expenditures are under way, some federal, state, and local officials expressed concerns about the accuracy of the needs assessments on which the state strategies were based. When ODP instructed states and local jurisdictions to update their fiscal year 1999 needs assessments in fiscal year 2003, the agency told them not to constrain their estimates of needs to a specific period of time or take potential sources of funding into account. At the same time, ODP instructed states to review and analyze local jurisdictions’ needs assessments and the aggregated results before submitting their needs assessment data to ODP. The needs assessments for equipment received by ODP from 56 states and territories as a result of this process totaled $352.6 billion. By contrast, the funding available for SHSGP I and II in fiscal year 2003 totaled roughly $2.1 billion. State and local officials in three of the five states we visited cited concerns about the accuracy of the needs assessments for their individual states. 
For example, the needs assessment for one state we reviewed amounted to about $11.8 billion—nearly 300 times the $39.5 million in total state homeland security grant funds awarded to the state in fiscal year 2003. Grant managers in this state said that they had reviewed the local jurisdictions’ threat estimates and determined that, because of a misinterpretation of the term “threat” by local officials, the number of critical assets needing protection was higher than estimated by the state. In their opinion, the local jurisdictions included items in their needs assessments that were not needed to protect the state’s critical assets. Nevertheless, state officials did not revise the aggregated needs assessment estimates included in their state strategy. ODP conditionally approved the strategy for this state, noting, among other things, a “disconnect” between the state’s mission and goals and that time lines were “too broad” and “not realistic.” Grant managers in a second state said that the state did not base its strategy on the needs assessments prepared by the local jurisdictions, in part, because they judged the unconstrained assessments for equipment to be unrealistically high—approximately $13 billion over an open-ended, multiyear period. While the state submitted the total of these local assessments to ODP, it based its strategy on its own planning procedures for 1 year only, resulting in a $92 million estimate of needs. After discussions with ODP, the state later submitted a broader, multiyear $9.6 billion needs assessment for equipment. ODP has taken steps to address its concerns, and some states’ concerns, related to the estimates included in the needs assessments. In a conference held with state officials in March 2004, ODP personnel discussed concerns that arose from their review of aggregated needs assessment data and identified some possible sources of the problems.
They determined that, before submitting their fiscal year 2003 needs assessments to ODP, states might not have adequately considered such factors as mutual aid agreements for first responder assistance within jurisdictions or whether jurisdictions within a region could share resources, rather than submit separate or overlapping requests for first responder equipment. In response, ODP requested the states to validate and revise, if necessary, the needs assessment data to take these factors into account and to resubmit their assessments. States were to submit their validated assessments to ODP by October 15, 2004. According to an ODP document, ODP is currently completing its analysis of the assessment data. In addition to the issues raised about the accuracy of the fiscal year 2003 needs assessments, other factors may affect ODP’s and states’ abilities to identify and assess first responder needs and priorities. For example, according to some state officials we interviewed as well as recent reports by DHS’s Office of Inspector General (IG) and the House Select Committee on Homeland Security, efforts by state and local jurisdictions to prioritize expenditures to enhance first responder preparedness have been hindered by the lack of clear guidance in defining the appropriate level of preparedness and setting priorities to achieve it. Additionally, in our recent report on the management of first responder grants in the National Capital Region, we reported that the lack of national preparedness standards that could be used to assess existing first responder capacities (such as the number of persons per hour that could be decontaminated after a chemical attack), identify gaps in those capacities, and measure progress in achieving specific performance goals was a challenge. We also reported that effectively managing federal first responder grant funds requires the ability to measure progress and provide accountability for the use of public funds. 
This requires a coordinated strategic plan for enhancing preparedness, performance standards to guide how funds are used to enhance first responder capacities and preparedness, and data on funds available and spent on first responder needs. National performance standards that reflect post-September 11 priorities are being developed for assessing domestic preparedness capabilities and identifying gaps in those capabilities. ODP has submitted to the Secretary of DHS a definition of a national preparedness goal that is intended to provide assurance of the nation’s capability to prevent, prepare for, respond to, and recover from major events, especially terrorism. ODP plans call for achieving the full capability needed to sustain the preparedness levels required by the new national standards by September 2008. In order to develop performance standards that will allow ODP to measure the nation’s success in achieving this goal, ODP is using a capabilities-based planning approach—one that defines the capabilities required by states and local jurisdictions to respond effectively to likely threats. These capability requirements are to establish the minimum levels of capability required to provide a reasonable assurance of success against a standardized set of 15 scenarios for threats and hazards of national significance. ODP’s efforts to develop national preparedness standards are, in part, a response to Homeland Security Presidential Directive-8 (HSPD-8), issued by the President in December 2003. HSPD-8 called for a new national preparedness goal and performance measures, standards for preparedness assessments and strategies, and a system for assessing the nation’s overall preparedness.
The directive required the DHS Secretary to submit the new national preparedness goal to the President through his Homeland Security Council for review and approval prior to, or concurrently with, DHS’s fiscal year 2006 budget submission to the Office of Management and Budget in September 2004. HSPD-8 also requires the preparation and approval of statewide, comprehensive all-hazards preparedness strategies in order to receive federal preparedness assistance at all levels of government, including grants, after fiscal year 2005. As part of the HSPD-8 implementation process, ODP plans to develop a list of capability requirements by the end of January 2005 in keeping with the fiscal year 2005 DHS appropriations act. To help define the capabilities that jurisdictions should set as targets, ODP first drafted a list of tasks required to prevent or respond to incidents of national consequence. They include such generic tasks as integrating private-sector entities into incident response activities or coordinating housing assistance for disaster victims. The list of target capabilities includes the policies, procedures, personnel, training, equipment, and mutual aid arrangements needed to perform the tasks required to prevent or respond to the national planning scenarios. ODP further plans to develop performance measures on the basis of the target capability standards that define the minimal acceptable proficiency required in performing the tasks outlined in the task list. ODP plans to complete initial development of the performance measures by March 2005 and to refine them subsequently. According to ODP’s plan, the measures will allow the development of a rating methodology that incorporates preparedness resources and information about overall performance into a summary report that represents a jurisdiction’s or agency’s ability to perform essential prevention, response, or recovery tasks.
The office acknowledges that this schedule may result in a product that requires future incremental refinements but has concluded that this is preferable to spending years attempting to develop a “perfect” process. ODP held a workshop in mid-October 2004 to obtain input from representatives from states, national associations, and other federal departments and agencies regarding the implementation of HSPD-8. At the workshop, some participants voiced concerns that the process, among other things, was moving too fast and did not consider the state and local needs assessments that had already been done. In addition, some participants believed that better communication and a more collaborative process were needed. ODP officials promised to address the participants’ concerns and asked for additional input on how ODP could better implement the process and work better with state and local jurisdictions. ODP has taken steps to improve its oversight procedures with respect to state, urban area, and local grantees. ODP is responsible for ensuring administrative and programmatic compliance with relevant statutes, regulations, policies, and guidelines of the grants it manages. ODP also monitors the progress that states make toward the goals and objectives contained in their homeland security strategies. Prior to September 11, 2001, ODP formally monitored grantees through such activities as office-based reviews at ODP of grantees’ financial reports and other documents, followed by on-site visits to state grant officials. Office-based reviews entail a review of grant files to ensure that all grant documentation is complete and up-to-date and that any apparent problems are addressed through follow-up telephone or e-mail contact with the state or urban area. Upon completion of an office-based review, an ODP preparedness officer prepares a memorandum for the file. This review usually takes place before an on-site visit is scheduled, according to ODP.
During an on-site visit, an ODP preparedness officer is to discuss administrative and financial issues and programmatic issues such as whether the state or urban area is meeting the goals and objectives in the homeland security strategies. The ODP preparedness officer is to prepare a monitoring report for each on-site visit. ODP officials told us that formal on-site monitoring visits were temporarily discontinued after September 11, 2001, because of a high volume of work. For fiscal years 2002 and 2003, ODP did not set formal monitoring goals, such as a specific number of on-site visits to be made in a given year. ODP officials said they continued to maintain active, almost daily contact with the states by telephone, e-mail, and regular correspondence and through informal visits to monitor programmatic and financial aspects of the grants; however, no memorandums or formal site-visit reports were filed during that period. In fiscal year 2004, ODP updated its grant-monitoring guidance and established new monitoring goals. According to the guidance, at least one office file review and one on-site visit—resulting in a monitoring visit report—should be completed for each state (inclusive of urban area grantees) each fiscal year. As of September 30, 2004, ODP had completed 44 office file reviews and 44 on-site visits for the 56 states and territories. According to ODP, of the remaining 12 reviews and visits for the fiscal year 2004 monitoring cycle, 8 had been conducted as of December 2004. ODP officials said that these reviews and visits were delayed, in part, because of turnover in preparedness officer positions and scheduling problems. These on-site monitoring visits are a principal tool for, among other things, ascertaining a grantee’s progress on its strategy implementation, and noting problems with implementing the grant program and the steps the grantee and ODP will take to resolve them.
These on-site visits are needed to track whether and how grantees are managing their program funds. ODP cited staffing challenges that have affected its grant management in general. ODP has made progress in filling authorized staff positions, but vacancies remain. ODP had 146 full-time equivalent positions authorized for fiscal years 2003 and 2004, 30 of which were preparedness officers. As of September 2004, ODP had filled 138 of these positions compared with 63 filled positions at the end of fiscal year 2003. Of the eight vacancies remaining, five were preparedness officer positions. In addition to performing office-based and scheduled on-site monitoring, these officers serve as day-to-day liaisons to designated states. According to ODP, the ODP preparedness officers currently have responsibility for one to five states each, depending on the state’s population. ODP officials told us that, in hiring staff, they face challenges shared by other agencies. ODP has acknowledged that it experienced significant staffing shortages in fiscal years 2002 and 2003 because of a hiring freeze. In addition, officials cited other factors, including staff turnover, the lack of recruitment and relocation bonuses, the high cost of living in the Washington metropolitan area, and competition with other DHS entities and contracting firms for high-quality candidates. These officials also said that the lengthy federal hiring process is further extended by the need to conduct security clearances for job candidates. To deal with some staff shortages, ODP has relied on outside contractors and temporary employees, but they are not working directly with states and local jurisdictions on grants, and none are ODP preparedness officers. State and local officials in two of the five states visited also cited a lack of sufficient state and local personnel to administer and manage their grant programs. 
While the fiscal years 2002 and 2003 grants provided funding that states and local jurisdictions could use to administer the grants, these officials said that the 3 percent limit on grant management and administrative costs imposed by ODP in the fiscal year 2003 SHSGP II was not sufficient to cover the costs of administering and managing the grants. This allowance can be used at the state and/or local levels, but the combined allowance cannot exceed 3 percent of the total first responder preparedness grant funds for each state. For SHSGP II first responder preparedness grant funds, the allowable administrative costs ranged among all states from a low of about $102,000 to a high of about $3.1 million per state. Some officials said they have not been able to hire the personnel necessary to administer and manage the grant programs, in part, because of the limit on funds used for administrative costs. DHS’s IG and Homeland Security Advisory Council Task Force also cited similar reports from state and local officials they spoke with. In responding to DHS’s IG report, ODP said that the homeland security grant programs allow for the hiring of both full- and part-time personnel and contractors to implement the program and that this option could be more widely used by states to address the issue of inadequate staffing. ODP officials recently told us that the fiscal year 2005 grant guidelines allow states to retain 3 percent of the total grant award and local jurisdictions to use 2.5 percent of their grant allocation for management and administrative purposes. According to these officials, this change should alleviate some of the staffing issues. Congress, the Conference of Mayors, some state and local officials, and others expressed concerns about the time ODP was taking to award grant funds to states and for states to transfer grant funds to local jurisdictions.
For SDPP and SHSGP I grants, ODP was not required to award grant funds to states within a specific time frame. During fiscal year 2002, ODP took 123 days to make the SDPP grant application available to states and, on average, about 21 days to approve states’ applications after receipt. For SHSGP II, however, the appropriations statute required that ODP make the grant application available to states within 15 days of enactment of the appropriation and approve or disapprove states’ applications within 15 days of receipt. According to ODP data for SHSGP II, ODP made the grant application available to states within the required deadline and awarded over 90 percent of the grants within 14 days of receiving the applications. For SHSGP II, the appropriations statute also mandated that states submit grant applications within 30 days of the grant announcement. According to ODP data, all states met the statutory 30-day mandate. The average number of days from grant announcement to application submission declined from about 81 days in fiscal year 2002 to about 23 days for SHSGP II. To expedite the transfer of grant funds from the states to local jurisdictions, ODP program guidelines and subsequent appropriations acts imposed additional deadlines on states. For SDPP, there were no mandatory deadlines or dates by which states should transfer grant funds to localities. One of the states we visited, for example, took 91 days to transfer the SDPP grant funds to a local jurisdiction, while another state we visited took 305 days. In addition, a DHS IG report found that for SDPP, two of the states it visited took 73 and 186 days, respectively, to transfer funds to local jurisdictions. Beginning with SHSGP I, ODP required in its program guidelines that states transfer grant funds to local jurisdictions within 45 days of the grant award date. Congress subsequently included this requirement in the appropriations statute for SHSGP II grant funds.
To ensure compliance, ODP required states to submit a certification form indicating that all awarded grant funds had been transferred within the required 45-day period. States that were unable to meet the 45-day period had to explain the reasons for not transferring the funds and indicate when the funds would be transferred. According to ODP, for SHSGP I and II, respectively, 33 and 31 states certified that the required 45-day period had been met. To further assist states in expediting the transfer of grant funds to local jurisdictions, ODP also modified its requirements for documentation to be submitted as part of the grant application process for fiscal years 2002 and 2003. In fiscal year 2002, ODP required states to submit budget detail worksheets and program narratives indicating how the grant funds would be used for equipment, exercises, and administration—and have them approved. If a state failed to submit the required documentation, ODP would award the grant funds, with the special condition that the state could not transfer, expend, or draw down any grant funds until the required documentation was submitted and approved. In fiscal year 2002, ODP imposed special conditions on 37 states for failure to submit the required documentation and removed the condition only after the states submitted the documentation. The time required to remove the special conditions ranged from about 1 month to 21 months. For example, in one state we reviewed, ODP awarded SDPP grant funds and notified the state of the special conditions on September 13, 2002; the special conditions were removed about 6 months later on March 18, 2003, after the state had met those conditions. However, in fiscal year 2003, ODP allowed states to move forward more quickly, by permitting them to transfer grant funds to local jurisdictions before all required grant documents had been submitted. 
If a state failed to submit the required documentation for SHSGP I, ODP awarded the grant funds and allowed the state to transfer the funds to local jurisdictions. While the state and local jurisdictions could not expend—and the state could not draw down—the grant funds until the required documentation was submitted and approved, they could plan their expenditures and begin state and locally required procedures such as obtaining approval of the state legislature or city council to use the funds. For SHSGP I, ODP imposed special conditions on 47 states for failure to submit the required documentation and removed the condition only after the states submitted the documentation. The special conditions were removed approximately 1 month to 15 months after the grant funds were awarded to the states. For the SHSGP II grant cycle, in order to further expedite the award process and availability of fiscal year 2003 funds for expenditure, ODP no longer required states to submit the budget detail worksheets and certain other documents as part of the grant application process. Rather, these documents could be submitted later with the state’s biannual progress report. Thus, states were able to transfer, expend, and draw down grant funds immediately after ODP awarded the grant funds. (See appendix III for grant award and distribution timelines for selected state and local grantees.) Despite congressional and ODP efforts to expedite the award of grant funds to states and the transfer of those funds to localities, some states and local jurisdictions could not expend the grant funds to purchase equipment or services until other, nonfederal requirements were met. Some state and local officials’ ability to spend grant funds was complicated by the need to meet various state and local legal and procurement requirements and approval processes, which could add months to the process of purchasing equipment after grant funds had been awarded. 
For example, in one state we visited, the state legislature must approve how the grant funds will be expended. If the state legislature is not in session when the grant funds are awarded, it could take at least 4 months to obtain state approval to spend the funds. In another state we visited, a city was notified on July 17, 2003, that SHSGP I grant funds were available for use, but the city council did not vote to accept the funds until almost 4 months later. A 2004 report by the House Select Committee on Homeland Security also cited instances of slowness at the state and local government levels in approving the acceptance and expenditure of grant funds. For example, according to the committee report, one county took about 7 months after receiving its SHSGP I grant award to get authorization to spend the grant funds. Some state and local officials we talked with said that complying with their normal procurement regulations could also take months. They said that these regulations require, among other things, competitive bidding for certain purchases—a frequently lengthy process in their view. Some states, in conjunction with DHS, have modified their procurement practices to expedite the procurement of equipment and services. Officials in two of the five states we visited told us they established centralized purchasing systems that allow equipment and services to be purchased by the state on behalf of local jurisdictions, freeing them from some local legal and procurement requirements. As reported by the House Select Committee on Homeland Security in April 2004, many states were looking to move to a centralized purchasing system for the same reason. In addition, the DHS’s Homeland Security Advisory Council Task Force reported that several states developed statewide procurement contracts that allow local jurisdictions to buy equipment and services using a prenegotiated state contract. 
According to DHS, it has offered options for equipment procurement, through agreements with the U.S. Department of Defense’s Defense Logistics Agency and the Marine Corps Systems Command, to allow state and local jurisdictions to purchase equipment directly from their prime vendors. DHS said that these agreements provide an alternative to state and local procurement processes and often result in a more rapid product delivery at a lower cost. For example, one state we visited is using a Defense Logistics Agency prime vendor to make equipment purchases. Local jurisdictions can order the equipment without having to go through their own locally based competitive bidding process. Congress has also taken steps to address a problem that some states and localities cited concerning a federal policy that provides reimbursement to states and localities only after they have incurred an obligation, such as a purchase order, to pay for goods and services. Until fiscal year 2005, after submitting the appropriate documentation, states and localities could receive federal funds to pay for these goods and services several days before the payment was due so that they did not have to use their own funds for payment. However, according to DHS’s Homeland Security Advisory Council Task Force, many municipalities and counties had difficulty participating in this process either because they did not receive their federal funds before payment had to be made or their local governments required funds to be on hand before commencing the procurement process. Officials in one city we visited said that, to solve the latter problem, the city had to set up a new emergency operations account with its own funds. The task force recommended that for fiscal year 2005, ODP homeland security grants be exempt from the Cash Management Improvement Act to allow funds to be provided to states and municipalities up to 120 days in advance of expenditures. 
The fiscal year 2005 DHS appropriations legislation includes a provision that exempts formula-based grants (SHSGP) and discretionary grants, including UASI and other ODP grants, from the act’s requirement that an agency schedule the transfer of funds to a state so as to minimize the time elapsing between the transfer of funds from the U.S. Treasury and the state’s disbursement of the funds for program purposes. In addition, DHS efforts are under way to identify and disseminate best practices, including how states and localities manage legal and procurement issues that affect grant distribution. DHS’s Homeland Security Advisory Council Task Force stated in a June 2004 report that some jurisdictions have been “very innovative” in developing mechanisms to support the procurement and delivery of emergency-response-related equipment. For example, one state cited in the report was in the process of forming a procurement working group to address issues as they arise. The report also cited that several states have developed statewide procurement contracts that allow municipal government units to buy first responder equipment and services. One state created a password-protected Web site that allowed local jurisdictions to view their allocation balance and place orders for equipment up to their funding allocation limit. According to the task force, these efforts substantially reduced the time it takes for localities to purchase and receive their equipment. The task force recommended that, among other things, DHS should, in coordination with state, county, and other governments, identify, compile, and disseminate best practices to help states address grant management issues.
According to ODP, in an effort to complement and reinforce the task force’s recommendations, in partnership with the National Criminal Justice Association, it established a new Homeland Security Preparedness Technical Assistance Program service to enhance the grant management capabilities of state administrative agencies. In an August 30, 2004, Information Bulletin, ODP requested that state administrative agencies complete a survey designed to gather information on their grant management technical needs and best practices related to managing and accounting for ODP grants, including the procurement of equipment and services at the state and local levels. The information that ODP is gathering is to serve as a foundation for the development of a tailored, on-site assistance program for states to ensure that identified best practices are implemented and critical grant management needs and problems are addressed. According to ODP, this program will be operational in December 2004. Despite efforts to streamline local procurement practices, some challenges remain at the state and local levels. An ODP requirement that is based on language in the appropriations statute could delay procurements, particularly in states that have a centralized purchasing system. Specifically, for the fiscal year 2004 grant cycle, states are required by statute to pass through no less than 80 percent of total grant funding to local jurisdictions within 60 days of the award. In order for states to retain grant funds beyond the 60-day limit, ODP requires states and local jurisdictions to sign a memorandum of understanding (MOU) indicating that states may retain—at the local jurisdiction’s request—some or all funds in order to make purchases on a local jurisdiction’s behalf. The MOU must specify the amount of funds to be retained by the state.
A state official in one state we visited said that, while the state’s centralized purchasing system had worked well in prior years, the state has discontinued using it because of the MOU requirement, since establishing MOUs with every locality might take years. The state transferred the fiscal year 2004 grant funds to local jurisdictions so they can make their own purchases. In another state, officials expressed concern that this requirement would negatively affect their ability to maintain homeland security training provided to local jurisdictions at state colleges that had been previously funded from local jurisdictions’ grant funds. In a June 23, 2004, ODP Information Bulletin, ODP strongly recommended that states retaining funds at the state level on behalf of local jurisdictions have the MOUs reviewed by DHS’s Office of General Counsel to ensure that the MOUs meet the requirements of the appropriation language and ODP program guidelines. ODP officials told us that they were assisting states to adapt to the new requirement. The terrorist attacks of September 11, 2001, forced the nation to reexamine its requirements for domestic safety, including the capacity and resources that would be needed at the state and local levels to prevent, prepare for, respond to, or recover from potential future threats from terrorists and minimize their impact. Congress addressed this concern in the months after the attacks, in part by increasing the grant funds that states would receive to enhance their emergency first responder and public health and safety capabilities to deal with terrorist attacks involving CBRNE weapons. Not surprisingly, the enormous effort required to bolster first responder capacity nationwide posed challenges for government administrators at the federal, state, and local levels. 
A major challenge in administering first responder grants is balancing two goals: minimizing the time it takes to distribute grant funds to state and local first responders, and ensuring appropriate planning and accountability for the effective use of grant funds. ODP’s approach to striking this balance has been evolving from experience, congressional action, and feedback from states and local jurisdictions. Over the last 2 years, working in concert with state governments and others, DHS has made progress, through ODP, in managing its state homeland security grant programs. ODP has addressed management problems regarding how grants were awarded and funds distributed, which arose following the dramatic increase in federal funding for first responders after September 11. While some localities continue to face legal and procurement challenges that can tie up access to grant funds, ODP is taking steps to provide technical assistance that will, among other things, give state and local officials access to best-practice information on how other jurisdictions have successfully addressed procurement challenges. As ODP continues to administer its state and urban first responder grant programs, it will likely face new challenges. In particular, as DHS and ODP work to develop national preparedness standards, it will be important to listen and respond fully to the concerns of states, local jurisdictions, and other interested parties about, among other things, the planned time frames for implementing the new standards. It will also be important to ensure that there is adequate collaboration and guidance for moving forward. Effective collaboration among ODP, states, and others in developing appropriate preparedness performance goals and measures will be essential to ensuring that the nation’s emergency response capabilities are appropriately identified, assessed, and strengthened. DHS generally agreed with the report’s findings. 
In particular, the agency concurred that it faced a number of challenges related to effectively managing first responder grants and highlighted the progress it has made in addressing them. The agency expressed the view, however, that the progress already achieved in meeting these challenges was not appropriately reflected in the title of the report. We disagree. As DHS notes, our report acknowledges the efforts the agency has made in revising grant procedures to expedite awards while maintaining accountability. Nevertheless, not all of the agency’s efforts have gone smoothly, as attested, for example, by the problems that DHS and the states experienced in realistically defining first responder equipment needs in 2003. In view of the concerns recently expressed by state and other officials, DHS may, in our view, continue to face significant challenges in meeting its timetables to develop realistic capability requirements and performance measures for first responders. DHS also provided further details on some grant management issues we raised in the report. We have revised the report as appropriate to include these and other technical comments provided. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees and subcommittees, the Secretary of Homeland Security, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or wish to discuss it further, please contact me at (202) 512-8777 or jenkinswo@gao.gov. Key contributors to this report are listed in appendix VI. 
We initially addressed our researchable questions regarding the Office for Domestic Preparedness’s (ODP) structure and processes for program and financial management of its grants and its monitoring policies and processes in a briefing to the Subcommittee on Homeland Security of the House Committee on Appropriations. In addressing those questions, we identified 25 domestic preparedness programs managed by ODP in fiscal years 2002 and 2003. For this report, we selected the five largest programs in terms of federal funding provided to state and local jurisdictions for our detailed review. Three of the five programs that addressed state and local preparedness issues were basically for the same purposes but received funding from separate appropriations. These were the fiscal year 2002 State Domestic Preparedness Program (SDPP) and the fiscal year 2003 State Homeland Security Grant Programs (SHSGP) I and II. The other two programs were awarded to selected urban areas. These were the fiscal year 2003 Urban Areas Security Initiatives (UASI) I and II grant programs. We also selected Arizona, California, Florida, Missouri, and Pennsylvania and 19 local jurisdictions within those states: The city of Phoenix and Pima, Maricopa, and Coconino Counties in Arizona. The cities of Los Angeles and Sacramento and the County of Los Angeles in California. The city of Miami, Miami-Dade County, and the Miami and Tallahassee Regional Domestic Security Task Forces in Florida. The city of St. Louis, St. Louis and Franklin Counties, and the rural cities of Jackson and Sikeston in Missouri. The city of Philadelphia and the Southeastern and South Central Regional Terrorism Task Forces in Pennsylvania. The five states were selected on the basis of the amount of ODP grant funding received, population size, and other factors. The local jurisdictions were selected on the basis of a mix of urban and rural locations to include cities and counties that received UASI funding. 
To determine how SHSGP and UASI were administered in fiscal years 2002 and 2003 so that ODP could ensure that grant funds were spent in accordance with grant guidance and state preparedness planning, we interviewed ODP officials and homeland security and grant management officials and first responders in the five selected states and from selected local jurisdictions within those states. We also obtained and reviewed related ODP policy guidance and program guidelines for the SDPP/SHSGP and UASI grant programs. We also obtained and reviewed documentation on grant awards to state and local jurisdictions. We spoke with ODP officials about their grant monitoring and reporting processes and obtained and reviewed related ODP grant-monitoring guidance and monitoring reports for fiscal year 2004. We also obtained and analyzed data on the number of office-based and on-site-monitoring reviews conducted in fiscal year 2004. We reviewed these data for obvious inconsistency errors and completeness and compared these data with on- site-monitoring reports prepared by ODP. On the basis of these efforts, we determined that the monitoring review data were sufficiently reliable for the purpose of this report. In addition, we spoke with ODP and state and local officials about staffing issues that affect grant management. We also interviewed ODP and state and local officials and reviewed documentation about ODP’s state homeland security needs assessment and strategy development process and the similar needs assessment and strategy development process for selected urban areas. In addition, we obtained and reviewed the state domestic preparedness strategies for the selected five states. In conjunction with this effort, we also obtained information about the steps that ODP is taking to implement Homeland Security Presidential Directive-8 regarding national preparedness goals and performance standards. 
We also reviewed relevant reports on homeland security and domestic preparedness that discuss the development of national performance standards. To determine the time frames for awarding and distributing SHSGP and UASI grants established by ODP grant guidance or by law, and how these time frames affected the grant cycle, we obtained and analyzed appropriations acts and program guidelines for the grant programs. We also met with ODP officials and state homeland security and grant management officials, and local grant managers and first responders in the selected states and local jurisdictions to discuss how the time lines affected the grant cycle. We obtained and analyzed data on the time frames associated with the grant award and distribution processes. We reviewed these data for obvious inconsistency errors and completeness and compared these data with hard-copy documents we obtained for these states. When we found discrepancies, we brought them to the attention of ODP and state and local officials and worked with them to correct the discrepancies before conducting our analyses. On the basis of these efforts, we determined that the time-frame data were sufficiently reliable for the purpose of this report. We also obtained information about local procurement policies and practices. In addition, we reviewed recent reports and studies on issues related to federal funding and oversight of grants for first responders. We also obtained grant funding and expenditure data as of July 31, 2004, for the 56 states and territories and the urban areas. Given that the grant funding and expenditure data are used for background purposes only, we did not assess the reliability of these data. We also obtained and analyzed key dates associated with the grant award, distribution, and reimbursement processes for selected states and local jurisdictions. We conducted this work from November 2003 through November 2004 in accordance with generally accepted government auditing standards. 
Given that these grant-funding and drawn-down amounts are used for background purposes only, we did not assess the reliability of these data.
Appendix III: Grant Award, Distribution, and Reimbursement Process for Selected States and Local Jurisdictions
[Timeline graphics, not reproduced here, traced key grant-cycle dates for selected states and local jurisdictions, from ODP’s release of program guidelines and grant awards (some with special conditions), through state approval of local grant applications and local acceptance of grant shares, to the release of funds for purchases and initial expenditure reports.]
Related GAO Products
Homeland Security: Effective Regional Coordination Can Enhance Emergency Preparedness. GAO-04-1009. Washington, D.C.: September 15, 2004. Homeland Security: Federal Leadership Needed to Facilitate Interoperable Communications between First Responders. GAO-04-1057T. Washington, D.C.: September 8, 2004. 
Homeland Security: Federal Leadership and Intergovernmental Cooperation Required to Achieve First Responder Interoperable Communications. GAO-04-740. Washington, D.C.: July 20, 2004. Homeland Security: Federal Leadership and Intergovernmental Cooperation Required to Achieve First Responder Interoperable Communications. GAO-04-963T. Washington, D.C.: July 20, 2004. Homeland Security: Coordinated Planning and Standards Needed to Better Manage First Responder Grants in the National Capital Region. GAO-04-904T. Washington, D.C.: June 24, 2004. Homeland Security: Management of First Responder Grants in the National Capital Region Reflects the Need for Coordinated Planning and Performance Goals. GAO-04-433. Washington, D.C.: May 28, 2004. Emergency Preparedness: Federal Funds for First Responders. GAO-04-788T. Washington, D.C.: May 13, 2004. Homeland Security: Challenges in Achieving Interoperable Communications for First Responders. GAO-04-231T. Washington, D.C.: November 6, 2003. Homeland Security: Reforming Federal Grants to Better Meet Outstanding Needs. GAO-03-1146T. Washington, D.C.: September 3, 2003. In addition to those persons mentioned above, David Alexander, Leo Barbour, Amy Bernstein, Mona Nichols Blake, Laura Helm, Carlos Garcia, Jessica Kaczmarek, and Katrina Moss made key contributions to this report.
The Office for Domestic Preparedness (ODP)--originally established in 1998 within the Department of Justice to help state and local first responders acquire specialized training and equipment needed to respond to terrorist incidents--was transferred to the Department of Homeland Security upon its creation in March 2003. After September 11, 2001, the scope and size of ODP's grant programs expanded. For example, from fiscal year 2001 through fiscal year 2003, ODP grants awarded to states and some urban areas grew from about $91 million to about $2.7 billion. This growth raised questions about the ability of ODP and states to ensure that the domestic preparedness grant programs--including statewide and urban area grants--are managed effectively and efficiently. GAO addressed (1) how statewide and urban area grants were administered in fiscal years 2002 and 2003 so that ODP could ensure that grant funds were spent in accordance with grant guidance and state preparedness planning and (2) what time frames Congress and ODP established for awarding and distributing grants, and how time frames affected the grant cycle. ODP has established and refined grant award procedures for states and localities to improve accountability in state preparedness planning. For fiscal years 2002 and 2003, ODP developed procedures and guidelines for awarding statewide and urban area grants to states and for determining how states and localities could expend funds and seek reimbursement for first responder equipment or services. ODP gave states flexibility by allowing them to determine how grant funds were to be managed and distributed within their states. In fiscal year 2003, ODP required states to update homeland security strategies and related needs assessments prepared in earlier years. These efforts are intended to guide states and localities in targeting grant funds. ODP also took steps to improve grant oversight procedures. 
Finally, to help meet mandates contained in a presidential directive, ODP has begun drafting national preparedness standards to identify and assess gaps in first responder capabilities on a national basis. Congress and ODP have acted to expedite grant awards by setting time limits for grant application, award, and distribution processes. For fiscal year 2002 through February 2003, the appropriations statutes did not require ODP to award grant funds to states within a specific time frame. Then, in April 2003, the supplemental appropriations act imposed new deadlines on ODP and the states. As a result, ODP reported that all states submitted grant applications within the mandated 30 days of the grant announcement, and that over 90 percent of grants were awarded within the mandated 15 days of receipt of the applications. ODP also took steps to expedite the transfer of funds from states to local jurisdictions. Nevertheless, the ability of states and localities to spend grant funds expeditiously was complicated by the need to adhere to various legal and procurement requirements. ODP is identifying best practices to help states address the issue. In reviewing a draft of the report, the Department of Homeland Security generally agreed with GAO's findings; however, it questioned whether the report's title adequately reflected the agency's progress in meeting grant management challenges.
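The statutory windows described above (30 days from announcement to application, 15 days from receipt to award) amount to a simple date check. A minimal sketch, with hypothetical dates and an illustrative function name:

```python
# Sketch of the fiscal 2003 supplemental-appropriation deadlines
# described above: states had 30 days from the grant announcement to
# apply, and ODP had 15 days from receipt of an application to make
# the award. Names and example dates are illustrative.
from datetime import date, timedelta

APPLICATION_WINDOW = timedelta(days=30)  # announcement -> application
AWARD_WINDOW = timedelta(days=15)        # application -> award

def met_deadlines(announced: date, applied: date, awarded: date) -> bool:
    """True only if both the state and ODP stayed within their windows."""
    return (applied - announced <= APPLICATION_WINDOW
            and awarded - applied <= AWARD_WINDOW)

print(met_deadlines(date(2003, 4, 30), date(2003, 5, 20), date(2003, 6, 2)))  # True
```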
In November 2001, the Aviation and Transportation Security Act (ATSA) was enacted. Among other things, ATSA required that TSA provide for the screening of all checked baggage for explosives transported on flights departing U.S. commercial airports. Pursuant to ATSA, TSA deployed EDS and ETD equipment to screen checked baggage and identify potential threats from explosives. While TSA is responsible for operating or overseeing the operation of checked-baggage-screening equipment, TSA and S&T share responsibilities for the research and development of checked-baggage-screening technologies. During fiscal year 2006, most research and development functions within DHS, including TSA’s, were consolidated within S&T. After this consolidation, S&T assumed primary responsibility for the research, development, and related test and evaluation of airport checked-baggage-screening technologies. S&T also assumed responsibility from TSA for the TSL, which tests and evaluates technologies under development against TSA-established detection requirements. TSA continues to be responsible for identifying the requirements for new checked-baggage-screening technologies; operationally testing and evaluating technologies in airports; and procuring, deploying, and maintaining technologies. TSA relies on S&T as the central coordination point to manage all explosives-related work involving the TSL, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, and the Air Force Research Laboratory at Tyndall AFB.
Deploying EDSs and ETDs
According to TSA, since fiscal year 2001, TSA has made over $8 billion available to EBSP for activities related to checked-baggage screening. TSA uses two types of technology for checked-baggage screening—the EDS and the ETD—at 462 U.S. commercial airports. EDS is used to identify suspicious bulk items or anomalies in checked baggage that could be explosives or detonation devices. 
In airports that have EDS, it is used for primary screening of checked baggage, while ETD machines are used for secondary screening to help resolve EDS alarms. At airports without EDS, ETD machines are used for primary screening of checked baggage. See figure 1 for a photograph of an EDS machine and figure 2 for a photograph of an ETD machine. TSA deploys EDSs in multiple configurations, such as an in-line configuration and a stand-alone configuration. The in-line configuration integrates EDS with an airport’s baggage-handling system—the conveyor system that sorts and transports baggage for loading onto an aircraft. EDSs in stand-alone configurations are separate baggage-screening units that are not integrated with a baggage-handling system; they are typically located in an airport lobby, although they may be placed elsewhere in the airport. Checked baggage is manually loaded and unloaded on stand-alone EDS machines. As of October 2010, TSA had 2,297 EDS machines in its fleet, 1,938 of which were deployed at airports in the United States. At airports and terminals that do not use EDSs—typically smaller airports—ETD machines are used for primary checked-baggage screening. As of February 2011, TSA estimated that there were about 5,200 ETD machines used for the primary or secondary screening of checked baggage at U.S. commercial airports. TSA certifies the EDSs it deploys to commercial airports for screening checked baggage, based on tests performed by the TSL. 
Specifically, TSA certifies that EDSs, alone or as part of an integrated system, can detect, under realistic operating conditions, the amounts, configurations, and types of explosive material that would be likely to be used to cause catastrophic damage to an aircraft, using requirements developed in consultation with experts from outside TSA. Furthermore, TSA periodically reviews threats to civil aviation security, including: explosive material that presents the most significant threat to civil aviation; the minimum amounts, configurations, and types of explosive material that can cause, or would be expected to cause, catastrophic damage to aircraft in air transportation; and the amounts, configurations, and types of explosive material that can be detected reliably by existing or reasonably anticipated, near-term explosive detection technologies. Currently, TSA requires that EDSs undergo three types of testing—certification testing, integration testing, and operational testing—before it will purchase such equipment. First, TSA verifies that vendors’ explosives detection systems meet—that is, are capable of detecting explosives in accordance with—the TSA-established explosives detection requirements through the certification testing process. TSA’s decision to certify an EDS relies on the results of independent test and evaluation performed at the TSL. Prior to certification testing, TSL conducts preliminary evaluations of vendors’ EDSs, known as certification readiness testing (CRT) and pre-certification testing, to determine the extent to which vendors are ready to enter certification testing. During CRT, TSL provides feedback to vendors on their EDSs’ strengths and weaknesses in detecting explosives in order to help vendors make necessary adjustments to their detection software. Second, in addition to being certified as meeting explosives detection requirements, EDSs being deployed in an in-line configuration must also undergo integration testing. 
As part of this testing, machines deployed in an in-line configuration must demonstrate in a controlled environment that they can be successfully integrated with the baggage-handling systems used for checked baggage. Finally, following certification and integration testing, EDSs undergo operational testing in an airport setting to demonstrate that they can reliably and effectively function in a live airport environment. In 2005, TSA revised the explosives detection requirements for the EDS; however, some number of the EDSs are currently configured to detect explosives only at the levels set forth in the 2005 requirements. When TSA established the 2005 requirements, it did not have a plan that identified the appropriate time frames needed to deploy EDSs to meet the requirements. In January 2010, TSA again revised the EDS explosives detection requirements and plans to deploy EDSs meeting these requirements in a tiered and phased approach over a number of years. One tier of requirements consists of three levels and expands the number and types of explosives that EDSs must detect. TSA is in the process of developing another tier of requirements, which will refine the amount (for example, the minimum mass) of an explosive that can cause catastrophic damage to an aircraft. If TSA deploys EDSs that fully meet that one tier of the requirements, TSA must ensure that ETD machines are capable of detecting all of the explosives that EDSs will be able to detect in order to minimize any potential screening difference between the EDS and ETD. In November 2005, TSA revised its explosives detection requirements for EDSs, which had previously been established in 1998 by the Federal Aviation Administration. However, as of January 2011, some number of the EDSs in TSA’s fleet are configured to detect explosives only at the levels established in the 2005 requirements. The remaining EDSs are configured to detect explosives at the 1998 levels. 
When TSA established the 2005 requirements, it did not have a plan with the appropriate time frames needed to deploy EDSs to meet the requirements. Standard practices for program and project management state that specific desired outcomes or results should be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate steps, time frames, and milestones needed to achieve those results. Despite the absence of a plan, TSA officials stated that they must conduct testing to compare the false alarm rates for machines operating at one level of requirements with those operating at another level. According to TSA officials, the results of this testing would allow them to determine whether additional staff are needed at airports to help resolve false alarms once the EDSs are configured to operate at a certain level of requirements. TSA officials reported that they anticipated completing this operational testing in March 2011. According to agency officials, TSA did not begin this operational testing immediately after the previous explosives detection requirements were established in November 2005 because agency officials were aware at the time of a potential further revision of the requirements based upon a planned computer modeling effort to revise the detection standards that became known as Project Newton. TSA and S&T officials told us they had planned to use the results from Project Newton to further revise the explosives detection requirements to reflect the mass of an explosive that would cause catastrophic damage to an aircraft. Although Project Newton did not begin until 2007, TSA officials told us that they were aware of plans to further revise the requirements prior to the initiation of Project Newton and delayed operational testing in anticipation of the results of the computer modeling effort. 
As of April 2011, the EDS explosives detection requirements have not been changed based on the results of the computer modeling because Project Newton is still under way, though TSA officials told us that they plan to use the results of Project Newton to later define a tier of the 2010 EDS requirements. (We discuss the status of Project Newton in more detail in app. II.) However, once it became apparent that the results of Project Newton would not become available in time to further revise the requirements, TSA did not establish a plan with time frames for completing the additional testing related to staffing. Standard practices for planning and project management suggest TSA should have defined the operational testing plan and milestones as part of a road map for assessing potential staffing changes when these EDSs were first deployed after the 2005 requirements were established. Establishing reasonable time frames to complete the operational testing could help TSA ensure it achieves its desired goal of activating EDSs capable of detecting the explosives established in the 2005 requirements in a timely manner. In January 2010, TSA revised the EDS checked-baggage explosives detection requirements partly in response to credible and immediate threats to civil aviation. TSA plans to meet the 2010 EDS requirements using a tiered and phased approach: one tier is to be implemented over a number of years and expands the types of explosives that EDSs must detect, and another tier incorporates the results of the computer-modeling effort known as Project Newton. TSA is not certain when this tier of requirements will be implemented. TSA plans to deploy EDSs that meet one tier of the 2010 requirements in a phased approach beginning in late fiscal year 2011 as part of its planned EDS acquisition. 
Regarding the other tier, TSA officials stated that Project Newton is to enhance TSA’s understanding of explosives effects by simulating hundreds of explosives tests using computer modeling to determine the effects explosives will have when placed in different locations within an aircraft. TSA’s and S&T’s understanding of how explosives affect aircraft has largely been based on data obtained from live-fire explosive tests on retired aircraft hulls and other data. Project Newton is jointly managed and funded by DHS’s S&T and TSA. Through fiscal year 2009, S&T and TSA had invested about $12.5 million in national laboratories for computer modeling activities as part of Project Newton, according to a senior TSA official. We discuss Project Newton and its budget in more detail in appendix II. TSA plans to implement one tier of requirements in a phased approach that consists of three levels (see fig. 3). As our past work has shown, an incremental or phased approach to implementing requirements can reduce risk and make a program more achievable by providing more time to develop and test key technologies. TSA officials told us that the ability to develop EDSs is likely to become increasingly complex as the implementation of requirements progresses. According to TSA, it expects to begin procuring EDSs that meet the 2010 requirements in July 2011. If TSA is successful in deploying EDSs that meet all three levels of the 2010 requirements, TSA’s EDS fleet would be certified to detect more explosives than a fleet meeting the 1998 or 2005 requirements. ETD machines are currently certified to detect somewhat different explosives than EDSs that meet the 2005 EDS detection requirements. However, if TSA purchases and deploys EDSs that fully meet Level C of the 2010 EDS requirements, ETD machines will not be required to detect all of the explosives that can be detected by those EDSs. 
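A minimal sketch of the phased, three-level tier described above, assuming the levels are ordered and cumulative; the level names are hypothetical, as the report confirms only that a "Level C" exists:

```python
# Sketch of a phased requirements tier with three cumulative levels,
# each expanding the set of explosives an EDS must detect. Level names
# and the cumulative ordering are assumptions, not TSA terminology.
PHASE_ORDER = ["Level A", "Level B", "Level C"]

def levels_met(highest_certified: str) -> list:
    """All levels satisfied by a machine certified through the given level."""
    return PHASE_ORDER[: PHASE_ORDER.index(highest_certified) + 1]

print(levels_met("Level B"))  # ['Level A', 'Level B']
```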
TSA’s existing ETD explosives detection requirements identify the types and quantities of explosives materials, that is, traces of explosives, that must be detected and the minimum detection rate for each category of explosive. According to TSA officials, the ETD explosives detection requirements have not been revised because TSA wanted to first focus on revising the EDS explosives detection requirements in time for its planned EDS acquisition, which is aimed at replacing and upgrading its fleet of EDSs used to screen checked baggage. TSA officials stated that they are developing a combined set of explosives detection requirements that could eventually result in the ETD and EDS machines detecting the same explosives. TSA officials stated that the combined set of EDS and ETD requirements would not be expected to be approved until sometime in calendar year 2011. Although combined detection requirements for ETD and EDS are to help ensure that both machines can detect the same explosives, the machines are not expected to be required to detect the same amounts of explosives because the purpose of the ETD is to detect traces of explosives in nanograms while the EDS is designed to detect larger amounts of explosives. At all airports that use EDSs to screen checked baggage, ETD machines are used in conjunction with EDSs to screen checked baggage for explosives. At these airports, if an EDS alarms—indicating that checked baggage may contain an explosive or explosive device that cannot be cleared—ETD machines are used as a secondary screening device in order to attempt to resolve the alarm. However, the differences between the EDS and ETD requirements may impact the resolution of EDS alarms by the ETD in the future. According to TSA’s 2010 EBSP Acquisition Strategy, additional equipment—other than the ETDs currently deployed—is to be employed to support alarm resolution when EDSs that meet the new checked-baggage explosives detection requirements are deployed. 
However, the acquisition strategy does not specify what additional equipment or screening protocols will be employed to resolve alarms, nor does the strategy discuss whether TSA will continue to use ETD equipment to resolve EDS alarms. According to TSA, the agency is currently evaluating what additional technologies and/or changes to screening protocols may be needed to address any potential gap in capability between the newly certified EDSs and ETDs used for alarm resolution. If TSA begins operating EDSs that detect explosives in subsequent phases of the 2010 requirements, this potential screening difference between EDS and ETD will exist until TSA deploys additional equipment and/or implements new screening protocols that could be used for secondary screening. In commenting on this issue, TSA officials stated that checked-baggage-screening technologies are only one layer of security and that other layers of security exist to help address potential threats to the aviation security system. However, officials agreed that they have not yet developed new screening protocols or deployed additional equipment that will address the potential gap in screening capability between EDS and ETD if the new EDSs are deployed. Standards for program management require that specific desired outcomes or results be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate steps and time frames needed to achieve those results. Because TSA decided to revise explosives detection requirements for EDSs prior to revising the ETD requirements, the differences in the requirements may affect TSA’s capability to detect explosives at the levels required in 2010 until TSA identifies the technologies or protocols needed to address the potential gap.
Without a plan to ensure that secondary-screening devices or protocols are in place to resolve EDS alarms if EDSs are deployed with additional capability, it will be difficult for TSA to provide assurances that the potential capability gap has been resolved. TSA has developed an EBSP acquisition strategy to guide its efforts to improve its fleet of checked-baggage-screening machines, but has faced several challenges in implementing plans for the current EDS acquisition under this strategy. First, TSA has experienced challenges in collecting explosives data on the physical and chemical properties of certain explosives needed by vendors to develop EDS detection software and needed by TSA before procuring and deploying EDSs to meet the 2010 requirements. TSA and S&T have experienced these challenges because of problems associated with safely handling and consistently formulating some explosives. Second, the challenges related to data collection for certain explosives have resulted in problems carrying out the EDS procurement as planned. Specifically, attempting to collect data for certain explosives while simultaneously pursuing the EDS procurement has delayed the EDS acquisition schedule by at least 7 months. Finally, EDS vendors have expressed concerns about the extent to which TSA is communicating with the business community about the current EDS procurement. In July 2010, DHS approved TSA’s current acquisition strategy for the EDS, and under this strategy, TSA plans to increase the threat detection capabilities of the EDS using a competitive procurement to purchase and deploy EDS beginning in 2011. According to TSA officials, most of the previous EDS acquisitions were sole source procurements. 
However, TSA is implementing a competitive procurement for the current EDS acquisition, in part, to meet the EBSP acquisition strategy’s goals and objectives. Furthermore, the EBSP acquisition strategy calls for acquiring new EDSs as part of the recapitalization plan to replace aging EDSs. Under the current EDS procurement, TSA plans to award contracts to purchase 260 EDSs, including those in its recapitalization plan, at an estimated cost of $256 million during the fourth quarter of fiscal year 2011. Until TSA begins purchasing machines under the current EDS acquisition to meet the 2010 requirements, the agency has continued to purchase EDSs under existing contracts with current vendors. TSA and S&T have experienced a number of challenges related to collecting data on some explosives needed to procure and deploy EDSs that meet the 2010 requirements. These data are needed both by vendors to develop EDS detection software and by the TSL for the certification testing process, and include such information as the physical and chemical properties of explosives. Participants in the data collection effort include S&T; the TSL, which is taking the lead; the Air Force Research Laboratory (AFRL) at Tyndall AFB, Florida; and Lawrence Livermore National Laboratory. The AFRL is assisting the TSL because the AFRL facility at Tyndall AFB is better equipped than the TSL facility to safely handle certain explosives as part of the testing and data collection efforts, according to S&T and Air Force officials. In the course of collecting data, TSL and AFRL officials determined that some of the explosives were very unstable or volatile and that special care and procedures were required to reliably and safely handle them. This caused delays because TSA and S&T were unable to provide vendors with all of the data on the explosives or simulants that vendors needed to create and test the software used to detect them.
TSL and AFRL also collected scans of explosives as another means to provide vendors with needed data. Because micro-computed tomography (micro-CT) data are obtained by scanning smaller, less dangerous amounts of each explosive, TSL and AFRL officials did not face the same challenges in collecting the micro-CT data that they faced in safely synthesizing and determining the physical and chemical properties of larger amounts, known as full threat weight, of explosives. TSL officials told us that they provided the micro-CT data to help vendors in developing their explosives detection software. Specifically, TSA was able to distribute the micro-CT data to vendors in early fiscal year 2010, but five of the six vendors we interviewed stated that these data were of limited use in developing their explosives detection software. For example, one vendor stated that the micro-CT data provided some guidance, but that there were too many unknowns to fully use the data to develop their explosives detection software. Further, TSL officials stated that providing the micro-CT data to vendors served only as an intermediary step toward providing the full threat-weight data that vendors needed to develop their explosives detection software. TSL officials stated that the micro-CT data could not provide vendors with all of the data they needed to fully develop their explosives detection software to meet the 2010 EDS requirements because vendors need scans of the full threat weight of explosives on their respective EDSs to finish developing their detection software. Because of the limitations of simulants and micro-CT data, TSA and S&T decided to collect and distribute scans of the explosives to vendors using the full threat weight of the explosives specified in the 2010 EDS requirements. These scans were collected using each vendor’s respective EDS equipment. TSA has distributed some, but not all, of the full threat-weight data needed by vendors to develop EDS detection software.
TSL officials stated that they needed full threat-weight data to conduct certification testing. Additionally, five of the six vendors we interviewed agreed that the full threat-weight data will be necessary in order for vendors to develop their explosives detection software. However, all six vendors noted that because of concerns about the safety of handling certain explosives, they are relying on TSA for the full threat-weight data. Further, four of the vendors said that in the past they had access to some explosives and could collect their own data to develop and test their detection software in order to prepare for certification testing. However, because the vendors cannot safely handle certain explosives, they are reliant upon the data provided by TSA. Consequently, until S&T completes the data collection on all identified explosives being performed at the TSL and the AFRL facility at Tyndall AFB, TSA cannot provide all of the data that vendors need to develop their explosives detection software and prepare for certification, nor can the TSL start certification testing of new equipment as part of the current EDS acquisition. TSA’s plans to award contracts for the current EDS acquisition have been delayed by at least 7 months, in part due to the challenges experienced by S&T related to collecting explosives data. TSA officials stated that, initially, they planned to conduct the current EDS acquisition separately from efforts to collect the data needed to deploy EDSs that meet the 2010 requirements. Specifically, TSA officials stated that they planned to complete the data collection before initiating the procurement to buy EDSs that meet the 2010 requirements.
However, officials stated that they subsequently decided to collect explosives data at the same time as implementing the current EDS acquisition because TSA and other stakeholders believed that the data collection effort would be straightforward and that the new requirements could be easily applied to machines procured in the current EDS acquisition. Additionally, program officials stated that procuring and deploying EDSs that meet the 2010 requirements in a phased approach (that is, implementing Level C first, then Level B, then Level A) would help to mitigate any additional challenges and some of the risks associated with collecting data needed for the 2010 requirements. However, TSA and S&T officials acknowledged that pursuing the competitive procurement and explosives data collection at the same time had been more challenging than originally anticipated and had presented problems for the current EDS acquisition. TSA officials stated that all of the 260 EDSs they plan to purchase in 2011 will be upgraded to meet all of the 2010 EDS requirements at a later date. In our prior work on acquisitions, we have reported on the elevated risk of poor program outcomes from the substantial overlap of development, test, and production activities. Specifically, we have identified development cost increases, additional delays in manufacturing and testing schedules, and increased financial risk due to pursuing procurement before testing is complete. By separating the effort to collect data on explosives needed to meet the new requirements from the related competitive procurement, TSA and S&T would have more time to collect data identifying the physical and chemical properties of explosives, provide vendors with the time needed to develop detection software, and attempt to pass certification readiness testing (CRT) and certification testing without the added pressure of an acquisition deadline.
For example, by completing data collection for each of the phases of the 2010 EDS requirements prior to pursuing procurements for EDSs that meet those requirements, TSA could avoid additional delays to the acquisition schedule due to any data collection challenges. To help avoid these challenges in the future, TSA officials stated that they do not plan to pursue subsequent procurements of EDSs capable of meeting the more stringent explosives detection requirements until after the data collection for these explosives has been fully completed. We recognize that it is difficult in such situations to identify firm milestones. However, TSA has not documented its revised approach for conducting the needed data collection and related procurements sequentially rather than simultaneously. TSA does not yet have a documented strategy in place for deploying EDSs beyond July 2011; such a strategy would be valuable because TSA plans to complete the implementation of all of the requirements at an undetermined time after July 2011. Standard practices for program management state that the successful execution of any plan includes identifying in the planning process the schedule that establishes the timeline for delivering the plan. Documenting a plan to separate data collection efforts and certification from future procurements could help TSA ensure it avoids the challenges it has encountered during the current procurement. Officials from five of the six EDS vendors we interviewed expressed concerns about the extent to which TSA has communicated effectively with vendors interested in the current procurement. Specifically, these five vendors expressed concerns about the timeliness with which TSA responded to their questions regarding the current procurement, the manner in which TSA communicated important schedule changes, or both.
Standards for Internal Control in the Federal Government state that management should ensure there are adequate means of communicating with and obtaining information from external stakeholders that may have a significant impact on the agency achieving its goals. Additionally, the Federal Acquisition Regulation (FAR) encourages exchanges of information among all interested parties, from the earliest identification of a requirement through the receipt of the proposal. The FAR further states that the purpose of exchanging information is to improve the understanding of government requirements and industry capabilities, thereby allowing vendors to judge whether or how they can satisfy the government’s requirements. The improved understanding resulting from such information exchange also enhances the government’s ability to obtain quality supplies and services at reasonable prices and, among other things, potentially increases efficiency in vendors’ proposal preparations. However, five out of six vendors we interviewed said TSA often did not provide information or respond to their questions in a timely manner, if at all. For example, four out of these five vendors said TSA did not answer their questions in a timely manner, in one case taking several months to provide answers to questions posted via a question tracker accessible online to all interested vendors. Meanwhile, four of the five vendors’ officials stated that TSA did not respond at all to some of their questions, while officials from the fifth vendor stated they were frustrated with how long it took TSA to answer their questions. Officials from two vendors stated that the lack of timely communication regarding schedule changes for the EDS acquisition caused them to incur additional costs by allocating extra resources and time to meet the original deadline.
Specifically, officials from one vendor noted that they incurred additional personnel costs to aggressively pursue software development for the planned start of certification readiness testing (CRT), despite not having all of the full threat-weight explosives data TSA had intended to provide. Subsequently, these officials told us, TSA did not announce to vendors that CRT would be delayed until one week prior to the original deadline. EBSP officials stated that, because vendors had not yet received all of the full threat-weight explosives data, vendors should have been aware that CRT was not going to happen according to the established schedule. However, EBSP officials agreed that providing vendors with a revised schedule prior to the previously established deadline would have helped promote greater vendor understanding about the proposed changes to TSA’s acquisition strategy. TSA stated that it has taken a number of important steps to alleviate confusion and provide as much information to the vendors as possible. Among other things, at the start of the current procurement, TSA conducted three conferences with industry, called “industry days,” to provide a forum for sharing information with the vendor community regarding the current EDS acquisition. TSA also reported sharing multiple draft versions of the requirements documents and soliciting vendor comments. Additionally, TSA officials stated that they shared draft copies of the detection requirements and held individual classified meetings during the industry days with each interested vendor to obtain input regarding the acquisition. Finally, TSA stated that it also allowed vendors to use government-owned equipment and paid for engineering services associated with the testing to help offset vendor costs.
Although EBSP officials stated that they have made a concerted effort to be responsive to vendors’ questions and to call vendors directly when issues such as schedule changes arose, EBSP officials agreed that the agency did not always effectively communicate with vendors in a timely manner. Establishing a process for more timely communication with vendors competing for the current EDS procurement could help TSA to ensure that vendors have all of the information they need to meet TSA’s needs for new checked-baggage-screening equipment. TSA does not have an integrated master schedule (IMS) for the EBSP, and TSA’s schedule for the current EDS acquisition, which is only a part of the program, does not fully meet best practices for preparing an acquisition schedule. Additionally, while TSA completed an initial cost estimate for the EBSP, TSA officials reported that the current cost estimate does not reflect the anticipated costs of purchasing EDSs to meet the 2010 EDS requirements. To meet the explosives detection requirements established in January 2010, TSA plans to upgrade the detection software of a currently unknown number of the deployed EDSs and 260 of the EDSs to be purchased under the current acquisition after they are deployed to airports. However, TSA has not yet developed a plan or cost estimate for the planned upgrades. As part of EBSP’s responsibility to provide equipment to screen all checked baggage originating at U.S. commercial airports, it is acquiring and deploying explosives detection technology to replace aging systems and meet emerging threats. While TSA established the EBSP as a long-term program to procure, test, deploy, and maintain checked-baggage-screening equipment, TSA officials confirmed in December 2010 that there is currently no IMS for the EBSP.
Among other things, best practices and related federal guidance call for a program schedule to be programwide in scope, meaning that it should include the integrated breakdown of the work to be performed by both the government and its contractors over the expected life of the program. Without an IMS identifying long-term plans for the EBSP, it is difficult for TSA to have a comprehensive program view of the work that must be completed to deliver explosives detection technology to replace aging systems and meet emerging threats. Without such a view, a sound basis does not exist for knowing with any degree of confidence when and how the program will be completed. While there is no IMS for the EBSP, TSA has established a schedule for the current EDS acquisition. However, while the schedule identifies activities through the first contract award—scheduled for July 2011—of the current EDS procurement, our analysis shows that it does not identify activities planned for subsequent award windows. Additionally, TSA has encountered a number of challenges in implementing the schedule. For example, according to TSA officials, TSA originally planned to award the first EDS contract in December 2010 in order to procure machines required to meet Level C of the 2010 EDS requirements. However, TSA has since revised the schedule due to the challenges of collecting explosives data needed before development of EDSs can be completed and certification testing of the machines can begin. Based on the revised schedule, certification testing began in late 2010, according to TSA, so that the first EDS contract can be awarded in July 2011 to procure machines that meet one part of the Level C explosives detection requirements. Furthermore, while TSA has stated that it plans to procure and deploy 640 additional EDSs at an estimated cost of approximately $964 million during fiscal years 2012 through 2015, it is unclear when TSA plans for those machines to meet the remaining 2010 EDS requirements.
As of March 2011, TSA officials estimate that it will take a number of years to certify EDSs that meet all three levels—C, B, and A—of the 2010 requirements. However, the officials stated that they cannot fully develop these plans until they can evaluate the capability of the equipment to meet these requirements. This is expected to happen during the testing process associated with the current EDS procurement. TSA officials stated that they plan to deploy EDSs that meet the full set of Level C, B, and A requirements, but more precise planning, including establishing timelines, cannot occur until TSA better understands the potential for the EDS equipment to meet those requirements. However, best practices state that a comprehensive schedule should at least reflect all activities planned for a project even though some activities may be tentative and there may be uncertainties in schedule estimates due to, among other things, limited data. In addition to the challenges TSA has encountered in carrying out the schedule as originally planned, based on our analysis, the current schedule leading up to the first contract award is not reliable. Best practices state that the success of a large-scale system acquisition, such as the current EDS acquisition, depends in part on having a reliable schedule that identifies when the program’s set of work activities and milestone events will occur, how long they will take, and how they are related to one another. Best practices also call for the schedule to expressly identify and define the relationships and dependencies among work elements and the constraints affecting the start and completion of work elements. Additionally, best practices indicate that a well-defined schedule also helps to identify the amount of human capital and fiscal resources that are needed to execute an acquisition.
However, based on our assessment of both the original as well as an updated version of the schedule, TSA’s schedule for the current EDS acquisition does not fully comply with nine best practices for preparing a schedule as shown in table 1. Appendix III has additional information about GAO’s assessment of the extent to which TSA’s schedule meets each best practice. Although TSA’s schedule does not fully comply with any of the nine best practices, TSA has taken action to partially or minimally meet eight of the best practices. For example, consistent with best practice 4, the schedule establishes the duration of all activities and properly reflects how long each activity should take. However, while the schedule establishes the duration of all activities, 61 percent of activities represented in the schedule are based on a 7-day calendar that does not account for holidays. Similarly, our analysis found that, consistent with best practice 5, the schedule is vertically integrated; however, issues with sequencing logic in the schedule prevent it from being fully horizontally integrated. Vertical and horizontal integration ensures that products and outcomes associated with other sequenced activities are arranged in the right order and that dates for supporting tasks and subtasks are aligned. Other areas of the schedule that remain unaddressed also reflect weaknesses that limit its usefulness as a program management tool. For example, the schedule does not fully identify the resources needed to do the work or the availability of these resources. Specifically, the schedule does not reflect what labor, material, and overhead are needed to complete key activities for the program. Resource information would assist the program office in forecasting the likelihood of activities being completed based on their projected end dates. 
If the current schedule does not allow for insight into current or projected over-allocation of resources, then the risk of the program slipping is significantly increased. Additionally, TSA officials did not complete a schedule risk analysis when developing the schedule. A schedule risk analysis may be used to determine the level of uncertainty and to help identify and mitigate the associated risks. In the absence of a schedule risk analysis, the acquisition faces the risk of delays to the scheduled completion date if any delays were to occur on critical path activities. Furthermore, without this information, TSA is limited in its ability to answer questions such as how likely it is to complete the project on time and which risks are most likely to delay the project. Similarly, without a valid critical path, EBSP management lacks a clear picture of the tasks that must be performed to achieve the acquisition’s target completion date. While TSA officials noted that they had no staff or expertise to complete a schedule risk analysis, TSA provided no explanation as to why a schedule consistent with the other eight best practices had not been developed. TSA officials stated that the EDS acquisition is one of the largest acquisition programs in DHS. However, the absence of a reliable schedule makes it difficult for management to predict with any degree of confidence whether the estimated completion date for the acquisition is realistic. Furthermore, without the development of a schedule that meets scheduling best practices, TSA is limited in its ability to monitor and oversee the progress of the billions of dollars being invested in the procurement of new EDSs. The EBSP does not yet have an up-to-date approved life-cycle cost estimate in place, and as a result, DHS has no reliable basis for understanding how much the program will cost. 
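A schedule risk analysis of the kind TSA did not perform is commonly done by Monte Carlo simulation over three-point (minimum, most likely, maximum) duration estimates for each activity. The sketch below illustrates the technique only; the activity names and all durations are hypothetical, not TSA data, and real analyses would also model network logic and correlations rather than a simple sequence.

```python
import random

# Hypothetical three-point duration estimates (min, most likely, max), in
# weeks, for a simplified sequence of acquisition activities. Illustrative
# figures only -- not drawn from TSA's schedule.
activities = {
    "collect explosives data": (8, 12, 20),
    "vendor software development": (10, 16, 26),
    "certification testing": (6, 9, 15),
    "contract award": (2, 3, 5),
}

def simulate_completion(trials=10_000, seed=1):
    """Sample total program duration, assuming activities run in sequence."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Triangular distribution takes (low, high, mode).
        total = sum(rng.triangular(lo, hi, mode)
                    for lo, mode, hi in activities.values())
        totals.append(total)
    totals.sort()
    return totals

totals = simulate_completion()
p50 = totals[len(totals) // 2]        # median finish date
p80 = totals[int(len(totals) * 0.8)]  # 80th-percentile (schedule reserve)
print(f"median: {p50:.1f} weeks, 80th percentile: {p80:.1f} weeks")
```

The gap between the median and the 80th-percentile finish is one way such an analysis quantifies how much schedule reserve is needed to absorb delays on critical-path activities.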
While TSA reported that it had completed a life-cycle cost estimate (LCCE) for the EBSP in May 2010, program officials reported in February 2011 that the estimate is currently being revised to reflect assumptions related to the current EDS acquisition. Specifically, officials indicated that the May 2010 LCCE did not include the anticipated costs for purchasing any EDSs that meet the revised 2010 requirements. TSA officials stated that they are working to revise the LCCE to reflect the anticipated costs of the current EDS acquisition and expected to complete the revised LCCE by the end of April 2011. Additionally, after conducting a review of the LCCE that was completed in May 2010, DHS’s Cost Analysis Division (CAD) found that the LCCE needed more comprehensive data and that its accuracy could not be determined. As a result, the DHS Acquisition Review Board directed the CAD to develop an appropriate cost estimate, including a reconciliation with the EBSP’s LCCE. In January 2011, officials in DHS’s CAD stated that they had initiated work on the independent cost estimate for the EBSP but were only able to complete the portion of the estimate related to current detection capabilities in the Level C requirements for one tier. Officials stated that the lack of detail in program requirements for some of Level C and all of Levels B and A limited their ability to develop an estimate that would be usable for budgetary purposes. CAD officials further noted that significant portions of the total EBSP program have yet to be defined and estimated. During the course of our review, the anticipated completion date of the revised LCCE slipped multiple times; most recently, the estimate was expected at the end of April 2011. As a result, we were unable to evaluate TSA’s approach to developing the cost estimates for the program. We reported in June 2010 that inaccurate or incomplete cost estimates were often a factor in cost growth for DHS programs we previously reviewed.
We also reported that initial cost estimates for most DHS programs were often developed after the start of acquisition activities, so they do not capture earlier cost changes. Further, our best practices for cost estimation state that estimates are integral to determining and communicating a realistic view of likely cost and schedule outcomes that can be used to support a program including planning the work necessary to develop, produce, and install equipment. However, because TSA had not established a cost estimate that accurately reflects the anticipated costs of the acquisition prior to initiating the current EDS procurement, it is unclear how DHS could determine if the budget for the EBSP is reasonable. Furthermore, in the absence of an approved cost estimate and baseline financial information for the current EDS acquisition, including the costs of purchasing machines that meet the 2010 EDS requirements, TSA has limited information to make essential cost-informed program decisions. Although we were unable to evaluate TSA’s cost estimates for the program, the fact that TSA’s schedule for the EDS acquisition does not meet best practices for schedule estimating also raises questions about the credibility of the program’s LCCE. For example, the absence of a schedule risk analysis would have made it difficult for officials to account for the cost effects of schedule slippage when developing the LCCE. Best practices for cost estimation state that because some program costs such as labor, supervision, rented equipment, and facilities cost more if the program takes longer, a reliable schedule can contribute to an understanding of the cost impact if the program does not finish on time. The program’s success depends on the quality of its schedule and an integrated schedule is key to managing program performance and is necessary for determining what work remains and the expected cost to complete the work. 
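The cost-estimating best practice cited above is, at bottom, simple arithmetic: time-dependent costs keep accruing for every month a program runs past its planned finish. A minimal sketch of that calculation, using entirely hypothetical monthly burn rates (none of these dollar figures are TSA data) and the 7-month slip reported for the current EDS acquisition:

```python
# Illustrative monthly time-dependent costs that continue to accrue while
# a program is delayed. All dollar figures are hypothetical assumptions,
# not TSA or EBSP data.
monthly_costs = {
    "program office labor": 1_200_000,
    "contractor supervision": 400_000,
    "test facility rental": 250_000,
}

def cost_of_delay(months_late: int) -> int:
    """Time-dependent cost added by a schedule slip of `months_late` months."""
    return months_late * sum(monthly_costs.values())

# Cost impact of a 7-month slip at these assumed burn rates.
print(f"${cost_of_delay(7):,}")
```

Even at modest assumed burn rates, a multi-month slip adds millions in time-dependent costs, which is why a reliable schedule underpins a credible LCCE.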
In a memo from the DHS Under Secretary for Management dated July 10, 2008, DHS endorsed the use of best practices that we identified and stated that DHS would be utilizing them as a “best practices” approach in the future. However, in the absence of a reliable schedule to guide cost estimates, having a current cost estimate that reflects anticipated costs for the EDS acquisition, or submitting the revised LCCE to DHS for departmental approval, it is unclear how TSA utilized a best practices approach in developing cost estimates for the program. TSA officials stated that they expect to upgrade an unknown number of the current fleet of 2,297 EDSs and 260 of the EDSs to be purchased under the current acquisition after they are deployed to airports to fully meet all phases of the 2010 requirements. However, similar to when TSA revised the EDS explosives detection requirements in 2005, it has no plan in place outlining how it will approach these upgrades. Specifically, TSA has not established an upgrade plan or conducted an analysis to determine what type of approach to upgrading deployed EDSs is likely to be most feasible, efficient, or effective. TSA officials stated that there are too many unknowns at this time regarding potential approaches to upgrading the fleet of EDSs. Standards for program management require that specific desired outcomes or results be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate steps and time frames needed to achieve those results. Until TSA develops a plan identifying how it will approach the upgrades for currently deployed EDSs—and the plan includes such items as estimated costs, the number of machines that can be upgraded, and the number of times a given machine must be upgraded to meet the 2010 EDS requirements—it will be difficult for TSA to provide reasonable assurance that its upgrade approach is feasible or cost-effective. 
TSA’s 2010 acquisition strategy identifies the planned purchase of 900 EDSs over the next 5 years, but it does not indicate what level of requirements those EDSs will be required to meet. For the currently deployed equipment that will not be replaced, TSA would need to upgrade the equipment to meet the 2010 requirements. TSA officials stated that they may not upgrade all of the current fleet of EDSs to the 2010 EDS requirements because in some cases, certain models of the EDS may not be upgradeable and in other cases, it may ultimately be more cost effective to replace older EDSs with new machines. According to TSA, upgrading EDSs will require an assessment of currently deployed EDSs’ detection capabilities, and the results of that assessment will affect the EDS program’s schedule, budget, and detection goals. TSA was working with a consulting firm to modify a computer model that will be used to project the costs of the upgrades if TSA were to use a time-phased installation for the upgrades. However, although TSA officials were working with a consulting firm, they had not yet established a plan for how they will approach the upgrades. TSA officials further stated that the number of upgrades TSA performs on currently deployed equipment will depend on the cost of the upgrades, the level of complexity of the upgrades, and whether the upgrades can be conducted in the airports or must be performed in the factory. TSA’s approach to deploy EDSs that meet the 2010 requirements could result in the same EDSs being upgraded multiple times in order to first meet all of the Level C requirements and then meet the Levels B and A explosives detection requirements. 
For example, TSA’s decision to revise its acquisition strategy and deploy EDSs that meet the Level C requirements in a phased approach could result in upgrading the same currently deployed machines twice before they may have to be upgraded a third time to meet Level B requirements and then upgraded a fourth time to meet Level A requirements. Moreover, based on TSA’s schedule for the current EDS acquisition, by the time some or all of the 260 new EDSs under the current EDS acquisition have been deployed in airports, TSA may have approved a subsequent tier of the EDS explosives detection requirements, which could involve upgrading the machines again or replacing these newly purchased and deployed machines because they cannot meet the subsequent tier of explosives detection requirements. Therefore, TSA may procure and deploy 260 EDSs that will only be used in airports for a short period of time before they will need to be upgraded, possibly multiple times, or replaced with new machines. TSA officials told us that they will evaluate the need to upgrade EDSs to a subsequent tier at the time those requirements are finalized. TSA officials stated that they initially delayed the analysis of the upgrade approach until the 2010 EDS explosives detection requirements were approved, an approval that occurred in January 2010. TSA officials subsequently stated that their plan to upgrade deployed EDSs is included in the recapitalization strategy due to be completed at the end of May 2011. According to TSA, vendors that have previously sold EDSs to TSA are to be asked to also include proposals to upgrade their currently deployed machines when submitting proposals for the current EDS procurement. Specifically, vendors are to be asked to include a plan for upgrading their currently deployed EDS equipment as well as cost estimates for the upgrades. TSA plans to then analyze the feasibility and costs of the vendors’ proposals. 
However, TSA officials stated that the equipment upgrades may or may not be implemented as part of the contract award and that TSA has discretion regarding which aspects of the contracts to implement. According to TSA, the total number of EDSs to be upgraded and the associated costs will not be known until the agency receives proposed upgrade plans and cost estimates from EDS vendors in summer 2011. According to TSA officials, any upgrades are not to occur until calendar year 2012 at the earliest and will depend on available funding and complexity of the upgrades. TSA officials as well as officials from three of six current EDS vendors told us that they are confident that currently-deployed EDSs can be upgraded to meet Level C requirements. Specifically, TSA officials stated that the EDS vendors can rewrite the detection software to provide the capability to detect the 2010 EDS requirements. Minor hardware changes, such as new computer chips, are also expected to be made as part of these upgrades according to the TSA officials. The officials stated that, after they approve the software and hardware upgrades, EDS vendors will install the upgrades on the machines in the airports. Officials from three EDS vendors noted that they believe upgrades to the EDSs can be made in the airports when regularly-scheduled routine maintenance work is conducted. Once deployed EDSs have been upgraded to fully meet the Level C requirements, TSA will have to make decisions about how to ensure these machines can meet subsequent phases of the 2010 EDS requirements (Levels B and A). Officials from all six EDS vendors stated that given the absence of additional data on the explosives that will be included in subsequent phases of the 2010 EDS requirements, it is difficult to know precisely what must be done to upgrade newly-purchased equipment. 
Therefore, none of the officials from the six vendors could provide estimates for the cost to upgrade EDSs to meet all of the requirements for one tier. However, officials from two vendors estimated the cost to upgrade new EDSs to meet Level C requirements at $50,000 to $150,000 per machine. An official from one vendor stated that the CT technology currently used in EDSs might not be sufficient to detect Level A requirements and that an as yet undeveloped technology may be needed. The official noted that this could result in substantially higher costs to upgrade the current fleet of machines to Level A requirements than it would cost to upgrade machines to Level B requirements. Similarly, officials from two other vendors stated that meeting Level A requirements may require either new technology or a combination of current technologies instead of only using an EDS. Although TSA and vendor officials expressed confidence that deployed EDSs can be upgraded, TSA officials also confirmed that the agency has never previously upgraded the detection software of deployed EDS or ETD machines to meet revised explosives detection requirements. Additionally, even though TSA has estimated that it will take a number of years to certify new EDSs to fully meet Levels B and A of the 2010 requirements, TSA has not yet developed similar time frames to upgrade deployed equipment. Given the number of unknowns associated with upgrading EDSs, it is unclear how long it will take the agency to upgrade deployed EDSs to meet Levels C, B, and A of the 2010 requirements. Furthermore, TSA has identified the EDS upgrade effort as a high program risk. Consequently, TSA and vendor officials’ confidence that it will be feasible and cost effective to upgrade deployed machines at airports may be unwarranted as it has not been based on experience, supported by analysis, or a documented plan. 
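A rough sense of scale can be drawn from the per-machine figures quoted above. The sketch below is purely illustrative arithmetic: it applies the two vendors' estimated range for upgrading new EDSs to Level C ($50,000 to $150,000 per machine) to the full deployed fleet of 2,297 machines, and it assumes, contrary to TSA's own caution, that every deployed machine is upgradeable at that price.

```python
# Back-of-envelope fleet-wide range for Level C upgrades, using figures from this
# report. Assumes (hypothetically) that all 2,297 deployed machines are upgradeable
# at the per-machine cost two vendors quoted for upgrading new EDSs; the report
# notes that some deployed models may not be upgradeable at all.
FLEET_SIZE = 2_297
PER_MACHINE_LOW, PER_MACHINE_HIGH = 50_000, 150_000

low_total = FLEET_SIZE * PER_MACHINE_LOW     # $114,850,000
high_total = FLEET_SIZE * PER_MACHINE_HIGH   # $344,550,000
print(f"fleet-wide Level C upgrade range: ${low_total:,} to ${high_total:,}")
```

Even under these generous assumptions the range spans roughly a factor of three, which underscores why a documented upgrade plan with machine counts and cost estimates matters for judging feasibility.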
TSA faces a complex task in its efforts to address explosives threats in its current and future procurements and existing fleet of checked-baggage-screening systems. The complexity of this task is amplified when taking into account the large volume of checked baggage that TSA must screen for explosives without disrupting commerce. TSA’s plan to procure and deploy EDSs that meet the 2010 requirements in a phased approach that spans a number of years is aimed at allowing more time to collect necessary explosives data, test key technologies, and provide a means for TSA to continue to purchase EDSs to meet its needs for new checked-baggage-screening equipment at the nation’s commercial airports. However, TSA officials recognized that if TSA deploys EDSs capable of detecting all explosives included in the 2010 EDS requirements, TSA must ensure that ETD machines are capable of detecting all of the explosives that EDSs will be able to detect to minimize any potential screening difference between the EDS and ETD. Without a plan to help ensure that additional screening devices or protocols are in place to resolve EDS alarms if EDSs are deployed that detect a broader set of explosives than existing ETD machines used to resolve EDS screening alarms, it will be difficult for TSA to provide reasonable assurance that a potential capability gap has been resolved. By separating the effort to collect data needed to meet the 2010 EDS requirements from the related competitive procurement, TSA would have more time to identify the physical and chemical properties of the explosives, collect full threat weight data, provide vendors with the time needed to develop detection software, and attempt to pass CRT and certification testing without the added pressure of an acquisition deadline. TSA also faces additional challenges related to the agency’s plans for implementing the current EDS procurement. 
For example, the lack of timely communication with vendors may impact vendors’ abilities to ensure they can meet TSA’s needs for the current EDS acquisition. By establishing a process to communicate with vendors in a timely manner, TSA could help ensure that vendors have the information necessary to meet TSA’s needs for new checked-baggage-screening equipment. Moreover, by addressing challenges related to planning for the acquisition, TSA may be able to better avoid further delays and potential cost overruns for the current procurement. Specifically, completing a reliable IMS that fully meets the nine best practices could help DHS and TSA management to predict whether the estimated acquisition completion date is realistic and manage program performance. Once a reliable schedule is in place, TSA can in turn revise current cost estimates for the program to better reflect actual acquisition costs including, for example, the potential cost impacts resulting from schedule slippage to give program decision-makers a more accurate and comprehensive view of current and projected program costs. As TSA plans to deploy EDSs that meet the 2010 requirements, it is critical that TSA plan its approach to ensure that all airports with EDS equipment are capable of detecting the required explosives. Because TSA has not yet upgraded most of the deployed EDSs to meet certain requirements, many EDSs are only capable of detecting certain explosives. Moreover, TSA is operating some number of the currently deployed EDSs below the capability needed to detect the explosives identified in the 2005 requirements; activating the detection software and operationally testing the machines to detect the explosives in the 2005 requirements would help address this issue. 
As part of TSA’s phased approach to meet the 2010 EDS requirements, TSA may have to upgrade many of its currently deployed EDSs and hundreds of newly purchased EDSs over a period of years, upgrades that may require significant investments in new technologies to help meet more stringent explosives detection requirements. However, until TSA develops a plan identifying how it will approach the upgrades for currently deployed EDSs—and the plan includes such items as estimated costs, the number of machines that can be upgraded, time frames for upgrading them, and the number of times a given machine must be upgraded to meet the 2010 EDS requirements—it will be difficult for TSA to provide reasonable assurance that its upgrade approach is feasible or cost-effective. To help ensure that TSA takes a comprehensive and cost-effective approach to the procurement and deployment of EDSs that meet the 2010 EDS requirements and any subsequent revisions, we recommend that the Assistant Secretary for TSA take the following six actions:

- Develop a plan to ensure that screening devices or protocols are in place to resolve EDS alarms if EDSs are deployed that detect a broader set of explosives than existing ETD machines used to resolve EDS screening alarms.
- Develop a plan to ensure that TSA has the explosives data needed for each of the planned phases of the 2010 EDS requirements before starting the procurement process for new EDSs or upgrades included in each applicable phase.
- Establish a process to communicate information to EDS vendors in a timely manner regarding TSA’s EDS acquisition, including information such as changes to the schedule.
- Develop and maintain an integrated master schedule for the entire Electronic Baggage Screening Program in accordance with the nine best practices identified by GAO for preparing a schedule. 
- Ensure that key elements of the program’s final cost estimate reflect critical issues, such as the potential cost impacts resulting from schedule slippage identified once an integrated master schedule for the Electronic Baggage Screening Program has been developed in accordance with the nine best practices identified by GAO for preparing a schedule.
- Develop a plan to deploy EDSs that meet the most recent EDS explosives-detection requirements and ensure that new machines, as well as machines deployed in airports, will be operated at the levels established in those requirements. This plan should include the estimated costs for new machines and upgrading deployed machines, and the time frames for procuring and deploying new machines and upgrading deployed machines.

We provided a draft of this report to DHS on June 23, 2011, for review and comment. On July 6, 2011, DHS provided written comments, which are presented in appendix IV. We also provided relevant excerpts of our draft report to DOD and the Department of Energy for review and comment. In commenting on our report, DHS stated that it agreed with our six recommendations and identified actions planned or under way to implement them. DOD provided written technical comments and the Department of Energy provided technical comments in an e-mail. Both stated that the draft report excerpts related to their respective agencies contained accurate information. Overall, DHS stated that, because of the urgent need to meet ongoing requirements, TSA began addressing many of the issues identified by this audit while the audit was being conducted. However, as DHS noted in its letter, TSA still needs to complete many actions to resolve the issues identified in this report. Additionally, TSA stated that it suspended the implementation of the 2005 requirements because of the computer modeling effort known as “Project Newton” and then issued the 2010 detection standards when Project Newton did not yield timely results. 
Thus, in its comments, TSA confirmed that it is using some number of EDSs that meet requirements established in 1998 by the FAA, as we reported, an approach that raises questions about how well some of its deployed equipment detects current explosives threats. In addition, DHS stated that TSA is currently taking steps to collect the operational data necessary to support the upgrade of deployed equipment based on 2010 detection standards and that the operational data-collection effort is to be completed in 2011. However, as discussed in the report, this could be a difficult endeavor as TSA is still in the process of conducting operational testing to determine the staffing implications of operating EDSs that meet 2005 explosives detection requirements. Therefore, it will have taken TSA 6 years from the time that the 2005 EDS explosives detection requirements were issued until this operational testing is to be completed. Furthermore, if the results of the operational testing show that operating EDS machines, to meet the 2005 requirements, will require additional TSA staff and/or slow down the rate of checked-baggage screening, TSA may have to make difficult decisions and trade-offs that could affect aviation security and commerce, and also affect the schedule for meeting the 2010 requirements. DHS concurred with our first recommendation to develop a plan to ensure that screening devices or protocols are in place to resolve EDS alarms if EDSs are deployed that detect a broader set of explosives than existing ETD machines used to resolve EDS screening alarms. DHS stated that TSA convened a working group to assess capability gaps for secondary screening technology, evaluate current technology capabilities against the capabilities of future EDSs, and prepare a plan to procure any additional technology required to ensure alarms can be resolved. DHS expects this plan to be finalized by the fourth quarter of fiscal year 2012. 
While these actions and planned actions represent positive steps, to fully implement the recommendation, TSA should develop a plan to ensure that screening devices or protocols are in place to resolve EDS alarms if EDSs are deployed that detect a broader set of explosives than existing ETD machines used to resolve EDS screening alarms. DHS concurred with our second recommendation to develop a plan to ensure that TSA has the data needed for each of the planned phases of the 2010 EDS requirements before starting the procurement process for new EDSs or upgrades included in each applicable phase. DHS commented that TSA modified its strategy for the EDS competitive procurement in July 2010 in response to the challenges in working with the explosives for data collection and alerted the vendor community on September 3, 2010. DHS stated that the new baseline schedule removed data collection from the acquisition process. Additionally, DHS stated that TSA is working with DHS S&T to establish a laboratory by summer 2011 to support further data collection and independent test and evaluation. Although these actions respond in part to the intent of our recommendation, separating data collection from the acquisition process does not necessarily ensure that the needed data will be available before starting the procurement process for new EDSs or for upgrading currently deployed EDSs. Consequently, we continue to believe that, to fully address our recommendation, a plan is needed to establish a process for ensuring that data are available before starting the procurement process for new EDSs or upgrades for each applicable phase. Developing and following such a plan would assist TSA in implementing the acquisition and making upgrades in an efficient and effective manner and would benefit DHS in its oversight role of TSA by allowing DHS to determine progress against the plan. 
DHS concurred with our third recommendation to establish a process to communicate information to EDS vendors in a timely manner regarding TSA’s EDS acquisition, including information such as changes to the schedule. In the letter, DHS stated that TSA has a process for communicating information to the vendor community and will continue to follow this process in adherence with guidelines outlined in the Federal Acquisition Regulation. DHS also stated that TSA significantly changed the business model to procure checked-baggage-screening equipment from what has historically been a sole-source environment to a competitive environment, resulting in significant improvements in communication with industry. In addition, according to DHS, TSA has already made a number of efforts to improve the quality and frequency of communication with industry, but TSA recognizes the complexity associated with many of the acquisitions currently ongoing. As such, TSA acknowledged that there are opportunities to continue to improve communication with the vendor community and will take steps to ensure that vendors are provided with the most current information possible in an efficient manner. Since the agency did not provide us with evidence of how it plans to ensure more timely and effective communications with vendors in the future, we continue to believe that such a process is needed to ensure that TSA officials are aware of the specific guidelines to follow to communicate with vendors about current and future acquisitions. Our meetings with vendors indicated that TSA’s communications with them continue to leave room for improvement. DHS concurred with our fourth recommendation to develop and maintain an IMS for the entire EBSP in accordance with the nine best practices identified by GAO for preparing a schedule. 
DHS commented that TSA has already begun working with key stakeholders to develop and define requirements for an IMS and to ensure that the schedule aligns with the best practices outlined by GAO. DHS stated that this effort is expected to be completed by the second quarter of fiscal year 2012. In addition, DHS stated that, as the program matures and increases its focus on flexible and upgradeable technology, an IMS will ensure close coordination among the program’s procurement, deployment, recapitalization, and upgrade capabilities, and that the EBSP IMS will be updated as a result of these efforts to be in accordance with the nine best practices. While these actions and planned actions are steps toward implementing our recommendation, to fully implement the recommendation, TSA needs to develop and maintain an IMS for the entire EBSP in accordance with the nine best practices identified by GAO for preparing a schedule. DHS concurred with our fifth recommendation to ensure that key elements of the program’s final cost estimate reflect critical issues, such as the potential cost impacts resulting from schedule slippage. Such a slippage might be identified once an IMS for the EBSP has been developed in accordance with the nine best practices identified by GAO for preparing a schedule. DHS stated that TSA is working to update the EBSP LCCE to incorporate cost estimates associated with enhanced detection, work that should be completed in the fourth quarter of fiscal year 2011. DHS also stated that, per the recommendations of GAO and DHS, TSA is developing a master schedule to document timelines associated with various projects. DHS further stated that risks to the costs and schedules will be analyzed and that the risk analysis will produce confidence intervals for the life-cycle costs to the program. 
Although TSA discussed activities to address the EBSP LCCE, to fully implement this recommendation, it will be important that key elements of the program’s final cost estimate reflect critical issues, such as the potential cost impacts resulting from schedule slippage identified once an IMS for the EBSP has been developed in accordance with the nine best practices. DHS concurred with our sixth recommendation to develop a plan to deploy EDSs that meet the most recent EDS explosives detection requirements and ensure that new machines, as well as machines deployed in airports, will be operated at the levels established in those requirements. This plan should include the estimated costs for new machines and upgrading deployed machines, and the time frames for procuring and deploying new machines and upgrading deployed machines. DHS commented that TSA has a plan in place to evaluate and implement the most recent certified algorithms on the existing fleet of deployed EDSs, assuming the evaluation results in minimal to no operational impact. In contrast, our recommendation calls for a plan to deploy new EDSs as well as to upgrade existing EDSs in airports to meet the 2010 EDS explosives detection requirements and, importantly, ensure that new machines will be operated at the levels established in those requirements. As we discussed in the report, some number of the EDSs in airports are operating at a level that meets the 2005 explosives detection requirements. Our recommendation is intended to ensure that TSA operates all EDSs in airports to meet the most recent requirements, which are currently the 2010 requirements. Consequently, we continue to believe that a plan is needed describing the approach that TSA will use to deploy EDSs that meet the most recent EDS explosives detection requirements and ensure that new machines, as well as machines deployed in airports, will be operated at the levels established in those requirements. 
TSA also provided written technical comments, which we incorporated in the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Assistant Secretary of the Transportation Security Administration, and appropriate congressional committees. This report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8777 or LordS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report discusses: (1) the extent to which TSA has revised explosives detection requirements and deployed EDSs and ETDs to meet these revised requirements; (2) any challenges that TSA and the Department of Homeland Security’s (DHS) Science and Technology Directorate (S&T) have experienced in implementing the EDS acquisition; and (3) the extent to which TSA’s approach to its EDS acquisition meets best practices for schedule and cost estimates, and includes plans for potential upgrades to deployed EDSs. To determine the extent to which TSA has revised explosives detection requirements for checked baggage screening, we reviewed TSA’s EDS explosives detection requirements for checked baggage screening and assessed the extent to which the 2010 detection requirements differed from the 2005 detection requirements for EDSs. We compared the specific explosives in the 1998, 2005, and 2010 detection requirements to identify commercial and homemade variants of the explosives. 
We also identified, analyzed, and discussed with TSA and S&T officials the differences between the tiers and multiple levels of explosives detection requirements in the 2010 EDS explosives detection requirements. We discussed with TSA officials the 2002 explosives detection requirements for the ETD and reviewed the 2006 explosives detection requirements for the ETD. We also compared the 2010 EDS requirements with the 2006 ETD requirements and discussed with TSA and S&T officials the differences between the explosives detection requirements for the EDS and ETD. We discussed TSA’s Standard Operating Procedures for resolving EDS and ETD alarms with TSA officials. Finally, we visited three of the Department of Energy’s national laboratories, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories, to determine the status of Project Newton. (For more on Project Newton, see app. II.) To identify any challenges that TSA has experienced implementing the EDS acquisition, we reviewed documentation from TSA’s Electronic Baggage Screening Program (EBSP), the program responsible for the operational testing, procurement, deployment, and maintenance of checked-baggage-screening technologies. Among other things, we reviewed available program documentation on the status of its EDS acquisition, including EBSP strategic plans from previous years as well as the most recent EBSP strategy approved in July 2010. We also reviewed documentation from the program’s first Acquisition Review Board review, the EBSP risk management plan, the most recent procurement specifications for the EDS, information posted by EBSP for interested vendors on FedBizOpps.gov, and DHS acquisition guidance and directives. 
We also interviewed EBSP program officials, including the EBSP program manager, regarding the program’s approach to the current EDS acquisition and received updates on revisions to the program’s EDS acquisition strategy and timelines for the current procurement. To further understand the challenges TSA and S&T face in preparing for new EDSs to meet revised detection requirements, we reviewed documentation provided by TSA outlining the agency’s plan for deploying EDSs that meet the 2010 requirements as well as documentation regarding S&T’s approach to testing and certification carried out at the Transportation Security Laboratory (TSL). We also conducted interviews with TSA and S&T officials and conducted site visits to the TSL in Atlantic City, New Jersey, and to the Air Force Research Lab (AFRL) facility at Tyndall Air Force Base (AFB), Florida, to obtain information on efforts to certify EDSs that meet the 2010 requirements. We visited the TSL because that is where S&T tests and evaluates transportation technologies including checked baggage screening technologies. We visited AFRL because they are assisting TSL in their efforts to collect data regarding the physical and chemical properties of explosives included in the 2010 EDS requirements in preparation to certify EDSs for the current procurement. Additionally, we conducted site visits and/or telephone interviews with all six EDS vendors competing in the first phase of the current EDS procurement. These vendors were able to provide us with an understanding of their companies’ views regarding TSA’s approach to the current procurement as well as potential challenges they believe vendors face in preparing to compete for the current EDS procurement. 
While information we obtained from these interviews may not be generalized across the industry as a whole, we were able to obtain the perspectives of all companies planning to compete for the current EDS procurement, and they were able to provide an understanding of their companies’ abilities to develop EDSs that meet the 2010 requirements. We also reviewed TSA documentation to identify the explosives detection technologies that are used for checked baggage screening. Additionally, we interviewed TSA and S&T officials to identify the number of currently-deployed explosives detection machines that meet the previous and most recent detection requirements, and found the data for the number of machines to be sufficiently reliable. To determine the extent to which TSA’s approach to its EDS acquisition meets best practices for schedule and cost estimates and includes plans for potential upgrades to deployed EDSs, we determined the extent to which TSA had established an integrated master schedule (IMS) for the EBSP, and due to the lack of an IMS, assessed the EDS acquisition schedule against nine best practices in our Cost Estimating and Assessment Guide. We conducted this assessment to determine the extent to which the schedule reflects key estimating practices that are fundamental to having and maintaining a reliable schedule. In doing so, we independently assessed the schedule for the current EDS acquisition and its underlying activities against our nine best practices, as provided to us in July 2010. We subsequently interviewed cognizant program officials to discuss their use of best practices in creating the schedule and to discuss the findings resulting from our review of the schedule. 
After TSA revised the schedule to reflect changes in some of the timelines and provided it to us in October 2010, we reviewed the updated schedule and compared it to information in the original schedule in order to understand how the new schedule was constructed and to determine to what extent TSA had resolved weaknesses that we identified in its original schedule. We also assessed the schedule against relevant best practices in our Cost Estimating and Assessment Guide to determine the extent to which it reflects key estimating practices that are fundamental to having a reliable schedule. We compared TSA’s efforts with internal control standards and recommended practices we previously identified for sound acquisition planning. To further evaluate TSA’s planning for the current EDS acquisition, we also interviewed TSA, S&T, and EDS vendors’ officials to identify any challenges involved and expected in upgrading EDS detection capabilities, and TSA’s plans to upgrade equipment to meet future implementations of the 2010 EDS requirements. Also, during our site visits and telephone interviews with the six vendors, as discussed previously, vendors provided their perspectives on TSA’s approach to upgrade currently deployed EDSs as well as those to be deployed in the future. We also obtained from the six vendors their perspectives on how upgrades to deployed EDSs might be accomplished and potential costs involved in performing the upgrades. We conducted this performance audit from September 2009 through May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
During the course of our review, we revised the engagement objectives and scope to facilitate a broader examination of TSA’s efforts to revise its explosives detection requirements and related schedule for the EDS acquisition, which increased the time for completing this audit. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and Transportation Security Administration (TSA) began a project in 2007, known as Project Newton, to identify the minimum mass of explosives that could cause catastrophic damage to an aircraft. Through 2009, S&T and TSA had invested approximately $12.5 million in Project Newton modeling activities, according to a senior TSA official. A different senior TSA official stated that TSA allocated an additional $2.5 million to $3.1 million for Project Newton as of August 2010: $1.0 million to $1.6 million for incremental development of computer models and $1.5 million to develop a plan to validate the models. As part of the effort to understand the effects of explosives detonations on aircraft, S&T and TSA have been working to simulate the complex dynamics of explosive blast effects on an in-flight aircraft by using computer models at three Department of Energy national laboratories—Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories. According to TSA officials, the current understanding of the effects of explosives on aircraft has been largely based on data from live-fire explosives tests conducted with retired aircraft hulls at ground level. These officials stated that, compared to running a computer simulation, live-fire tests can be more expensive, which limits the number of live-fire tests conducted and, therefore, the amount of data available for analysis. 
S&T, TSA, and national laboratory officials stated that computer modeling can cost-effectively simulate the effects of explosives detonations in various locations of different types of aircraft at ground level, which can provide significant data for analysis. In January 2010, TSA revised the explosives detection requirements for the explosives detection system (EDS) and established tiers of explosives that are required to be detected. One tier of requirements is to be implemented over a number of years. TSA plans to incorporate the computer-modeling results into the requirements for a subsequent tier. Although TSA expected that the computer-modeling results would be used to revise EDS explosives detection requirements as early as 2012, as of December 2010, TSA officials were uncertain when the computer-modeling results will be used for this purpose because the computer models had not been validated. In 2009, TSA established a Blue Ribbon Panel to, among other things, assess the three national laboratories’ computer models and their results and comment on whether they were valid to be used to revise explosives detection requirements. The panel members included DHS and TSA officials as well as officials from academia and the private sector. In March 2010, the Blue Ribbon Panel recommended that, among other things, before the computer-modeling results are used to revise EDS explosives detection requirements, the computer models and their results should be validated, according to a senior TSA official. This official stated that the panel also recommended specific locations to add to the computer models, so that the models can simulate the effects of explosives detonations in those additional locations on the aircraft. Validating the computer models and their results is essential before relying on them to revise explosives detection requirements. 
A senior TSA official stated that it will take a number of months to validate the computer models and that validation is expected to be completed later in 2011. In determining the extent to which the Transportation Security Administration’s (TSA’s) Electronic Baggage Screening Program (EBSP) schedule meets established best practices, we identified that TSA did not have an integrated master schedule (IMS) for the program. As a result, we assessed the explosives detection system (EDS) acquisition schedule against each of nine best practices. Specifically, we assessed TSA’s initial schedule, which was provided to GAO in July 2010, and met with TSA officials to discuss our assessment and provide officials with suggestions for corrective action. TSA later revised the schedule and provided GAO with an updated version in October 2010. We completed a separate assessment of TSA’s revised schedule. The following table presents our two assessments. In addition to the contact named above, Glenn Davis, Assistant Director, and Joseph E. Dewechter, Analyst-in-Charge, managed this assignment. Scott Behen, Samantha Carter, and Orlando Copeland made major contributions to the planning and all other aspects of the work. David Alexander and Richard Hung assisted with design, methodology, and data analysis. Jason Lee and Karen Richey assisted with acquisition cost and schedule analysis. John Hutton and Nathan Tranquilli assisted with acquisition and contracting issues. Katherine Davis provided assistance in report preparation. Thomas Lombardi and Tracey King provided legal support.
Explosives represent a continuing threat to aviation security. The Transportation Security Administration (TSA), within the Department of Homeland Security (DHS), seeks to ensure through the Electronic Baggage Screening Program (EBSP) that checked-baggage-screening technology is capable of detecting explosives. Generally, the explosives detection system (EDS) is used in conjunction with explosives trace detection (ETD) machines to identify and resolve threats in checked baggage. As requested, GAO assessed the extent to which: (1) TSA revised explosives detection requirements and deployed technology to meet those requirements, and (2) TSA's approach to the current EDS acquisition meets best practices for schedules and cost estimates and includes plans for potential upgrades of deployed EDSs. GAO analyzed EDS requirements, compared the EDS acquisition schedule against GAO best practices, and interviewed DHS officials. This is a public version of a sensitive report that GAO issued in May 2011. TSA revised EDS explosives detection requirements in January 2010 to better address current threats and plans to implement these requirements in a phased approach. The first phase, which includes implementation of the previous 2005 requirements, is to take years to fully implement. However, deploying EDSs that meet 2010 requirements could prove difficult given that TSA did not begin deployment of EDSs meeting 2005 requirements until 4 years later in 2009. As of January 2011, some number of the EDSs in TSA's fleet are detecting explosives at the level established in 2005. The remaining EDSs in the fleet are configured to meet the 1998 requirements because TSA either has not activated the included software or has not installed the needed hardware and software to allow these EDSs to meet the 2005 requirements. 
Developing a plan to deploy and operate EDSs to meet the most recent requirements could help ensure EDSs are operating most effectively and should improve checked-baggage screening. However, TSA has faced challenges in procuring the first 260 EDSs to meet 2010 requirements. For example, due to the danger associated with some explosives, TSA and DHS encountered challenges in developing simulants and collecting data on the explosives' physical and chemical properties needed by vendors and agencies to develop detection software and test EDSs prior to the current acquisition. Also, TSA's decision to pursue EDS procurement during data collection complicated both efforts and resulted in a delay of over 7 months for the current acquisition. Completing data collection for each phase of the 2010 requirements prior to pursuing EDS procurements that meet those requirements could help TSA avoid additional schedule delays. TSA has established a schedule for the current EDS acquisition, but it does not fully comply with best practices, and TSA has not developed a plan to upgrade its EDS fleet. For example, the schedule is not reliable because it does not reflect all planned program activities and does not include a timeline to deploy EDSs or plans to procure EDSs to meet subsequent phases of the 2010 requirements. Developing a reliable schedule would help TSA better monitor and oversee the progress of the EDS acquisition. TSA officials stated that to meet the 2010 requirements, TSA will likely upgrade many of the current fleet of EDSs as well as the first 260 EDS machines to be purchased under the current acquisition. However, TSA has no plan in place outlining how it will approach these upgrades. Because TSA is implementing the 2010 requirements in a phased approach, the same EDS machines may need to be upgraded multiple times. TSA officials stated that they were confident the upgrades could be completed on deployed machines. 
However, without a plan, it will be difficult for TSA to provide reasonable assurance that the upgrades will be feasible or cost-effective. GAO recommends that TSA, among other things, develop a plan to ensure that new machines, as well as those machines currently deployed in airports, will be operated at the levels in established requirements, collect explosives data before initiating new procurements, and develop a reliable schedule for the EBSP. DHS concurred with all of GAO's recommendations and has initiated actions to implement them.
The Immigration and Nationality Act (INA), as amended, is the primary body of law governing immigration and visa operations. Among other things, the INA defines the powers given to the Secretaries of State and Homeland Security and the consular and immigration officers who serve under them, delineates categories of and qualifications for immigrant and nonimmigrant visas, and provides a broad framework of operations through which foreign citizens are allowed to enter and immigrate to the United States. USCIS is generally responsible for administering the citizenship and immigration services of the United States. Most foreign nationals living abroad who wish to immigrate to the United States must obtain a visa through the Department of State’s Bureau of Consular Affairs. U.S. citizens and lawful permanent residents, that is, petitioners, can request or petition USCIS to allow certain relatives to immigrate to the United States. As the first step in a two-step process, petitioners must file a family-based petition with USCIS. U.S. citizens and lawful permanent residents may file a Form I-130 for an alien relative, such as a wife or child, to immigrate to the United States. U.S. citizens (but not lawful permanent residents) may also petition to bring a noncitizen fiancé(e) to the United States by filing a Form I-129F with USCIS. The I-129F or I-130 petitions may also list “derivative beneficiaries,” such as the beneficiary’s unmarried child under 21 years old, who are eligible to immigrate with the primary beneficiary. The purpose of these petitions is to establish the petitioner’s relationship to the family member or fiancé(e) who wishes to immigrate to the United States. USCIS adjudicators are to review petitions and make determinations, in accordance with immigration law, on whether to approve or deny petitions. 
If a petition is approved, the second step in the process is to determine whether the noncitizen is admissible under immigration law to enter or remain in the United States. If the noncitizen is overseas, USCIS will send the approved petition to the State Department and a State Department consular officer will determine whether to issue a visa to the noncitizen. If the noncitizen is already in the United States in a nonimmigrant status, such as a visitor or student, when the petition is approved, a USCIS adjudicator will determine whether to allow the noncitizen to change, or “adjust,” his/her status to that of a lawful permanent resident. As part of its review, USCIS conducts background security checks on petitioners as well as noncitizen beneficiaries. These background checks were instituted for national security purposes such as identifying terrorists or terrorist threats, and for public safety reasons, such as identifying human rights violators or aggravated felons. According to USCIS officials, background security checks were conducted on all beneficiaries prior to September 11. As of January 2002, however, background security checks were required to be conducted on all petitioners, including U.S. citizens, as well as beneficiaries. USCIS adjudicators conduct background security checks using the Interagency Border Inspection System (IBIS), which is a multi-agency computer system of lookouts for terrorists, drug traffickers, and other such criminal types. IBIS contains numerous database files and interfaces with sources such as the FBI’s National Crime Information Center. The NCIC contains various data, including data on violent gangs and terrorists, immigration violators, and the National Sex Offender Registry. 
During a background check, if an IBIS query returns a “hit” where the name and date of birth information entered returns a positive response from one or more of the databases, and it appears the petitioner may have a criminal background, USCIS adjudicators are to forward this information to a Fraud Detection and National Security (FDNS) officer within USCIS. FDNS officers are to conduct further system searches for verification of the criminal hits. After researching and summarizing the criminal data on the petitioner, FDNS officers are to notate their findings in a resolution memorandum and send the memo back to the adjudicator responsible for the file. Assuming the petitioner cannot be referred to law enforcement or, in the case of a lawful permanent resident, deported, the adjudicator then continues the review and accepts or denies the petition based on whether there appears to be a valid relationship between the petitioner and the beneficiary. The FBI NCIC NSOR is a compilation of state registration information about sex offenders. The NSOR is statutorily mandated by the Pam Lychner Sexual Offender Tracking and Identification Act of 1996. The act directs the Attorney General to establish a national database at the FBI to track the whereabouts and movement of (1) each person who has been convicted of a “criminal offense against a victim who is a minor,” (2) each person who has been convicted of a “sexually violent offense,” and (3) each person who is determined to be a “sexually violent predator.” As implemented, the NSOR is a nationwide system that links states’ sex offender registration and notification programs. Each of the 50 states and the District of Columbia has created a sex offender registry based on the above three conviction categories and has established an interface with the FBI’s national system in order to transmit state registry information to the national registry. 
State registries contain information on sex offenders who are required to register and who reside, work, or attend school within the state. Based upon matching several data elements from USCIS’s database with data from the FBI’s National Sex Offender Registry, we found that at least 398 convicted sex offenders filed petitions for spouses, fiancé(e)s, children, and other relatives in fiscal year 2005. There may be additional convicted sex offenders who filed family-based petitions. For example, we could not determine with a high degree of confidence whether 53 petitioners who had the same name and date of birth as a person in the NSOR were the same individuals because there were no additional data items, such as Social Security number or address, that we could match. Therefore, we did not include these additional 53 petitioners in our count because it is possible that two people could have the same name and date of birth. The 398 sex offenders filed a total of 420 petitions. Figure 1 shows the type of beneficiaries for which petitions were filed by convicted sex offenders. USCIS data indicate that 371 (88 percent) of the beneficiaries were spouses and fiancé(e)s, 33 (8 percent) were unmarried children under 21 years old, and 16 (4 percent) were classified as other relatives. We do not know, however, what percentage of unmarried children under 21 years old were minors under the federal criminal code, which defines a minor as under 18 years old for purposes of certain child sexual offenses. In addition, certain relatives of the primary beneficiary, such as the unmarried children under 21 years old of the noncitizen spouse or fiancé(e), called derivative beneficiaries, may also immigrate with the beneficiary. However, USCIS’s data system only includes information on the primary beneficiary, not on any derivative beneficiaries. Therefore, our data underestimate the actual number of beneficiaries. 
For example, in addition to the 33 unmarried children under 21 years of age that were the primary beneficiaries of sex offenders, the State Department provided us data from its visa processing system indicating that there were at least an additional 27 children who were derivative beneficiaries associated with fiancé(e) petitions. Both USCIS and State Department data together total at least 60 unmarried children under 21 years of age. As shown in table 1, some of the sex offenders have been convicted of multiple sex crimes. The 398 sex offenders were convicted of at least 411 sex offenses, including sexual assault, rape, and child molestation, according to conviction data contained in the NSOR. At least 45 of the convictions were for sex offenses against children. It is possible that more than 45 convictions involved sex offenses against children, but this number could not be determined based on the conviction description in the registry. For example, the conviction description for 217 of the 411 convictions, or 53 percent, is “sex offense.” In addition, 14 petitioners were classified as sexual predators. Consistent with statute, the NSOR classifies “sexual predator” as an offender who has been convicted of a sexually violent offense and suffers from a mental abnormality or personality disorder that makes the person likely to engage in predatory sexually violent offenses again. These 14 sexual predators filed a total of 17 petitions. As of December 2005, 9 of the 17 petitions filed were approved and 8 were pending. Three of the 14 petitioners who were classified as sexual predators filed for unmarried children under 21 years old. Convicted sex offenders are not prohibited by the INA from petitioning to bring their spouses, fiancé(e)s, or children into the United States. According to USCIS and the Department of State, neither agency has general authority to deny a petition or visa based solely on the fact that a petitioner may be a convicted sex offender. 
In a December 2005 letter to GAO, USCIS’s Acting Chief Counsel stated that USCIS may not reject or deny family-based petitions on the grounds that the petitioner has a criminal background, lacks good moral character, or other possible negative factors. The review and ultimately the approval of such petitions centers on whether the facts stated in the petition are true and whether there exists the requisite relationship between the petitioner and the beneficiary. It is possible that a petitioner’s criminal history may be relevant to the question of whether the petitioner has established the requisite relationship. For example, a petitioner’s conviction for fraud, bigamy, or alien smuggling would be relevant in determining whether a bona fide relationship exists between the petitioner and a noncitizen spouse beneficiary. According to officials in the Department of State’s Bureau of Consular Affairs, the Department of State cannot deny a visa to a noncitizen based solely on the fact that the petitioner is a convicted sex offender or has other criminal convictions. The review and ultimately the approval of a visa centers on whether the noncitizen is admissible under immigration law to enter the United States and on whether there exists the prerequisite relationship between the petitioner and the beneficiary. Therefore, consular officers have no legal basis to deny a visa to a noncitizen based solely on the fact that the petitioner has a criminal sexual background. In cases where there is no basis for denying a petition or visa, both the State Department and USCIS may be faced with the issue of whether, and under what circumstances, they can disclose the petitioner’s criminal sexual history to a beneficiary consistent with any applicable privacy restrictions. 
According to both USCIS and Department of State officials, the compelling circumstances exception to the Privacy Act of 1974 provides authority to disclose a petitioner’s criminal sexual history to a noncitizen beneficiary on a case-by-case basis. For certain noncitizen beneficiaries, disclosure of the petitioner’s criminal background information is mandatory based on new authority granted to USCIS and the Department of State. The recently enacted International Marriage Broker Regulation Act (IMBRA) of 2005 requires disclosure of a U.S. citizen’s criminal background information, including sex crimes, to certain prospective immigrants, essentially noncitizen fiancé(e)s, but some spouses and children as well. USCIS must furnish this criminal background information to the Department of State for purposes of making IMBRA disclosures. On May 3, 2006, USCIS issued interim guidance to its adjudicators on making disclosures under the compelling circumstances exception to the Privacy Act and stated that USCIS would soon issue additional guidance with respect to IMBRA disclosures. The Department of State informed us that it is preparing to issue disclosure guidance to consular officers that will cover discretionary Privacy Act disclosures and that it is finalizing separate disclosure guidance with respect to the mandatory disclosures required under IMBRA, but this guidance cannot be issued until USCIS finalizes its IMBRA-related procedures. The Privacy Act of 1974 states that, “no agency shall disclose any record which is contained in a system of records by any means of communication to any person, or to another agency, except pursuant to a written request by, or with the prior written consent of, the individual to whom the record pertains.” While information from the covered systems is generally not to be disclosed, there are 12 exceptions. 
One of these exceptions authorizes an agency to make a disclosure “to a person pursuant to a showing of compelling circumstances affecting the health or safety of an individual if upon such disclosure notification is transmitted to the last known address of such individual.” Both USCIS and the Department of State have interpreted the compelling circumstances exception in the Privacy Act as authority to permit the disclosure of a petitioner’s criminal sexual history information. In a December 2005 letter to GAO, USCIS’s Acting Chief Counsel stated that if USCIS learns that a petitioner has a substantiated history of sexual assault or child molestation, then USCIS has the discretion in compelling circumstances to disclose that information to the beneficiary. On May 3, 2006, USCIS issued Privacy Act interim guidance advising adjudicators of when it may be appropriate to disclose a petitioner’s criminal history involving violence or sex offenses to potential visa beneficiaries under the compelling circumstances exception. Generally, disclosure is limited to those portions of the petitioner’s criminal history involving violence or sex offenses that are directly relevant to the “health and safety” of the potential beneficiary. As an example, the guidance provides that normally, “a conviction as a sexual predator should be considered a compelling circumstance affecting the health and safety of a child who would reside with the sexual predator.” The guidance further states that any concerns about safety that adjudicators have that are outside the scope of the guidance should be brought to the attention of their supervisor. According to Department of State officials, protecting the health or safety of a minor child would constitute compelling circumstances to disclose a petitioner’s criminal sex offender background, though the exception might also apply in cases that did not involve a minor child. 
In a letter to GAO, the Chief, Advisory Opinions Branch, of the Department of State’s Visa Office wrote, “the clear possibility of abuse that an immigrant child would face while living in the same household as a convicted sex offender provides a strong basis for applying the health and safety exception in these cases.” The Department of State asserts that its position is “consistent with overall U.S. policy balancing the need to inform the public of the potential threat to a community posed by a child sex offender with the privacy interests of the offender.” According to the Department, consular officials are to consult with the department’s visa policy and legal staff prior to disclosure of a criminal record or other negative factors. For certain noncitizen beneficiaries, disclosure of the petitioner’s criminal background information is now mandatory based on new authority granted to USCIS and the Department of State. The recently enacted International Marriage Broker Regulation Act of 2005 mandates disclosure of a U.S. citizen’s criminal background information, including sex crimes, to certain prospective immigrants known as K nonimmigrant visa applicants, essentially noncitizen fiancé(e)s but also some spouses and children (i.e., unmarried children under 21 years old who are derivatives of the primary beneficiary). Obtaining a K visa allows the fiancé(e), spouse, or child to enter the United States as a nonimmigrant and then apply for immigrant (i.e., lawful permanent resident) status while in this country. Under section 832 of IMBRA, USCIS must revise its I-129F petition to require petitioners to disclose criminal background information for numerous specified crimes, including domestic violence, sexual assault, child abuse and neglect, and incest. 
Any criminal background information USCIS possesses with respect to the petitioner must accompany any approved petition that is forwarded to the Department of State. IMBRA goes on to provide: “The Secretary of State, in turn, shall share any such criminal background information that is in government records or databases with the K nonimmigrant visa applicant who is the beneficiary of the petition. The visa applicant shall be informed that such criminal background information is based on available records and may not be complete.” To effectuate IMBRA’s mandatory disclosure requirement, the Department of State must mail the visa applicant a copy of the petition, including any criminal background information, as well as a government-developed domestic violence information pamphlet. Supplementing the disclosure by mail, IMBRA also requires Department of State consular officers to “provide for the disclosure of such criminal background information to the visa applicant at the consular interview in the primary language of the visa applicant.” IMBRA’s mandatory disclosure requirement only extends to fiancé(e)s, spouses, and their minor children (i.e., unmarried children under 21 years old), who are sponsored by U.S. citizens and enter the United States on a K nonimmigrant visa issued by the Department of State. IMBRA’s mandatory disclosure requirement does not cover (1) the spouses and minor children of lawful permanent residents, who do not have the option of entering the United States using a K visa; (2) the spouses and minor children of U.S. citizens who enter the United States on an immigrant visa; or (3) any noncitizen already in the United States applying directly to USCIS for immigrant status. According to the data we reviewed, most noncitizens entering under family-based petitions will not be covered by IMBRA’s mandatory disclosure requirement. In fiscal year 2005, about 80 percent of all family-based petitions filed were for other than K visas. 
USCIS issued interim guidance related to Privacy Act disclosures on May 3, 2006. The guidance advises adjudicators of when “compelling circumstances” may exist to disclose a petitioner’s criminal history involving violence or sex offenses: for example, protecting the health and safety of a child beneficiary who would reside with a sexual predator would normally constitute a compelling circumstance to make a disclosure. The guidance states that disclosure should be limited only to those portions of the petitioner’s criminal history that are directly relevant to the health and safety of the potential beneficiary. The guidance also contains Privacy Act procedures that adjudicators must follow when they make a disclosure, such as providing written notice of the disclosure to the petitioner and annotating the USCIS file to maintain a record of the disclosure and the justification for it. When the beneficiary is within USCIS’s jurisdiction, the guidance informs adjudicators to make disclosures during in-person interviews with the beneficiary. When the beneficiary is abroad, the guidance requires the adjudicator to provide to the State Department any adverse information that might affect the health or safety of a beneficiary to enable the State Department to make a decision regarding disclosure. USCIS’s interim guidance related to Privacy Act disclosure states that USCIS will issue separate guidance addressing the special procedures adjudicators must follow with respect to I-129F petitions. As previously discussed, to meet IMBRA requirements, USCIS must revise its I-129F petition to require petitioners to disclose criminal background information for numerous specified crimes, including sex offenses. Any criminal background information USCIS possesses with respect to the petitioner must accompany any approved petition that is forwarded to the State Department to enable the State Department to effectuate IMBRA’s mandatory disclosure requirement. 
IMBRA mandated that USCIS revise its I-129F petition by March 6, 2006 (60 days after IMBRA’s January 5 enactment). USCIS has not yet revised the petition. USCIS officials told us that they have been reviewing and consolidating suggested revisions to the I-129F, including IMBRA-related changes, and expect publication of the new Form I-129F in the Federal Register in mid-June 2006. Department of State officials told us that they had drafted guidance for consular officers that addresses the disclosure of a petitioner’s criminal sexual offender background under the compelling circumstances exception to the Privacy Act. According to the Department of State, the draft guidance was essentially ready for issuance when IMBRA, which mandates disclosure of a petitioner’s criminal history to certain beneficiaries, was enacted. As a result, it decided to revise its draft guidance to take the new statutory requirements into account. The officials said that they are preparing to issue disclosure guidance to consular officers that will cover discretionary Privacy Act disclosures not covered under IMBRA and are finalizing separate guidance with regard to the mandatory disclosures required under IMBRA. However, according to State Department officials, the IMBRA-related guidance cannot be issued until USCIS finalizes its IMBRA procedures, including revising the I-129F petition. Convicted sex offenders can sponsor noncitizen relatives, such as spouses, fiancé(e)s, and children, for entry into the United States. Not all beneficiaries may know that their petitioner has a criminal sex offender history that may put the beneficiary at risk. Recently enacted legislation has mitigated this risk for certain beneficiaries by requiring the State Department, in cooperation with USCIS, to disclose the petitioner’s criminal background information, including sex crimes. Both agencies said that they plan to issue guidance on the new mandatory disclosure requirement. 
For beneficiaries who are not covered by the mandatory disclosure requirement, both USCIS and the State Department interpret a Privacy Act exception as giving them discretion to disclose a petitioner’s criminal sexual history based on “compelling circumstances affecting the health or safety” of the beneficiary. Until recently, neither agency had issued guidance on this authority, but USCIS has now issued interim guidance to its adjudicators addressing compelling circumstance disclosures, and the State Department is preparing to issue its guidance to consular officers regarding discretionary Privacy Act disclosures not covered by IMBRA. On the basis of the agencies’ Privacy Act guidance, beneficiaries who are not statutorily protected by the mandatory disclosure requirement may nevertheless be informed of their petitioners’ criminal sexual history and the possible risk to their safety. We requested comments on a draft of this report from the Secretaries of Homeland Security and State and the Attorney General. None of these officials provided formal comments. However, representatives from each of these departments provided technical comments which we incorporated into this report, as appropriate. We are sending copies of this report to the Secretaries of Homeland Security and State, the Attorney General, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or Jonespl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. To identify the number of convicted sex offenders who filed family-based petitions, we conducted a computer match of U.S. 
Citizenship and Immigration Services (USCIS) family-based petitioner data with data on individuals contained in the Federal Bureau of Investigation’s (FBI) National Crime Information Center (NCIC) Convicted Sexual Offender Registry File, known as the National Sex Offender Registry. The USCIS petitioner data file contained records on 667,023 individuals who filed petitions for noncitizen relatives, such as a spouse or child, and 66,658 individuals who filed petitions for noncitizen fiancé(e)s in fiscal year 2005. The FBI’s NCIC National Sex Offender Registry contained data on 412,773 convicted sex offenders as of December 2005. The USCIS and FBI data files contained seven common data elements that we could attempt to match in order to determine which petitioners were registered sex offenders: (1) name, (2) date of birth, (3) Social Security number, (4) street address, (5) city, (6) state, and (7) ZIP code. The name and date of birth were always present in both datasets, but in some cases the other data elements in either the USCIS or FBI dataset were either missing or not entered correctly. To increase the possibility of a valid match, we first applied acceptable data-cleaning steps. For example, we eliminated certain extraneous characters from the names and addresses, such as dashes, periods, hyphens, and other nonessential characters that would otherwise impede our matching. In addition, we corrected for certain obvious typographical errors, such as typing a zero instead of the letter O. We conducted our match in two steps. In the first step, we matched cases on name and Social Security number, since the Social Security number is considered a unique identifier. For our purposes, if the name and Social Security number were the same in both cases, we considered it a match. In the second step, after eliminating those we matched based on name and Social Security number, we matched the remaining records on name and date of birth. 
It is possible for two people to have the same name and date of birth. Therefore, to be deemed a match for our purposes, the name, date of birth, and several additional data elements needed to match to provide a high level of assurance that the petitioner and the registered sex offender were the same person. For example, if the name and date of birth, together with the street address, city, and ZIP code, were the same, we considered it a match. We also analyzed the USCIS data set to determine the number of petitioners that may have filed more than one petition to arrive at the number of unique sex offenders. To determine the reliability of the USCIS data, we observed how petitioner data are entered into the USCIS data system, interviewed relevant USCIS officials and staff, reviewed pertinent documents, and performed electronic testing for obvious errors in accuracy and completeness. To determine the reliability of the FBI’s Convicted Sexual Offender Registry File, we interviewed FBI officials and system programmers knowledgeable about the data, reviewed pertinent information regarding the FBI’s sex offender registry, and performed electronic testing for obvious errors in accuracy and completeness. We determined that the data were sufficiently reliable for the purposes of this report. We conducted our work from August 2005 through June 2006 in accordance with generally accepted government auditing standards. In addition to the above, Michael Dino, Assistant Director, Carla Brown, Christine Davis, Katherine Davis, Darryl Dutton, Lemuel Jackson, and James Ungvarsky were key contributors to this report.
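The two-step matching procedure described above can be sketched in code. This is an illustrative reconstruction, not the actual program GAO used: the record layout, field names, and normalization rules shown here are assumptions based solely on the methodology described in the text.

```python
import re

def normalize(value):
    """Upper-case a field and strip dashes, periods, hyphens, and extra
    whitespace, mirroring the data-cleaning steps described above. (The
    actual cleaning also corrected obvious typos, e.g., a zero typed for
    the letter O, which is omitted here for brevity.)"""
    if not value:
        return ""
    v = re.sub(r"[-.,']", "", value.upper())
    return re.sub(r"\s+", " ", v).strip()

def match_records(petitioners, offenders):
    """Step 1: match on name + Social Security number, treated as a
    unique identifier. Step 2: for the records left over, match on name
    + date of birth, and require the address fields (street, city, ZIP)
    to agree as well, since two people can share a name and birth date."""
    matches, unmatched = [], []
    # Step 1: name and SSN.
    by_name_ssn = {(normalize(o["name"]), o["ssn"]): o
                   for o in offenders if o.get("ssn")}
    for p in petitioners:
        key = (normalize(p["name"]), p.get("ssn"))
        if p.get("ssn") and key in by_name_ssn:
            matches.append((p, by_name_ssn[key]))
        else:
            unmatched.append(p)
    # Step 2: name and date of birth, corroborated by address fields.
    by_name_dob = {}
    for o in offenders:
        by_name_dob.setdefault((normalize(o["name"]), o["dob"]), []).append(o)
    for p in unmatched:
        for o in by_name_dob.get((normalize(p["name"]), p["dob"]), []):
            if all(normalize(p.get(f)) == normalize(o.get(f)) != ""
                   for f in ("street", "city", "zip")):
                matches.append((p, o))
                break
    return matches
```

Because one person could file more than one petition, a full analysis would then deduplicate the matched petitioners, as the report notes was done to arrive at the count of unique sex offenders.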
|
In fiscal year 2005, U.S. citizens and lawful permanent residents filed about 730,000 petitions with the U.S. Citizenship and Immigration Services (USCIS) to sponsor noncitizen family members, including spouses, fiancé(e)s, and children, to immigrate to the United States. Those doing the sponsoring are called petitioners; those benefiting from the sponsoring are called beneficiaries. If USCIS approves the petition, overseas beneficiaries must also file a visa application with the Department of State to enter the United States. In January 2002, USCIS started to conduct background security checks on all petitioners in addition to the beneficiaries. These background checks revealed that some of the petitioners had convictions for criminal sex offenses; further, some of those criminal sex offenders were filing family-based petitions for children (those under the age of 21). This report addresses the number of convicted sex offenders who filed family-based petitions in fiscal year 2005 based upon a computer match of USCIS data with individuals in the Federal Bureau of Investigation's National Sex Offender Registry and discusses USCIS's and the Department of State's framework for disclosing a sponsor's criminal sexual background to the beneficiary. DHS, the Department of State, and the Department of Justice reviewed a draft of this report. Only technical comments were provided and have been incorporated into this report. At least 398 convicted sex offenders filed a total of 420 petitions in fiscal year 2005 for spouses, fiancé(e)s, children, and other relatives. Immigration law does not prohibit convicted sex offenders from petitioning to bring their spouses, fiancé(e)s, or children into the United States, and USCIS generally cannot deny a petition based solely on the fact that the petitioner is a convicted sex offender. 
The sex offenders were convicted of at least 411 sex-related crimes, including sexual assault and rape, according to data in the Federal Bureau of Investigation's National Sex Offender Registry. At least 45 convictions involved crimes against children. While most beneficiaries were spouses and fiancé(e)s, criminal sex offenders petitioned for at least 60 children. According to USCIS and Department of State officials, an exception to the Privacy Act of 1974 gives them authority to disclose a petitioner's criminal sex offender history if there are "compelling circumstances affecting the health or safety" of the beneficiary. For certain noncitizen beneficiaries, disclosure of the petitioner's criminal background information is now mandatory based on new authority granted to USCIS and the Department of State. The International Marriage Broker Regulation Act of 2005 (IMBRA) requires disclosure of a U.S. citizen's criminal background information, including sex crimes, to certain prospective immigrants, essentially noncitizen fiancé(e)s, but some spouses and minor children as well. Disclosure is not mandatory for beneficiaries not covered by IMBRA, though these beneficiaries may receive information about a petitioner's criminal background on a discretionary basis under the Privacy Act exception. GAO estimates that IMBRA's mandatory disclosure requirement will cover about 20 percent of family-based beneficiaries based on fiscal year 2005 data. On May 3, 2006, USCIS issued interim guidance to its adjudicators on when it may be appropriate to disclose information related to a petitioner's criminal history under the "compelling circumstances" exception to the Privacy Act. USCIS plans to issue separate guidance related to disclosure requirements under IMBRA. Department of State officials said that they are preparing to issue Privacy Act disclosure guidance and are finalizing separate IMBRA disclosure guidance.
|
Overall, the regulations that OPM developed to administer a performance-based pay system for senior executives serve as a substantive and positive step for agencies in holding senior executives accountable for their performance and contributions to organizational success. The new senior executive pay system raises the cap on base pay and total compensation. For 2006, the caps are $152,000 for base pay (Level III of the Executive Schedule) with a senior executive’s total compensation not to exceed $183,500 (Level I of the Executive Schedule). If an agency’s senior executive performance management system is certified by OPM and OMB concurs, the caps are increased to $165,200 for base pay (Level II of the Executive Schedule) and $212,100 for total compensation (the total annual compensation payable to the Vice President). To qualify for these flexibilities, agencies’ performance management systems need to meet nine specified certification criteria, including demonstrating that the systems align individual performance expectations with the mission and goals of the organization and that their appraisal systems as designed and applied make meaningful distinctions in performance. To receive a full 2-calendar-year certification, an agency must provide documentation that its senior executive performance management system meets all nine of the criteria. Otherwise, agencies can meet four of nine criteria and demonstrate that their system in design meets the remaining certification criteria to receive 1-year provisional certification and use the higher pay rates. Agencies with 1-year provisional certification must reapply annually, and agencies with full certification must reapply every 2 years. Those agencies with more than one performance management system for their senior executive employees are to certify each system separately. 
The certification criteria are framed as broad principles designed to serve as guidelines to position agencies to use their performance management system(s) strategically to support the development of a strong performance culture and the attainment of the agency’s mission, goals, and objectives. The certification criteria are generally consistent with our body of work identifying key practices for effective performance management. Specifically, we identified key practices, including aligning individual performance expectations with organizational goals, linking pay to individual performance, and making meaningful distinctions in performance, that collectively create a line of sight between an individual’s performance and an organization’s success. These practices are reflected in the final certification criteria. Key aspects of the OPM certification criteria, as outlined in the regulations, are as follows: (1) Alignment: Individual performance expectations must be linked to or derived from the agency’s mission, strategic goals, program/policy objectives, and/or annual performance plan. (2) Consultation: Individual performance expectations are developed with senior employee involvement and must be communicated at the beginning of the appraisal cycle. (3) Results: Individual expectations describe performance that is measurable, demonstrable, or observable, focusing on organizational outputs and outcomes, policy/program objectives, milestones, etc. (4) Balance: Individual performance expectations must include measures of results, employee and customer/stakeholder satisfaction, and/or competencies or behaviors that contribute to outstanding performance. (5) Assessments and Guidelines: The agency head or a designee provides assessments of the performance of the agency overall, as well as each of its major program and functional areas. 
(6) Oversight: The agency head or designee must certify that (1) the appraisal process makes meaningful distinctions based on relative performance; (2) results take into account, as appropriate, the agency’s performance; and (3) pay adjustments and awards recognize individual/organizational performance. (7) Accountability: Senior employee ratings (as well as subordinate employees’ performance expectations and ratings for those with supervisor responsibilities) appropriately reflect employees’ performance expectations, relevant program performance measures, and other relevant factors. (8) Performance Differentiation: Among other provisions, the agency must provide for at least one rating level above Fully Successful (must include an Outstanding level), and in the application of those ratings, make meaningful distinctions among executives based on their relative performance. (9) Pay Differentiation: The agency should be able to demonstrate that the largest pay adjustments and/or highest pay levels (base and performance awards) are provided to its highest performers, and that, overall, the distribution of pay rates in the SES rate range and pay adjustments reflects meaningful distinctions among executives based on their relative performance. In commenting on OPM’s draft regulations, we included suggestions intended to help agencies broaden the criteria to reinforce cultures that are results oriented, customer focused, and collaborative in nature. For example, we suggested that OPM require agencies to have their senior executives identify specific programmatic crosscutting, external, and partnership-oriented goals or objectives in their individual performance plans to help foster the necessary collaboration, interaction, and teamwork to achieve results. 
Further, based on our previous testimony that performance management processes need to assure reasonable transparency, we noted the new performance management system should have adequate safeguards to ensure fairness and guard against abuse. Specifically, we suggested that OPM require agencies to build in safeguards as part of their senior executive performance management systems when linking pay to performance. For example, communicating the overall results of the performance management decisions to the senior executives, while protecting individual confidentiality, could help enhance the transparency of the performance management process. We also recognized that scalability needs to be considered, and that small agencies might face difficulties communicating overall results of the performance management process while protecting the confidentiality of the fewer numbers of senior executives. In response, OPM changed some aspects of its criteria by incorporating these suggestions into the interim final regulations. Agencies can submit their applications to OPM for certification anytime during the year. If fully certified, the certification is good for the remainder of the calendar year in which the agency applied, as well as all of the following calendar year. If provisionally certified, an agency’s certification is only good for the calendar year in which it applied. For example, if an agency is provisionally certified in October 2005, its certification would expire in December 2005. To ensure the agency’s submission is complete, the agency’s OPM contact—the Human Capital Officer (HCO)—first verifies that the application contains the required materials and documents. If complete, the HCO sends copies to the two OPM divisions responsible for reviewing the application, the Human Capital Leadership and Merit System Accountability (HCLMSA) division and the Strategic Human Resources Policy (SHRP) division, and an additional copy to OMB. 
An agency’s submission is reviewed independently by representatives within HCLMSA and SHRP in an attempt to bring different organizational perspectives to the review. A submission is reviewed against the nine certification criteria, but each review team has its own method for analyzing the application. After an initial examination, the reviewers from HCLMSA and SHRP hold an informal meeting to discuss the submission. The reviewers meet again in a formal panel after a more thorough review, and this time they are joined by the HCO. This panel decides whether it has enough information to reach a certification decision about the agency. If the panel concludes there is not enough information to reach a decision, the HCO will request that the agency provide any missing or additional supporting information. If the panel decides there is sufficient information to reach a decision, it will either certify or reject the application. When an application is rejected, the HCO works with the agency to help modify its appraisal system so that it meets the criteria. If the application is approved, the HCO contacts OMB for concurrence. OMB uses the same nine criteria to evaluate agency applications, but primarily focuses on measures of agency performance. If OMB concurrence is not achieved, the HCO works with the agency to address OMB’s concerns until resolution is reached. Once OMB concurs, the Director of OPM certifies the agency’s appraisal system and the HCO provides additional comments to the agency on its system and identifies any improvement needs. For example, these comments may direct the agency to focus more on making meaningful distinctions in performance. In our ongoing work on OPM’s capacity to lead and implement human capital reform, we asked agency chief human capital officers (CHCO) and human resource (HR) directors to describe their experiences with OPM’s administration of the senior executive pay-for-performance certification process. 
As the Comptroller General testified before this Subcommittee in June 2006, we heard a number of concerns from agencies regarding OPM’s ability to communicate expectations, guidance, and deadlines to agencies in a clear and consistent manner. When the senior executive certification process began in 2004, OPM provided agencies with limited guidance for implementing the new regulations. OPM’s initial guidance consisted of a list of documents required for provisional and full certification and a sample cover letter to accompany each application. The lack of more specific guidance created confusion as agencies attempted to interpret the broadly defined regulatory criteria and adjust to the requirements for certification. Officials at a majority of the CHCO Council agencies told us they did not have enough guidance to properly prepare for certification. As a result, agencies did not fully understand what was required in the regulations to receive certification. For example, one official noted that while OPM tries to point agencies in the right direction, it will not give agencies discrete requirements. This leads to uncertainty about what agencies must and should demonstrate to OPM. Some CHCOs and HR directors also told us that, in some cases, OPM changed expectations and requirements midstream with little notice or explanation. However, OPM explains that it intentionally allowed some ambiguity in the regulations for the new senior executive appraisal system, in an attempt to provide agencies with management flexibilities. A senior OPM official said OPM did not provide agencies with “best practices” examples because OPM did not want agencies to think there was only one “right” way to get certified. Agencies also indicated that because OPM did not issue guidance for calendar year 2006 submissions until January 5, 2006, some were unable to deliver their submissions to OPM before the beginning of the calendar year. 
Further, OPM clarified this guidance in a January 30, 2006, memorandum to agencies, telling agencies that senior executive performance appraisal systems would not be certified for calendar year 2006 if the performance plans did not hold senior executives accountable for achieving measurable business outcomes. As a result, agencies had to revise their submissions, where necessary, to meet OPM’s additional requirements. Some agencies indicated that OPM’s late issuance of guidance also created an uneven playing field among agencies, as those that chose to wait until OPM issued guidance before applying for certification were unable to give their senior executives higher pay, while those who did not wait got certified sooner. OPM officials we spoke with about this agreed that they need to be able to provide clear and consistent guidance to agencies and said they are working to improve this. Further, they said their evaluation of agencies’ submissions is evolving as their understanding of the senior executive certification criteria is increasing. The regulations include several positive internal checks and balances that should help maintain the rigorous application of the new senior executive pay system. As I noted earlier, agencies granted full certification are to have their systems renewed for an additional 2 calendar years and agencies granted provisional certification are to reapply for certification after 1 calendar year in order to continue setting the rate of basic pay for senior executives at the higher level. In addition, OPM can suspend certification at any time during the certification period if it determines, with OMB concurrence, that the agency’s system is not in compliance with the certification criteria. OPM’s regulations also require review of each senior executive’s rating by a performance review board appointed by the agency head. 
As noted above, the regulations also require oversight of the performance appraisal system by the agency head, who must certify that the system makes meaningful distinctions in relative performance. According to OPM data, 26 performance management systems at 24 agencies were certified during calendar year 2006. Of these 26, only the Department of Labor’s system received full certification. As of September 19, 2006, the remaining 25 systems received only provisional certification. These findings are not surprising. In our April 2005 testimony before this Subcommittee, we stated that a number of agencies would be challenged in the short term to provide the necessary performance data on their senior executives in order to receive full certification or to maintain their certification (to qualify, agencies must provide 2 years of performance rating and bonus data showing that meaningful distinctions in senior executive performance were made). Other factors might also be at work. For example, a number of agencies have told us that the certification process is burdensome. One agency said that OPM’s requirements for the certification of a submission are time intensive, laborious, and can disrupt an agency’s recruitment and retention efforts. As we also noted at the April 2005 hearing, OPM will need to carefully monitor the implementation of agencies’ performance management systems, especially those that have provisional certification. This is because, as I have stated earlier, agencies with provisional certification can still receive the flexibilities of the new pay system, even though they do not meet all of OPM’s certification requirements. In other words, agencies can receive the benefits of the new pay-for-performance system without meeting all of its requirements and safeguards. 
We believe that, going forward, it will be important for OPM to continue to monitor the certification process, determine whether any obstacles are impeding agencies from receiving full certification, and take appropriate measures to address them. These actions will help ensure that agencies continue to make substantive progress toward modernized performance management systems, and that provisional certifications do not become the norm. Once agencies have provisional or full certification, OPM monitors senior executive performance appraisal systems by measuring the distributions of agencies’ performance ratings and pay. This information helps OPM determine if agencies are making meaningful distinctions among the performance of their senior executives. Such distinctions are important because effective performance management requires the organization’s leadership to make meaningful distinctions between acceptable and outstanding performance and appropriately reward those who perform at the highest level. In its Report on Senior Executive Pay for Performance for Fiscal Year 2005, OPM stated that the data indicate that federal agencies are taking seriously the requirement to develop rigorous appraisal systems and to make meaningful distinctions in performance ratings and pay. All reporting agencies have moved away from pass/fail appraisal systems and now have at least one performance level above “fully successful.” In 2005, 43 percent of career SES governmentwide were rated at the highest performance level, compared to 75 percent in 2003 prior to the implementation of the SES pay-for-performance system. Further, OPM reported for fiscal year 2005 that the percentage of SES rated at the highest performance level declined 16 percent from the prior year. OPM also reported that the largest increases in salary went to SES rated at the highest performance level. 
Although SES pay and performance award amounts vary by agency based on factors such as compensation strategy, funding, and agency performance levels, OPM believes these general trends suggest a further refinement may be occurring in the process of distinguishing outstanding performers. As we have said in our prior reports and testimonies, senior executives need to lead the way in transforming their agencies’ cultures to be more results oriented, customer focused, and collaborative in nature. Credible performance management systems, specifically those that (1) align individual, team, and unit performance to organizational results; (2) contain built-in safeguards; and (3) are effectively implemented, can help manage and direct this process. The pay-for-performance system for the government’s senior executives that I have discussed today is an important milestone on the march toward modern compensation systems that are more market based and performance oriented. Although OPM and agencies have encountered various challenges in implementing the system, such challenges are not surprising given the cultural shift that the new system represents. Moreover, just 2 years have passed since OPM issued its regulations for certifying agencies’ pay-for-performance systems, and some growing pains are to be expected given agencies’ lack of experience with performance management systems that meet OPM’s requirements. Moving forward, what will be important is how OPM works with agencies to provide the tools and resources they need to design and implement performance management systems that meet the certification criteria in as streamlined a fashion as possible. The lessons learned in implementing the senior executive pay-for-performance system will be critical to modernizing the performance management systems under which other federal employees are compensated. 
In particular, establishing an explicit line of sight between individual, team, and unit performance and organizational success, as well as highlighting opportunities to improve guidance, communications, transparency, and safeguards, will serve the government well moving forward. We stand ready to assist OPM and Congress in exploring and implementing these critical human capital reforms. Chairman Voinovich, Senator Akaka, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. For further information regarding this statement, please contact Brenda S. Farrell, Acting Director, Strategic Issues, at (202) 512-6806 or farrellb@gao.gov. Individuals making key contributions to this statement include Carole J. Cimitile, William Colvin, Laura Miller Craig, William Doherty, Robert Goldenkoff, Janice Latimer, Trina Lewis, Jeffrey McDermott, and Michael Volpe. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The government's senior executives need to lead the way in transforming their agencies' cultures. Credible performance management systems--those that align individual, team, and unit performance with organizational results--can help manage and direct this process. In past work, GAO found that the performance management systems for senior executives fell short in this regard. In November 2003, recognizing that reforms were needed, Congress authorized a new performance-based pay system that ended the practice of giving annual pay adjustments to senior executives. Instead, agencies are to consider such factors as individual results and contributions to agency performance. If the Office of Personnel Management (OPM) certifies an agency's new performance system and the Office of Management and Budget (OMB) concurs, the agency has the flexibility to raise the pay of its highest-performing senior executives above certain pay caps. This testimony addresses (1) the performance management system's regulatory structure, (2) OPM's certification process and agencies' views of it, and (3) OPM's role in monitoring the system, and the number of agencies that have been certified to date. This statement is based on GAO's issued work, which included interviews with senior OPM officials, agency Chief Human Capital Officers and Human Resource officers, and reviews of agency documents. Overall, the regulations that OPM and OMB developed to administer a performance-based pay system for executives serve as an important step for agencies in creating an alignment or "line of sight" between executives' performance and organizational results. 
To qualify for the pay flexibilities included in the statute, OPM must certify and OMB must concur that an agency's performance management system meets nine certification criteria, including demonstrating that the system aligns individual performance expectations with the mission and goals of the organization and that, as designed and applied, it makes meaningful distinctions in performance. The certification criteria are generally consistent with key practices GAO has identified for effective performance management, which collectively create a line of sight between an individual's performance and an organization's success. To receive a full 2-calendar-year certification, an agency must document that its senior executive performance management system meets all nine of the criteria. Agencies can meet four of nine criteria and demonstrate that their system in design meets the remaining certification criteria to receive 1-year provisional certification and use the higher pay rates. Two divisions in OPM, as well as OMB, independently review agencies' certification submissions. A number of agencies GAO contacted expressed concern over OPM's ability to communicate expectations, guidance, and deadlines to agencies in a clear and consistent manner. OPM officials agreed that agencies need better guidance and were working on improvements. In monitoring agencies' performance management systems, OPM can suspend an agency's certification at any time with OMB concurrence if an agency is not complying with the certification criteria. According to OPM data, performance management systems at 24 agencies were certified during calendar year 2006. Of these, only the Department of Labor's system received full certification; the remaining systems received only provisional certification. These findings are not surprising. 
As GAO has noted in its past work, agencies could find it initially difficult to provide the necessary performance data to receive full certification. Going forward, it will be important for OPM to continue to monitor the certification process to help ensure that provisional certifications do not become the norm, and agencies develop performance management systems for their senior executives that meet all of OPM's requirements. The new performance management system for the government's senior executives will help agencies align individual, team, and unit performance with organizational results. Although there have been some implementation challenges, what will be important is how OPM works with agencies to meet the certification criteria. Moreover, the lessons learned in implementing the senior executive performance management system can be applied to modernizing the performance management systems of employees at other levels.
Since the early 1990s, DOD has increasingly relied on contractors to meet many of its logistical and operational support needs during combat operations, peacekeeping missions, and humanitarian assistance missions, ranging from Operation Desert Shield/Desert Storm and operations in the Balkans (e.g., Bosnia and Kosovo) to Afghanistan and Iraq. Factors that have contributed to this increase include reductions in the size of the military, an increase in the number of operations and missions undertaken, and DOD’s use of increasingly sophisticated weapons systems. Depending on the service being provided by contractors, contractor employees may be U.S. citizens, host country nationals, or third country nationals. Contracts supporting weapons systems, for example, often restrict employment to U.S. citizens, while contracts providing base operations support frequently employ host country or third country nationals. Contracts supporting deployed forces typically fall into three broad categories—theater support, external support, and systems support. Theater support contracts are normally awarded by contracting agencies associated with the regional combatant command, for example, the U.S. Central Command or service component commands, such as the U.S. Army Central Command, or by contracting offices at deployed locations such as in Iraq. Contracts can be for recurring services—such as equipment rental or repair, minor construction, security, and intelligence services—or for the one-time delivery of goods and services at the deployed location. External support contracts are awarded by commands external to the combatant command or component commands, such as the Defense Logistics Agency and the U.S. Army Corps of Engineers. Under external support contracts, contractors are generally expected to provide services at the deployed location. LOGCAP is an example of an external support contract. 
Finally, systems support contracts provide logistics support to maintain and operate weapons and other systems. These types of contracts are most often awarded by the commands responsible for building and buying the weapons or other systems. The individual services and a wide array of DOD and non-DOD agencies can award contracts to support deployed forces. Within a service or agency, numerous contracting officers, with varying degrees of knowledge about how contractors and the military operate in deployed locations, can award contracts that support deployed forces. According to DOD estimates, in 2005 several hundred contractor firms provided U.S. forces with a wide range of services at deployed locations. Figure 1 illustrates the broad array of contractor services being provided in Iraq and the DOD agency that awarded each contract. The customer (e.g., a military unit) for these contractor-provided services is responsible for identifying and validating requirements to be addressed by the contractor as well as evaluating the contractor’s performance and ensuring that contractor-provided services are used in an economical and efficient manner. In addition, DOD has established specific policies on how contracts, including those that support deployed forces, should be administered and managed. Oversight of contracts ultimately rests with the contracting officer who has the responsibility for ensuring that contractors meet the requirements set forth in the contract. However, most contracting officers are not located at the deployed location. As a result, contracting officers appoint contract oversight personnel who represent the contracting officer at the deployed location and are responsible for monitoring contractor performance. How contracts and contractors are monitored at a deployed location is largely a function of the size and scope of the contract. 
Contracting officers for large-scale and high-value contracts such as LOGCAP have opted to have personnel from the Defense Contract Management Agency monitor a contractor’s performance and management systems to ensure that the cost, product performance, and delivery schedules comply with the terms and conditions of the contract. Defense Contract Management Agency officials delegate daily oversight responsibilities to individuals drawn from units receiving support from these contractors to act as contracting officer’s representatives for specific services being provided. For smaller contracts, contracting officers usually directly appoint contracting officer’s representatives or contracting officer’s technical representatives to monitor contractor performance at the deployed location. These individuals are typically drawn from units receiving contractor-provided services, are not normally contracting specialists, and serve as contract monitors as an additional duty. They cannot direct the contractor by making commitments or changes that affect price, quality, quantity, delivery, or other terms and conditions of the contract. Instead, they act as the eyes and ears of the contracting officer and serve as the liaison between the contractor and the contracting officer. Table 1 provides additional information on the contract management roles and responsibilities of key DOD personnel. A number of long-standing problems continue to hinder DOD’s management and oversight of contractors at deployed locations. Although DOD has issued departmentwide guidance on the use of contractors to support deployed forces and some DOD components have taken some actions to improve management and oversight of contractors, there is no DOD-wide effort in place to resolve these long-standing problems. 
These problems include a lack of visibility over the totality of contractor support at deployed locations; a lack of adequate contract oversight personnel; the failure to collect and share institutional knowledge on the use of contractors at deployed locations; and limited or no training of military personnel on the use of contractors as part of their pre-deployment training or professional military education. In June 2003, we recommended that DOD take steps to improve its guidance on the use of contractors to support deployed U.S. forces. Our report noted the lack of standardized deployment language in contracts that support or may support deployed U.S. forces. Since then, in June 2005, DOD amended its acquisition regulations, the Defense Federal Acquisition Regulation Supplement, by providing DOD-wide policy and a contract clause to address situations that may require contractors to accompany U.S. forces deployed outside the United States. Our 2003 report also noted a lack of DOD-wide guidance regarding DOD’s use of and responsibilities to contractors supporting deployed forces. Since then, DOD has taken steps to improve its guidance by issuing the first DOD-wide instruction on contractor support to deployed forces. Specifically, in October 2005, DOD issued DOD Instruction 3020.41, entitled Contractor Personnel Authorized to Accompany the U.S. Armed Forces, which states, among other things, that it is DOD policy to coordinate any proposed contractor logistic support arrangements that may affect Combatant Commanders’ operational plans and operations orders with the affected geographic Combatant Commands; to ensure contracts clearly and accurately specify the terms and conditions under which the contractor is to perform and describe the specific support relationship between the contractor and DOD; and to maintain by-name accountability of contractors deploying with the force, as well as contract capability information, in a joint database. 
DOD Instruction 3020.41 provides guidance on a wide range of contractor support issues. For example, the instruction provides guidance on when contractors can be used to provide security for DOD assets, when medical support can be provided to contractors, and commanders’ responsibilities for providing force protection and security to contractors. In addition, the instruction references a number of existing policies and guidance that may affect DOD’s responsibilities to contractors supporting U.S. forces at a deployed location. However, the instruction does not address a number of problems we have raised in previous reports. For example, although the instruction addresses the need for visibility over contractors, it does not address the need to provide adequate contract oversight personnel, to collect and share institutional knowledge on the use of contractors at deployed locations, or to provide pre-deployment training on the use of contractor support. While issuance of DOD Instruction 3020.41 represents a noteworthy improvement to DOD’s guidance on the use of contractor support to deployed forces, we found little evidence that DOD components are implementing the guidance. Moreover, Congress has concerns over implementation of the instruction as evidenced by a provision in the Conference Report accompanying the National Defense Authorization Act for Fiscal Year 2007 requiring the Secretary of Defense to submit to Congress a report on the department’s efforts to implement the instruction. DOD Instruction 3020.41 assigns responsibility for monitoring and managing the implementation of the instruction to the Deputy Under Secretary of Defense for Logistics and Materiel Readiness (within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics). However, the Deputy Under Secretary of Defense for Logistics and Materiel Readiness is responsible for several policy areas, including supply chain management and transportation policy. 
A number of assistant deputy under secretaries serve as functional experts responsible for these areas. For example, the Assistant Deputy Under Secretary of Defense (Transportation Policy) serves as the principal advisor for establishing policies and providing guidance to DOD components for efficient and effective use of DOD and commercial transportation resources. However, no similar individual is responsible primarily for issues regarding contractor support to deployed forces, including implementation of the instruction. According to senior officials within the Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness, given the multiple issues the office is responsible for, addressing contractor support to deployed forces issues is a lower priority. Consequently, at the time of our review we found that few measures had been taken by the Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness to ensure that DOD components were complying with DOD Instruction 3020.41. For example, a senior official with the Office of the Under Secretary of Defense for Intelligence told us that the office was not aware of its responsibility under the instruction to develop and implement, as required, procedures for counterintelligence and security screenings of contractors, until our inquiry regarding their compliance with that requirement. Similarly, a senior Joint Staff official involved in the issuance of DOD Instruction 3020.41 expressed concerns that only some of the senior officials who needed to know about the instruction had been made aware that it was issued. Instead, we found that working groups of subject matter experts within the Joint Staff and the services have begun to address the instruction’s requirements. 
For example, in May 2006 a working group began to draft a new joint publication that provides guidance on meeting the requirements of DOD Instruction 3020.41, as well as addresses other contractor support issues. As another example, beginning in April 2006 the Joint Staff Directorate of Logistics organized a joint contingency contract management working group consisting of representatives from each of the military services, the Joint Staff, and various DOD components that meets periodically to discuss issues related to implementing the instruction’s requirement to maintain by-name accountability of contractor personnel supporting deployed forces. However, joint contingency contract management working group officials told us they have no formal charter designating their responsibilities and that they therefore lack the authority to direct DOD components to implement the instruction’s requirements. Working group officials told us they are limited in how much they can accomplish without more direct involvement by senior officials within the Joint Staff and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. For example, they told us that they will likely need someone at the general officer level to act as an advocate for their ongoing efforts to implement the instruction’s requirements and address other contractor support issues. Moreover, a number of senior officials, including a general officer responsible for logistics for Multi-National Force-Iraq and a senior official from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, told us that a focused effort within the Office of the Secretary of Defense is needed to coordinate efforts to improve DOD’s management and oversight of contractors supporting deployed forces. 
We have previously reported on the benefits of establishing a single point of focus at a sufficiently senior level to coordinate and integrate various DOD efforts to address concerns with antiterrorism and the transformation of military capabilities. For example, DOD recognized the need for a single DOD entity to implement and improve the department’s antiterrorism guidance. In 1996, following the Khobar Towers bombing, the Downing task force investigated the incident and made recommendations on how to prevent or minimize the damage of future attacks. One of the central conclusions of the Downing task force was that DOD needed a stronger centralized approach to antiterrorism. To implement this approach, the task force said, a single DOD entity should be designated as responsible for antiterrorism. Further, this entity, among other things, should develop and issue physical security standards, inspect compliance with these standards, manage resources on both a routine and emergency basis, and assist field commanders with antiterrorism matters. The task force found in its review that the lack of a single DOD entity responsible for antiterrorism had had an adverse impact on the posture of forces in the field. In response to the task force’s recommendation, the Secretary of Defense established an office within the Joint Staff to act as the focal point for antiterrorism. Among other things, this office has established antiterrorism training standards for all levels of command and instituted outreach programs to collect and distribute antiterrorism lessons learned. Although DOD has long recognized the importance of having visibility over all contractor support at deployed locations, the department continues to be able to provide senior leaders and military commanders with only limited visibility over those contractors. This limited visibility continues to hinder the management and oversight of contractors in deployed locations, including Iraq. 
In the absence of DOD-wide efforts to address these issues, some DOD components at deployed locations and in the United States have taken their own steps to improve visibility. DOD continues to lack the ability to provide military commanders and senior leaders with visibility over all contractor support at deployed locations, including the range of services being provided to U.S. forces and the number of contractor personnel at deployed locations. Although most of the contract oversight personnel we met with had visibility over the individual contracts for which they were directly responsible, including the number and location of contractor personnel, this information was not aggregated by DOD and was not provided to commanders at higher levels. Many officials responsible for managing and overseeing contractors that support deployed forces at various levels of command in Iraq told us there was no office, database, or other source that could provide them consolidated information on all contractor support at a deployed location. The following are examples of what commanders in Iraq told us: senior commanders within Multi-National Force-Iraq and Multi-National Corps-Iraq told us they had no source to go to that could provide them with a comprehensive summary of contractor services currently being provided to U.S. forces in Iraq; the base commander of Logistical Support Area Anaconda, a major logistics hub in Iraq with about 10,000 contractor personnel, told us he had only limited visibility over the number of contractors at his installation and the support they were providing; and a battalion commander from a Stryker brigade told us he was unable to determine the number of contractor-provided interpreters available to support his unit. 
Moreover, we found that major commands and higher headquarters do not maintain a source of information that could provide improved visibility over all contractors at deployed locations, as illustrated by the following examples: the Army Materiel Command and Air Force Materiel Command were unable to readily provide us with comprehensive information on the number of contractors they were using at deployed locations or the services those contractors were providing to U.S. forces; contracting officials at U.S. Central Command told us that they do not maintain centralized information on the contractor support within their area of operation; and Air Force headquarters officials determined the Air Force had about 500 civilians deployed to Iraq but could not readily identify how many of these individuals were contractor personnel as opposed to DOD civilians. DOD has long recognized the importance of providing visibility over contractors supporting deployed forces. As discussed in our 2003 report, DOD has required since 1990 that DOD components maintain visibility over contractors providing essential services to U.S. forces and the services they provide. However, in 2003 we reported that DOD components were not meeting this requirement and that they lacked visibility over all contractor support to forces deployed to the Balkans and Southwest Asia. Further, a 2004 Joint Staff review of contract management at deployed locations found commanders continued to have insufficient visibility over contractors operating in deployed locations and recommended that DOD provide the combatant commander the capability to maintain visibility over contractor personnel and contract capabilities. In addition, DOD has been unable to provide Congress with information on the totality of contractor support in Iraq, including numbers of contractors and the costs of the services they provide. 
Limited visibility over contractor support poses a variety of problems for military commanders and senior leaders responsible for contract management and oversight in deployed locations such as Iraq. With limited visibility over contractors, military commanders and other senior leaders cannot develop a complete picture of the extent to which they rely on contractors as an asset to support their operations. Further, they cannot build this reliance on contractors into their assessments of risks associated with the potential loss of essential services provided by contractors, an issue we discussed extensively in our 2003 report. We spoke with several senior military leaders in Iraq who told us their lack of visibility over contractor support in Iraq hindered their ability to incorporate contractors into their planning efforts. For example, a general officer responsible for logistics for Multi-National Force-Iraq told us that acquiring visibility over all contractor support in Iraq was a top priority because Multi-National Force-Iraq did not have the information needed to include the presence of contractors in its planning activities. A number of Multi-National Force-Iraq officials told us that when they began to develop plans to consolidate forward operating bases in Iraq, they discovered that while they could determine the number and type of military units on those bases, they had no means of obtaining similar information about contractors, including the number of contractor personnel on each base and the support the military was providing them. According to a senior Multi-National Force-Iraq official, without this information, Multi-National Force-Iraq ran the risk of overbuilding or underbuilding the capacity of the consolidated bases to accommodate the number of individuals expected to be stationed there. 
Because Multi-National Force-Iraq lacked a source to draw upon for information regarding the extent of contractor support in Iraq, Multi-National Force-Iraq issued a fragmentary order in April 2006 to base commanders in Iraq to conduct a census of contractors residing on the installations. However, at the time of our review, this effort had yielded only partial results, which an Army official familiar with the census effort told us would not meet the initial goals of the fragmentary order. Limited visibility over contractors and the services they provide at a deployed location can also hinder military commanders’ abilities to fully understand the impact that their decisions can have on their installations. For example, when commanders make decisions to restrict access of host country nationals to an installation, this can result in the loss of some contractor-provided services, such as construction or the delivery of supplies that may be dependent upon the use of host country nationals. Similarly, one of the concerns contractors in Kuwait and Iraq most frequently related to us was the impact that base commanders’ decisions to change policies regarding badging requirements and other base access procedures had on their ability to provide services to those bases. Decisions affecting such functions as force protection and base operations support also rely on commanders having an accurate picture of the contractor assets they have in their area of operations and an understanding of the number of contractor personnel they have to support. As we reported in 2003, military commanders require visibility over contractor support at deployed locations because they are responsible for all the people in their area of operations, including contractor personnel. Given the security situation in Iraq, knowledge of who is on their installation helps commanders account for all individuals in the event of a mortar attack or other hostile action. 
For example, Army officials assisting the movement of contractors into and out of Iraq described to us the difficulties DOD faced determining the identity of a contractor who was taken hostage and then killed by the insurgency in Iraq. We also met with several military commanders who told us that a lack of visibility over contractors on their installations complicated their efforts to provide contractors with support such as food and housing. Several officials told us they regularly had contractor personnel unexpectedly show up in Iraq and request support, but were unable to verify what DOD-provided support those contractor personnel were entitled to. As a result, DOD and its components may be providing unauthorized support to contractors. For example, at one of the joint contingency contract management working group sessions GAO attended, an Army Materiel Command official noted that the Army estimates that it loses about $43 million every year providing free meals to contractor employees who are also receiving a per diem allowance for food. In spite of DOD’s continued lack of capability to provide commanders with the information they need regarding the extent of contractor support at a deployed location, we found that some steps have been taken to provide commanders with improved visibility over the contracts they were directly responsible for. For example: In early 2006, the commanding general of Multi-National Force-Iraq ordered his major subordinate commands in Iraq to provide a head count of non-DOD civilians on their installations, including contractor personnel for contracts exceeding $5 million per year. The information, captured in a database managed by Multi-National Force-Iraq, was needed to provide the general with a current count of all tenant organizations operating from the various forward operating bases in Iraq. 
Multi-National Corps-Iraq started a similar effort in February 2006 to provide the commanding general with detailed contract management information on recurring services contracts such as for the maintenance of certain aircraft, communications support, and power generation. Also in 2006, the corps support command at Logistical Support Area Anaconda created a database to track recurring services contracts that support the installation. While these individual efforts improved visibility over a specific set of contractors, we found that no organization within DOD or its components has attempted to consolidate these individual sources of information that could help improve its visibility over all contractor support in Iraq. Several DOD officials in Iraq familiar with the individual efforts described above told us that while a number of databases have been created to capture information on contractors in theater, the information is not aggregated at a higher level because no one is responsible for consolidating this information. In most cases, these efforts were initiated by individual commanders and there is no assurance that they would continue when new units with new commanders deployed to replace them in the future. Individual contractors we spoke with had excellent visibility over the number and location of their employees at specific deployed locations. For example, the contractors could readily provide us with information on the number of employees they had in Iraq in support of deployed U.S. forces and the specific installation to which those employees were deployed. This information was typically reported on a daily or weekly basis from the contractor in Iraq to their corporate headquarters in the United States or elsewhere, as well as to the U.S. government agency that had awarded the contract. However, we found this information was not centrally collected. 
As discussed previously, there are several hundred contractor firms that support deployed forces, including in Iraq, and contracts are awarded by numerous contracting offices both within DOD and from other U.S. government agencies. With such a large and diverse pool of contractors at deployed locations, it is impractical for individual commanders to obtain this information from contractors on their own. For example, several military officials involved in efforts to improve visibility over contractors in Iraq told us that while they were generally able to obtain information from contractors with large numbers of employees, such as the LOGCAP contractor, it was extremely difficult to identify as well as collect information from all the numerous smaller contractors, who sometimes consisted of only one or two individuals. As discussed above, in October 2005 DOD issued DOD Instruction 3020.41, which included a requirement that DOD develop or designate a joint database to maintain by-name accountability of contractors deploying with the force and a summary of the services or capabilities they provide. Currently, no such DOD-wide database exists. However, Army Materiel Command and the Assistant Secretary of the Army for Acquisition, Logistics, and Technology have taken the initiative to develop a database that could provide improved visibility over all contractors supporting U.S. forces in deployed locations and enable military commanders to incorporate contractor support into their planning efforts. According to Army officials, this database is intended to collect information not only on the overall number of contractors supporting forces in a deployed location but also on the organization or system they are supporting and other contract information that could be used by commanders to better manage contractors at deployed locations. 
The Army’s goal is to require that all contractors supporting deployed forces use this database, and in turn, create the central source of information to provide commanders with visibility over all contractor support at deployed locations. However, as of the time of our review, the Army was still in the process of implementing the database, and it is uncertain when the process will be completed. For example, we found that only a few contractors were using the database, and Army officials acknowledged it does not currently capture all contractors providing support at deployed locations. According to Army and Joint Staff officials familiar with these efforts, it is likely that DOD will designate this database as the joint database for contractor visibility as required by DOD Instruction 3020.41. However, a number of issues must first be resolved. For example, efforts are still underway to get all the services to agree to enter their data into this database. Further, there is disagreement within the Army staff regarding whether the Deputy Chief of Staff responsible for logistics or personnel has responsibility for the contractor visibility database. Several officials we met with who are involved with these efforts told us that while the Army Materiel Command has made significant progress in developing the database, ultimate resolution of these issues will require action by the Office of the Secretary of Defense because the Army Materiel Command lacks the necessary directive authority to resolve them on its own. Having the right people with the right skills to oversee contractor performance is critical to ensuring that DOD receives the best value for the billions of dollars spent each year on contractor-provided services supporting forces deployed to Iraq and elsewhere. However, an inadequate number of personnel to oversee and manage contracts that support deployed U.S. 
forces is another long-standing problem that continues to hinder DOD’s management and oversight of contractors in Iraq. In 2004, we reported that DOD did not always have enough contract oversight personnel in place to manage and oversee its logistics support contracts such as LOGCAP. In addition, in 2005 we reported in our High-Risk Series that inadequate staffing contributed to contract management challenges in Iraq. While we could find no DOD guidelines on the appropriate number of personnel needed to oversee and manage DOD contracts at a deployed location, several contract oversight personnel told us DOD does not have adequate personnel at deployed locations to effectively oversee and manage contractors, as illustrated by the following examples: An Army Contracting Agency official told us that due to a downsizing of its overall contracting force and the need to balance that force among multiple competing needs, the Army is struggling to find the capacity and expertise to provide the contracting support needed in Iraq. An official with the LOGCAP Program Office told us that, as the United States was preparing to commence Operation Iraqi Freedom in 2003, the office did not prepare to hire additional budget analysts and legal personnel in anticipation of an increased use of LOGCAP services. According to the official, had adequate staffing been in place early on, the Army could have realized substantial savings through more effective reviews of the increasing volume of LOGCAP requirements. Officials responsible for contracting with Multi-National Force-Iraq told us they did not have enough contract oversight personnel and quality assurance representatives to allow Multi-National Force-Iraq to award more sustainment contracts for base operations support in Iraq. 
The contracting officer’s representative for a contract providing linguist support in Iraq told us that he had only one part-time assistant, limiting his ability to manage and oversee the contractor personnel for whom he was responsible. As he observed, he had a battalion’s worth of people with a battalion’s worth of problems but lacked the equivalent of a battalion’s staff to deal with those problems.

We also found that a number of organizational and personnel policies of various DOD agencies responsible for contract management and oversight contributed to inadequate numbers of personnel to oversee and manage contracts that support deployed forces. The following are some examples:

A 2004 Joint Staff review of the Defense Contract Management Agency’s responsiveness and readiness to support deployed forces in the event of war found that the agency had not programmed adequate resources to support current and future contingency contract requirements, compromising its readiness to execute its mission. The review further found that Defense Contract Management Agency manpower shortages were aggravated by internal policies that limit the availability of personnel to execute those missions.

During its 2003 deployment to Iraq, a unit with the 4th Infantry Division reported that the divisional contracting structure did not adequately support the large volume of transactions that were needed in an austere environment. For example, the unit reported problems with the quality of services provided by host country nationals, which were exacerbated by a lack of contracting officer’s representatives to properly oversee the performance of contracting terms.

An official with the Army Contracting Agency, Southwest Asia told us that as of January 2006 the agency had only 18 of the 33 staff it was authorized and that this number of personnel was not enough to support the agency’s mission.
In contrast, he told us that other commands, such as Army Contracting Agency, Korea, were authorized more than 130 staff even though they were responsible for significantly fewer obligated funds. Without adequate contract oversight personnel in place to monitor its many contracts in deployed locations such as Iraq, DOD may not be able to obtain reasonable assurance that contractors are meeting their contract requirements efficiently and effectively at each location. For example, a Defense Contract Management Agency official responsible for overseeing the LOGCAP contractor’s performance at 27 installations in Iraq told us he was unable to personally visit all 27 locations himself during his 6-month tour in Iraq. As a result, he was unable to determine the extent to which the contractor was meeting the contract’s requirements at each of those 27 sites. Moreover, he only had one quality assurance representative to assist him. The official told us that in order to properly oversee this contract, he should have had at least three quality assurance representatives assisting him. The contracting officer’s representative for an intelligence support contract in Iraq told us he was also unable to visit all of the locations that he was responsible for overseeing. At the locations he did visit he was able to work with the contractor to improve its efficiency. However, because he was not able to visit all of the locations at which the contractor provided services in Iraq he was unable to duplicate those efficiencies at all of the locations in Iraq where the contractor provided support. As we previously reported in 2000 and 2004, when contract oversight personnel are able to review the types and levels of services provided by contractors for both economy and efficiency, savings can be realized. 
Conversely, without adequate contract oversight personnel in place to manage and oversee contractors, DOD continues to be at risk of being unable to identify and correct poor contractor performance in a timely manner. The inability of contract oversight personnel to visit all locations they are responsible for can also create problems for units that are facing difficulties resolving contractor performance issues at those locations. For example, officials from a brigade support battalion told us they had several concerns with the performance of a contractor that provided maintenance for the brigade’s mine-clearing equipment. These concerns included delays in obtaining spare parts and a disagreement over the contractor’s obligation to provide support in more austere locations in Iraq. According to the officials, their efforts to resolve these problems in a timely manner were hindered because the contracting officer’s representative was located in Baghdad while the unit was stationed in western Iraq. In other instances, some contract oversight personnel may not even reside within the theater of operations. For example, we found the Defense Contract Management Agency’s legal personnel responsible for LOGCAP in Iraq were stationed in Germany, while other LOGCAP contract oversight personnel were stationed in the United States. According to a senior Defense Contract Management Agency official in Iraq, relying on support from contract oversight personnel outside the theater of operations may not meet the needs of military commanders in Iraq who are operating under the demands and higher operational tempo of a contingency operation in a deployed location. Although the problems discussed above concern contract management and oversight at deployed locations, the lack of adequate contract oversight personnel is a DOD-wide problem, not limited to deployed locations. 
We first designated DOD contract management as a high-risk area in 1992, and it remains so today due, in part, to concerns over the adequacy of the department’s acquisition workforce, including contract oversight personnel. We subsequently reported that although DOD had made progress in laying a foundation for reshaping its acquisition workforce, it did not yet have a comprehensive strategic workforce plan needed to guide its efforts. Yet having too few contract oversight personnel presents unique difficulties at deployed locations given the more demanding contracting environment compared to the United States. For example, the deputy commander of a corps support command told us that contracting officer’s representatives have more responsibilities at deployed locations than in the United States. Similarly, several officials responsible for contract management and oversight told us that the operational tempo for contract oversight personnel is significantly higher at deployed locations than in the United States.

Despite the fact that DOD and its components face many of the same types of difficulties working with contractors in Iraq that they faced in prior military operations, DOD still does not systematically ensure that institutional knowledge gained from prior experience is shared with military personnel at deployed locations. We have previously reported that DOD could benefit from systematically collecting and sharing its institutional knowledge across a wide range of issues to help ensure that it is factored into planning, work processes, and other activities. With respect to DOD’s use of contractors to support deployed forces, in 1997 we recommended that DOD incorporate lessons learned from the Bosnia peacekeeping mission and other operations in the Balkans to improve the efficiency and effectiveness of the Army’s LOGCAP contract—a recommendation DOD agreed with.
Similarly, in 2004 we recommended that DOD implement a departmentwide lessons-learned program to capture the experience of military units and others that have used logistics support contracts—a recommendation DOD also agreed with. In its responses to the recommendations made in our 1997 and 2004 reports, DOD stated it would investigate how best to establish procedures to capture lessons learned on the use of contracts to support deployed forces and would make this information available DOD-wide. However, as of 2006, DOD still had not established any procedures to systematically collect and share DOD’s lessons learned on the use of contracts to support deployed forces. Moreover, we found no organization within DOD or its components responsible for developing those procedures. By way of comparison, we have previously reported that when DOD created a Joint Staff office responsible for acting as a focal point for the department’s antiterrorism efforts, that office was able to develop outreach programs to collect and share antiterrorism lessons learned and best practices. While some DOD organizations such as the Joint Forces Command’s Joint Center for Operational Analysis and the Army’s Center for Army Lessons Learned are responsible for collecting lessons learned from recent military operations, we found that neither organization was actively collecting lessons learned on the use of contractor support in Iraq. Similarly, Army guidance requires that customers receiving services under LOGCAP collect and share lessons learned, as appropriate. However, we found no procedures in place to ensure units follow this guidance. Further, our review of historical records and after-action reports from military units that deployed to Iraq found that while units made some observations on the use of contractor support, DOD had done little to collect those lessons learned or make them available to other units that were preparing to deploy.
Moreover, in some instances, officials from units we met with told us that their current procedures actually preclude the collection and sharing of institutional knowledge, such as lessons learned. For example, officials with the 3rd Infantry Division, as well as a corps support group that deployed to Iraq, told us that their computers were wiped clean and the information archived before they redeployed to the United States, which hindered opportunities for sharing lessons learned with incoming units. When lessons learned are not collected and shared, DOD and its components run the risk of repeating past mistakes and being unable to build on the efficiencies and effectiveness others have developed during past operations that involved contractor support. For example, the deputy commander of a corps support command responsible for much of the contractor-provided logistics support in Iraq told us that without ensuring that lessons learned are shared as units rotate into and out of Iraq, each new unit essentially starts at ground zero, creating a number of difficulties until they familiarize themselves with their roles and responsibilities. Similarly, lessons learned using logistics support contracts in the Balkans were not easily accessible to military commanders and other individuals responsible for contract oversight and management in Iraq, an issue we also identified in 2004. For example, during our visit to Iraq we found that a guidebook developed by U.S. Army, Europe on the use of a logistical support contract almost identical to LOGCAP for operations in the Balkans was not made available to military commanders in Iraq until mid-2006. According to one official, U.S. Army Central Command was aware of this guidebook as early as late 2003; however, it was not made available to commanders in Iraq until mid-2006.
According to the official, if the guidebook had been made available sooner to commanders in Iraq it could have helped better familiarize them with the LOGCAP contract and build on efficiencies U.S. Army, Europe had identified. Similarly, U.S. Army, Europe included contract familiarization with its logistical support contractor in mission rehearsal exercises of units preparing to deploy to the Balkans. However, we found no similar effort had been made to include familiarization with LOGCAP in the mission rehearsal exercises of units preparing to deploy to Iraq. Failure to share other kinds of institutional knowledge on the use of contractor support to deployed forces can also impact military operations or result in confusion between the military and contractors. Several officials we met with from combat units that deployed to Iraq as well as contractors supporting U.S. forces in Southwest Asia told us that redeploying units do not always share important information with new units that are rotating into theater, including information on contractors providing support to U.S. forces at the deployed location. Such information could include the number of contractors and the services they provide a unit or installation, existing base access procedures, and other policies and procedures that have been developed over time. In addition, representatives from several contractor firms we met with told us that there can be confusion when new units rotate into Iraq regarding such things as the procedures contractors should follow to access an installation or in dealing with contractors. In some instances, such confusion can place either contractors or the military at risk. For example, a contractor providing transportation services in Iraq told us that a unit responsible for providing convoy security that had just deployed to Iraq had not been informed of the existing procedures for responding to incidents involving the contractor. 
The existing procedures required the unit to remain with the contractor until its equipment could be recovered. However, following an actual incident in which a vehicle rolled over, there was confusion between the contractor and the unit as to what the required actions were.

DOD does not routinely incorporate information about contractor support to deployed forces in its pre-deployment training of military personnel, despite the long-standing recognition of the need to provide such information. Military commanders continue to deploy with limited or no pre-deployment training on the contractor support they will rely on or on their roles and responsibilities with regard to managing those contractors. Similarly, contract oversight personnel typically deploy without prior training on their contract management and oversight responsibilities and are often only assigned those responsibilities once arriving at a deployed location. Many DOD and service officials at various levels of command told us that ultimately the key to better preparing military personnel to effectively work with contractors in a deployed location is to integrate information on the use of contractors into DOD’s institutional training activities. We have been discussing the need for better pre-deployment training on the use of contractors to support deployed forces since the mid-1990s. Specifically, we reported that better training was needed because military commanders are responsible for incorporating the use of contractor support while planning operations. In addition, as a customer for contractor-provided services, military commanders are responsible for identifying and validating requirements to be addressed by the contractor as well as evaluating the contractor’s performance and ensuring the contract is used in an economical and efficient manner.
Further, better training was needed for contract oversight personnel, including contracting officer’s representatives, because they monitor the contractor’s performance for the contracting officer and act as the interface between military commanders and contractors. Accordingly, we have made several recommendations that DOD improve its training. Some of our prior recommendations highlighted the need for improved training of military personnel on the use of contractor support at deployed locations, while others focused on training regarding specific contracts, such as LOGCAP. In each instance, DOD concurred with our recommendation. Figure 2 shows the recommendations we have made since 1997. In addition, according to DOD policy, personnel should receive timely and effective training to ensure they have the knowledge and other tools necessary to accomplish their missions. For example, a March 2006 instruction on joint training policy issued by the Chairman of the Joint Chiefs of Staff stated in part that DOD components are to ensure their personnel and organizations are trained to meet combatant commanders’ requirements prior to deploying for operations. It further identified management of contractors supporting deployed forces as a training issue to be focused on. Nevertheless, we continue to find little evidence that improvements have been made in terms of how DOD and its components train military commanders and contract oversight personnel on the use of contractors to support deployed forces prior to their deployment. As we have previously reported, limited or no pre-deployment training on the use of contractor support can cause a variety of problems for military commanders in a deployed location. With limited or no pre-deployment training on the extent of contractor support to deployed forces, military commanders may not be able to adequately plan for the use of those contractors in a deployed location. 
Several military commanders—including the major general responsible for logistics for Multi-National Force-Iraq, the deputy commander of a corps support command, a base commander, and commanders of combat units deployed to Iraq—told us that their pre-deployment training did not provide them with sufficient information regarding the extent of contractor support they would be relying on in Iraq. Although some of these officials were aware of large contracts such as LOGCAP, almost all of them told us they were surprised by the large number of contractors they dealt with in Iraq and the variety of services that contractors provided. As a result, they could not incorporate the use of contractors into their planning efforts until after they arrived in Iraq and acquired a more complete understanding of the broad range of services provided by contractors. Similarly, several commanders of combat units that deployed to Iraq told us their pre-deployment training included limited or no information on the contractor-provided services they would be relying on or the extent to which they would have to provide personnel to escort contractor personnel. They were therefore unable to integrate the need to provide on-base escorts for third country and host country nationals, convoy security, and other force protection support to contractors into their planning efforts. As a result, the commanders were surprised by the substantial portion of their personnel they had to allocate to fulfill these missions, personnel they had expected to be available to perform other functions. Limited or no pre-deployment training for military commanders on the use of contractor support to deployed forces can also result in confusion regarding their roles and responsibilities in managing and overseeing contractors.
As discussed above, military commanders are responsible for incorporating the use of contractor support in their operations planning and, in some instances, for evaluating a contractor’s performance. However, many officials responsible for contract management and oversight in Iraq told us military commanders who deployed to Iraq received little or no training on the use of contractors prior to their deployment, leading to confusion over their roles and responsibilities. For example:

Staff officers with the 3rd Infantry Division told us they believed the division was poorly trained to integrate and work with contractors prior to its deployment. According to these officers, this inadequate training resulted in confusion among the officers over the command and control of contractors.

Army Field Support Command officials told us many commanders voiced concerns that they did not want to work with contractors and did not want contractors in their area of operations. According to the officials, these commanders did not understand the extent of contractor support in Iraq and how to integrate LOGCAP support into their own planning efforts. The officials attributed this confusion to a lack of pre-deployment training on the services LOGCAP provided, how it was used, and commanders’ roles and responsibilities in managing and overseeing the LOGCAP contractor.

Several Defense Contract Management Agency officials told us that although they were only responsible for managing and overseeing the LOGCAP contractor, military commanders came to them for all contracting questions because they had not been trained on how to work with contractors and did not realize that different contractors have different contract managers.
In addition, some contractors told us how crucial it was that commanders receive training in their roles and responsibilities regarding contractors prior to their deployment because commanders sometimes direct contractors to perform activities that may be outside the scope of work of the contract, even though they do not have the authority to do so. We found some instances where a lack of training raised concerns over the potential for military commanders to direct contractors to perform work outside the scope of the contract. For example, one contractor told us he was instructed by a military commander to release equipment the contractor was maintaining even though this action was not within the scope of the contract. The issue ultimately had to be resolved by the contracting officer. As another example, a battalion commander deployed to Iraq told us that although he was pleased with the performance of the contractors supporting him, he did not know what was required of the contractor under the contract. Without this information, he ran the risk of directing the contractor to perform work beyond what was called for in the contract. As Army guidance makes clear, when military commanders try to direct contractors to perform activities outside the scope of the contract, this can cause the government to incur additional charges because modifications would need to be made to the contract and, in some cases, the direction may potentially result in a violation of competition requirements. We found that many military commanders we spoke with had little or no prior exposure to contractor support issues in deployed locations, exacerbating the problems discussed above. Many of the commanders we met with from combat units deployed to Iraq told us this was their first experience working with contractors and that they had had little or no prior training or exposure to contract management.
According to officials responsible for contract management and oversight in Iraq as well as several contractor representatives we met with, it can take newly deployed personnel, including military commanders, several weeks to develop the knowledge needed to effectively work with contractors in a deployed location. For complex contracts such as LOGCAP, these officials told us that it can take substantially longer than that. This can result in gaps in oversight as newly deployed personnel familiarize themselves with their roles and responsibilities in managing and overseeing contracts. We also found that contract oversight personnel such as contracting officer’s representatives continue to receive limited or no pre-deployment training regarding their roles and responsibilities in monitoring contractor performance. Although DOD has created an online training course for contracting officer’s representatives, very few of the contracting officer’s representatives we met with had taken the course prior to deploying. In most cases, individuals deployed without knowing that they would be assigned the role of a contracting officer’s representative until after they arrived at the deployed location, precluding their ability to take the course. Moreover, some of the individuals who took the course once deployed expressed concerns that the training did not provide them with the knowledge and other tools they needed to effectively monitor contractor performance. Other officials told us it was difficult to set aside the time necessary to complete the training once they arrived in Iraq. DOD’s acquisition regulations require that contracting officer’s representatives be qualified through training and experience commensurate with the responsibilities delegated to them.
However, as was the case with military commanders, we found that many of the contract oversight personnel we spoke with had little or no exposure to contractor support issues prior to their deployment, which exacerbated the problems they faced given the limited pre-deployment training. We found several instances where the failure to identify and train contract oversight personnel prior to their deployment hindered the ability of those individuals to effectively manage and oversee contractors in Iraq, in some cases negatively affecting unit morale or military operations. The following are examples of what we found:

The contracting officer’s representative for a major contract providing intelligence support to U.S. forces in Iraq had not been informed of his responsibilities in managing and overseeing this contract prior to his deployment. As a result, he received no training on his contract oversight responsibilities prior to deploying. Moreover, he had no previous experience working with contractors. The official told us that he found little value in DOD’s online training course and believed this training did not adequately prepare him to execute his contract oversight responsibilities, such as reviewing invoices submitted by the contractor.

According to officials from a corps support group deployed to Iraq, the group deployed with 95 Army cooks even though their meals were to be provided by LOGCAP. However, prior to deploying, the unit had neither identified nor trained any personnel to serve as contracting officer’s representatives for the LOGCAP contract. According to unit officials, they experienced numerous problems with regard to the quality of food services provided by LOGCAP, which impacted unit morale, until individuals from the unit were assigned as contracting officer’s representatives to work with the contractor to improve the quality of its services.
According to officials with the Army’s Intelligence and Security Command, quality assurance representatives responsible for assessing the performance of a linguist support contractor did not speak Arabic. As a result, it was unclear how they could assess the proficiency of the linguists. Some units that used interpreters under this contract told us they experienced cases where they discovered that their interpreters were not correctly translating conversations.

Intelligence officials with a Stryker brigade told us a lack of contractor management training hindered their ability to resolve staffing issues with a contractor conducting background screenings of third country nationals and host country nationals. Shortages of contractor-provided screeners forced the brigade to use their own intelligence personnel to conduct these screenings. As a result, those personnel were not available to carry out their primary intelligence-gathering responsibilities.

The frequent rotations of contract oversight personnel, who can deploy for as little as 3-4 months, can also hinder DOD’s management and oversight of contractors in a deployed location. Several contractors told us the frequent rotation of contracting officer’s representatives was frustrating because the contractors continually had to adjust to the varying extent of knowledge those personnel had regarding the contractor support they were responsible for. Moreover, several contractors told us that frequent rotations meant that by the time contract oversight personnel had familiarized themselves with their responsibilities they were preparing to leave the country. If these personnel were replaced by individuals who were not familiar with the contract or had not received training in their roles and responsibilities, problems could occur.
For example, a contractor providing food services in Iraq told us that while the contract specified a 21-day menu rotation, some of the newly deployed contracting officer’s representatives assigned to monitor the contract directed the contractor to modify the menu rotation, which affected the contractor’s inventory of food stores and ran the risk of directing the contractor to perform work outside the scope of the contract. Many contractors told us that a consistent level of pre-deployment training would help to ensure some continuity as individuals rotate into and out of deployed locations. In addition, several contractors, as well as military officials responsible for contract management and oversight, told us that the length of deployment for contracting officer’s representatives is too short and that by the time individuals have acquired the knowledge to effectively monitor a contract, they are preparing to redeploy. For example, senior Defense Contract Management Agency officials told us that the current 6-month deployments of contract oversight personnel monitoring the LOGCAP contract in Iraq were too short to make the most efficient use of personnel who had developed the expertise to effectively manage that contract. As a result, senior Defense Contract Management Agency officials told us they are considering extending the length of deployment for their contract oversight personnel assigned to monitor the LOGCAP contract from 6 months to 1 year. We found that contract oversight personnel who had received training in their roles and responsibilities prior to their deployment appeared better prepared to manage and oversee contractors once they arrived at a deployed location. For example, the program office for the Army’s C-12 aircraft maintenance contract developed a 3-day training course that all contracting officer’s representatives for this contract are required to take prior to deploying. 
This training provides contracting officer’s representatives with information regarding recurring reporting requirements, processes that should be followed to resolve disputes with the contractor, and the variety of technical and administrative requirements these individuals should be familiar with to monitor the contractor’s performance. Officials familiar with this training course told us that they found the course to be very helpful in providing contracting officer’s representatives with the knowledge and tools necessary to effectively execute their responsibilities. As a result, the program office developed a similar course for another of its aviation maintenance contracts. Similarly, Defense Contract Management Agency officials responsible for overseeing LOGCAP told us they are developing a standardized process for evaluating the contractor’s performance in Iraq, which includes ensuring units deploying to Iraq identify and train contract oversight personnel for the LOGCAP contract.

Our review of DOD and service guidance, policies, and doctrine found no existing criteria or standards to ensure that all military units incorporate information regarding contractor support to deployed forces in their pre-deployment training. According to an official with the Army’s Training and Doctrine Command, while some steps have been taken to create elective courses on issues related to contractor support to deployed forces, it is important that all DOD components incorporate this information into their existing institutional training so that military personnel who may interact with contractors at deployed locations have a basic awareness of contractor support issues prior to deploying. Moreover, most of the military commanders and officials responsible for contract management and oversight we met with in deployed locations told us that better training on the use of contractors to support deployed forces should be incorporated into how DOD prepares its personnel to deploy.
Some officials believed that additional training should address the specific roles and responsibilities of military personnel responsible for managing and overseeing contractors in deployed locations. For example, the base commander of Logistical Support Area Anaconda told us there should be a weeklong pre-deployment course for all base commanders specific to contractor support to deployed forces. Similarly, the commander of a unit operating Army C-12 aircraft stated that the contracting officer’s representative training developed by the program office, as discussed above, should not only be required for all contract oversight personnel but also for military commanders of units operating the aircraft. Other officials believed that their pre-deployment preparations, such as mission rehearsal exercises, should incorporate the role that contractors have in supporting U.S. forces in a deployed location. However, we found that most units we met with did not incorporate the role of contractor support into their mission rehearsal exercises. Moreover, we found no existing DOD requirement that mission rehearsal exercises should include such information, even for key contracts such as LOGCAP. Several officials told us that including contractors in these exercises could enable military commanders to better plan and prepare for the use of contractor support prior to deploying. For example, when a Stryker brigade held its training exercise prior to deploying to Iraq, the brigade commander was surprised at the number of contractors embedded with the brigade. Initially, he wanted to bar all civilians from the exercise because he did not realize how extensively the brigade relied on contractor support. By including contractors in the exercise, their critical role was made clear early on and the brigade’s commanders were better positioned to understand their contract management roles and responsibilities prior to deploying to Iraq. 
In addition, officials responsible for the LOGCAP contract told us they were undertaking efforts to incorporate basic information on how to work with LOGCAP into the mission rehearsal exercises of units deploying to Iraq. Many officials we met with in the United States and at deployed locations told us that the key to better preparing military commanders and contract oversight personnel for their contract management and oversight roles at deployed locations ultimately lies in including training on the use of contractors as part of professional military education. Professional military education is designed to provide officers with the necessary skills and knowledge to function effectively and to assume additional responsibilities. However, several officials told us that the need to educate military personnel on the use of contractors is something the military has not yet embraced. As corps support command officials observed, the military does a good job training logisticians to be infantrymen, but does not require infantrymen to have any familiarity with contracting or the roles and responsibilities they may have in working with contractors at a deployed location. DOD’s reliance on contractor support to deployed forces has grown significantly since the 1991 Gulf War, and this reliance continues to grow. In Iraq and other deployed locations, contractors provide billions of dollars’ worth of services each year and play a role in most aspects of military operations—from traditional support roles such as feeding soldiers and maintaining equipment to providing interpreters who accompany soldiers on patrols and augmenting intelligence analysis. The magnitude and importance of contractor support demands that DOD ensure military personnel have the guidance, resources, and training to effectively monitor contractor performance at deployed locations. 
In prior reports, we made a number of recommendations aimed at strengthening DOD’s management and oversight of contractor support at deployed locations, and the department has agreed to implement many of those recommendations. However, DOD has failed to implement some of our key recommendations, in part because it has not yet institutionally embraced the need to change the way it prepares military personnel to work with contractors in deployed locations. While we found no contractor performance problems that led to mission failure, problems with management and oversight of contractors have negatively impacted military operations and unit morale and hindered DOD’s ability to obtain reasonable assurance that contractors are effectively meeting their contract requirements in the most cost-efficient manner. The difficulties DOD faces regarding contractor support to deployed forces are exacerbated by the fragmented nature of contracting, with multiple agencies in multiple locations able to award and manage contracts that may all provide services to a particular military unit or installation. However, DOD’s actions to date have largely been driven by individual efforts to resolve particular issues at particular moments. A lack of clear accountability and authority within the department to coordinate these actions has hindered DOD’s ability to systematically address its difficulties regarding contractor support—difficulties that currently affect military commanders in Iraq and other deployed locations and will likely affect commanders in future operations unless DOD institutionally addresses the problems we have identified. When faced with similar challenges regarding the department’s antiterrorism efforts, DOD designated an office within the Joint Staff to serve as a single focal point to coordinate its efforts, which helped improve its protection of military forces stationed overseas. 
Moreover, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics has established dedicated organizations to coordinate efforts to address departmentwide problems in areas such as supply chain management. Unless a similar, coordinated, departmentwide effort is made to address long-standing contract management and oversight problems at deployed locations, DOD and its components will continue to be at risk of being unable to ensure that contractors are providing the services they are required to provide in an effective and efficient manner. To improve DOD’s management and oversight of contractors at deployed locations, we are recommending that the Secretary of Defense appoint a focal point within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, at a sufficiently senior level and with the appropriate resources, dedicated to leading DOD’s efforts to improve contract management and oversight. The entity that functions as the focal point would act as an advocate within the department for issues related to the use of contractors to support deployed forces, serve as the principal advisor for establishing relevant policy and guidance to DOD components, and be responsible for carrying out the following six actions:
- oversee development of the joint database to provide visibility over all contractor support to deployed forces, including a summary of services or capabilities provided and by-name accountability of contractors;
- develop a strategy for DOD to incorporate the unique difficulties of contract management and oversight at deployed locations into DOD’s ongoing efforts to address concerns about the adequacy of its acquisition workforce;
- lead and coordinate the development of a departmentwide lessons-learned program that will capture the experiences of units that have deployed to locations with contractor support and develop a strategy to apply this institutional knowledge to ongoing and future operations;
- develop the requirement that DOD components, combatant commanders, and deploying units (1) ensure military commanders have access to key information on contractor support, including the scope and scale of contractor support they will rely on and the roles and responsibilities of commanders in the contract management and oversight process, (2) incorporate into their pre-deployment training the need to identify and train contract oversight personnel in their roles and responsibilities, and (3) ensure mission rehearsal exercises include key contractors to increase the familiarity of units preparing to deploy with the contractor support they will rely on;
- develop training standards for the services on the integration of basic familiarity with contractor support to deployed forces into their professional military education to ensure that military commanders and other senior leaders who may deploy to locations with contractor support have the knowledge and skills needed to effectively manage contractors; and
- review the services’ efforts to meet the standards and requirements established above to ensure that training on contractor support to deployed forces is being consistently implemented by the services.
In commenting on a draft of this report, DOD concurred with our recommendation. DOD’s comments are reprinted in appendix II. DOD also provided several technical comments, which we considered and incorporated where appropriate. DOD agreed with our recommendation that the Secretary of Defense appoint a focal point within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, at a sufficiently senior level and with the appropriate resources, dedicated to leading DOD’s efforts to improve contract management and oversight. 
DOD further stated that the Deputy Under Secretary of Defense for Logistics and Materiel Readiness established the office of the Assistant Deputy Under Secretary of Defense (Program Support) on October 1, 2006 to serve as the office of primary responsibility for issues related to contractor support. However, DOD noted in its comments that the office is not yet fully staffed. While we commend the department for taking the initiative to establish this office and believe that it is appropriately located within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, it is not clear that this office would serve as the focal point dedicated to leading DOD’s efforts to improve contract management and oversight. In our recommendation, we identified several actions that such a focal point would be responsible for implementing. In concurring with those recommended actions, DOD offered additional information on the steps it intended to take in order to address the recommended actions. However, none of these steps included information on the roles and responsibilities of the office of the Assistant Deputy Under Secretary of Defense (Program Support) in implementing and overseeing these corrective actions. For example, in concurring with our recommendation that the focal point develop requirements to ensure that mission rehearsal exercises include key contractors, DOD specified corrective actions that the Joint Staff, the Defense Acquisition University, and the Office of the Secretary of Defense would take. However, it is not clear what role the office of the Assistant Deputy Under Secretary of Defense (Program Support) would have in meeting this requirement, nor is it clear that this office would be the entity responsible for ensuring the requirement is met, as stated in our recommendation. 
As noted in the report, a lack of clear accountability and authority within the department to coordinate actions intended to improve contract management and oversight has hindered DOD’s ability to systematically address its difficulties regarding contractor support in the past. We continue to believe that a single focal point with clearly defined roles and responsibilities is critical if DOD is to effectively address these long-standing problems, and we therefore encourage the department to clearly identify the roles and responsibilities of the office of the Assistant Deputy Under Secretary of Defense (Program Support) in implementing and overseeing each of the corrective actions discussed in our recommendation. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors include David A. Schmitt, Assistant Director; Vincent Balloon, Carole F. Coffey, Grace Coleman, Laura Czohara, Wesley A. Johnson, James A. Reynolds, Kevin J. Riley, and Karen Thornton. To determine the extent to which the Department of Defense (DOD) has improved its management and oversight of contractors supporting deployed forces, we met with DOD, Joint Staff, and service headquarters officials to obtain a comprehensive understanding of their efforts in addressing the issues raised in our June 2003 report. We also reviewed changes to key DOD and DOD component policies and other guidance. In some instances, guidance was not available. 
For example, guidance was not available on the appropriate number of personnel needed to monitor contractors in a deployed location. In those instances, we relied on the judgments and views of DOD officials and contract oversight personnel who had served in deployed locations as to the adequacy of staffing. We visited select DOD components and various military contracting commands in the United States based on their role and responsibility in managing and overseeing contracts that support deployed U.S. forces. Because there was no consolidated list of contractors supporting deployed forces available, we asked DOD officials at the components and commands we visited to identify, to the extent possible, the scope of contractor support to their deployed U.S. forces. We focused our efforts on contractors supporting military operations in Iraq and elsewhere in Southwest Asia because of the broad range of services contractors provide U.S. forces in support of the Global War on Terrorism. We held discussions with military commanders, staff officers, and other representatives from five Army divisions and one Marine Expeditionary Force as well as various higher headquarters and supporting commands that deployed to Iraq or elsewhere in Southwest Asia during the 2003-2006 time frame to discuss their experiences working with contractors and the challenges they faced managing and overseeing contractors in a deployed location. Specifically, we met with unit officials responsible for such functions as contracting and contract management, base operations and logistical support, and force protection and intelligence. These units were selected because, for the most part, they had recently returned from Southwest Asia and unit officials had not yet redeployed or been transferred to other locations within the United States. We also met with representatives from the Department of State and the U.S. 
Agency for International Development to discuss the extent to which they have visibility over contractors supporting their activities in Iraq. In addition, we traveled to deployed locations within Southwest Asia, including Iraq, to meet with deployed combat units and to discuss the use of contractor support to deployed forces with various military commanders, installation commanders, headquarters personnel, and other military personnel responsible for contracting and contract management at deployed locations. We met with 26 U.S. and foreign contractors who provide support to DOD in Southwest Asia to discuss a variety of contracting and contract management issues. For example, we held discussions with contractors to obtain an understanding of the types of services they provide deployed U.S. forces and the difficulties they have experienced providing those services to DOD in a deployed location. The contractors we met with reflected a wide range of services provided to deployed forces, including theater support, external support, and systems support, and represented both prime contractors and subcontractors. We visited or contacted the following organizations during our review: Defense Contract Management Agency, Alexandria, VA; Houston, TX; Defense Logistics Agency, Fort Belvoir, VA Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Washington, DC Office of the Deputy Under Secretary of Defense for Logistics and Office of the Under Secretary of Defense for Intelligence, Washington, DC Office of the Under Secretary of Defense for Personnel and Readiness, U.S. Central Command, Tampa, FL U.S. 
Joint Forces Command, Norfolk, VA Chairman, Joint Chiefs of Staff: J-3 Operations, Washington, DC J-4 Logistics, Washington, DC J-7 Operational Plans and Interoperability, Washington, DC J-8 Force Structure, Resources, and Assessment, Washington, DC Department of the Army: Headquarters, Washington, DC Office of the Deputy Chief of Staff, G-1 Personnel Office of the Deputy Chief of Staff, G-4 Logistics Army Contracting Agency, Fort McPherson, GA; Fort Drum, NY; Fort Army Materiel Command, Fort Belvoir, VA Army Aviation and Missile Command, Redstone Arsenal, AL Program Executive Office, Aviation Program Executive Office, Missiles & Space Army Field Support Command, Rock Island, IL Program Office, Logistics Civil Augmentation Program Army Communications-Electronics Command, Fort Monmouth, NJ Army Tank-automotive and Armaments Command, Warren, MI Army Intelligence and Security Command, Fort Belvoir, VA Army Training and Doctrine Command, Fort Monroe, VA Combined Armed Support Command, Fort Lee, VA Stryker Brigades, Fort Lewis, WA 3rd Brigade, Stryker Brigade Combat Team 1st Brigade, Stryker Brigade Combat Team Task Force Olympia 593rd Corps Support Group U.S. Army Central Command, Fort McPherson, GA 3rd Infantry Division, Fort Stewart, GA 703rd Brigade Support Battalion 10th Mountain Division, Fort Drum, NY Department of the Navy: Headquarters, Washington, DC Office of the Deputy Assistant Secretary of the Navy for Acquisition 1st Marine Expeditionary Force, Camp Pendleton, CA Department of the Air Force: Air Force Materiel Command, Wright-Patterson Air Force Base, OH Program Office, Air Force Contract Augmentation Program, Tyndall Air Department of State, Washington, DC U.S. Agency for International Development, Washington, DC CACI International, Arlington, VA Dimensions International, Inc. 
Sterling Heights, MI DUCOM, Inc., Sterling Heights, MI DynCorp International, Irving, TX General Dynamics Land Systems, Fort Lewis, WA Kellogg, Brown and Root, Houston, TX; Arlington, VA L-3 Communications Corp. L-3 Titan Linguist Operations and Technical Support, Reston, VA Lockheed Martin Missile and Fire Control, Dallas, TX Mantech International, Chantilly, VA M7 Aerospace, San Antonio, TX PWC Logistics, Kuwait Readiness Management Support, Panama City, FL SEI Group, Inc., Huntsville, AL Triple Canopy, Inc., Herndon, VA The overseas activities and contractors we visited, by country, were: Camp Victory, U.S. Military Multi-National Force-Iraq Multi-National Corps-Iraq Defense Contract Management Agency 4th Infantry Division Kellogg, Brown and Root L-3 Communications Corp. L-3 Communications ILEX Systems, Inc. L-3 Government Services, Inc. International Zone, U.S. Military Multi-National Force-Iraq Office of the Under Secretary of Defense for Acquisition, Technology, Army Corps of Engineers, Gulf Regional Division Joint Contracting Command Iraq/Afghanistan International Zone, Contractors L-3 Communications Corp. L-3 Titan Linguist Operations and Technical Support Private Security Company Association of Iraq Logistics Support Area Anaconda, U.S. Military Logistics Support Area Anaconda Garrison Command 3rd Corps Support Command Aerial Port of Debarkation operations Program Management Office, Unmanned Aerial Vehicles Logistics Support Area Anaconda, Contractors AAI Corporation DynCorp International General Atomics Aeronautical Systems General Dynamics Land Systems L-3 Communications Corp. Camp Arifjan, U.S. Military Coalition Forces Land Component Command Area Support Group, Kuwait Army Contracting Agency, Southwest Asia Army Field Support Brigade, Southwest Asia Army Materiel Command U.S. Embassy, Kuwait City Ahmadah General Trading & Contracting Co. 
British Link Kuwait Combat Support Associates Computer Sciences Corporation IAP World Services ITT Industries Kellogg, Brown and Root Kuwait & Gulf Link Transport Co. Tamimi Global Co. Kellogg, Brown and Root Prime Projects International

We conducted our review from August 2005 through October 2006 in accordance with generally accepted government auditing standards.

Military Operations: Background Screenings of Contractor Employees Supporting Deployed Forces May Lack Critical Information, but U.S. Forces Take Steps to Mitigate the Risks Contractors May Pose. GAO-06-999R. Washington, D.C.: September 22, 2006.
Rebuilding Iraq: Actions Still Needed to Improve the Use of Private Security Providers. GAO-06-865T. Washington, D.C.: June 13, 2006.
Rebuilding Iraq: Actions Needed to Improve Use of Private Security Providers. GAO-05-737. Washington, D.C.: July 28, 2005.
Interagency Contracting: Problems with DOD’s and Interior’s Orders to Support Military Operations. GAO-05-201. Washington, D.C.: April 29, 2005.
Defense Logistics: High-Level DOD Coordination Is Needed to Further Improve the Management of the Army’s LOGCAP Contract. GAO-05-328. Washington, D.C.: March 21, 2005.
Military Operations: DOD’s Extensive Use of Logistics Support Contracts Requires Strengthened Oversight. GAO-04-854. Washington, D.C.: July 19, 2004.
Military Operations: Contractors Provide Vital Services to Deployed Forces but Are Not Adequately Addressed in DOD Plans. GAO-03-695. Washington, D.C.: June 24, 2003.
Contingency Operations: Army Should Do More to Control Contract Cost in the Balkans. GAO/NSIAD-00-225. Washington, D.C.: September 29, 2000.
Contingency Operations: Opportunities to Improve the Logistics Civil Augmentation Program. GAO/NSIAD-97-63. Washington, D.C.: February 11, 1997.
Prior GAO reports have identified problems with the Department of Defense's (DOD) management and oversight of contractors supporting deployed forces. GAO issued its first comprehensive report examining these problems in June 2003. Because of the broad congressional interest in U.S. military operations in Iraq and DOD's increasing use of contractors to support U.S. forces in Iraq, GAO initiated this follow-on review under the Comptroller General's statutory authority. Specifically, GAO's objective was to determine the extent to which DOD has improved its management and oversight of contractors supporting deployed forces since our 2003 report. GAO reviewed DOD policies and interviewed military and contractor officials both at deployed locations and in the United States. DOD continues to face long-standing problems that hinder its management and oversight of contractors at deployed locations. DOD has taken some steps to improve its guidance on the use of contractors to support deployed forces, addressing some of the problems GAO has raised since the mid-1990s. However, while the Office of the Secretary of Defense is responsible for monitoring and managing the implementation of this guidance, it has not allocated the organizational resources and accountability to focus on issues regarding contractor support to deployed forces. Also, while DOD's new guidance is a noteworthy step, a number of problems we have previously reported on continue to pose difficulties for military personnel in deployed locations. For example, DOD continues to have limited visibility over contractors because information on the number of contractors at deployed locations or the services they provide is not aggregated by any organization within DOD or its components. As a result, senior leaders and military commanders cannot develop a complete picture of the extent to which they rely on contractors to support their operations. 
For example, when Multi-National Force-Iraq began to develop a base consolidation plan, officials were unable to determine how many contractors were deployed to bases in Iraq. They therefore ran the risk of over-building or under-building the capacity of the consolidated bases. DOD also continues to lack adequate contractor oversight personnel at deployed locations, preventing it from obtaining reasonable assurance that contractors are meeting contract requirements efficiently and effectively at each location where work is being performed. While a lack of adequate contract oversight personnel is a DOD-wide problem, the shortage presents unique difficulties in the more demanding contracting environments of deployed locations. Although DOD faces many of the same difficulties managing and overseeing contractors in Iraq that it faced in previous military operations, we found no organization within DOD or its components responsible for developing procedures to systematically collect and share institutional knowledge on using contractors to support deployed forces. As a result, as new units deploy to Iraq, they run the risk of repeating past mistakes and being unable to build on the efficiencies others have developed during past operations that involved contractor support. Military personnel continue to receive limited or no training on the use of contractors as part of their pre-deployment training or professional military education. The lack of training hinders the ability of military commanders to adequately plan for the use of contractor support and inhibits the ability of contract oversight personnel to manage and oversee contractors in deployed locations. Despite DOD's concurrence with our previous recommendations to improve such training, we found no standard to ensure information about contractor support is incorporated in pre-deployment training.
A DOD-owned electric, water, wastewater, or natural gas system is composed of multiple components — the equipment, fixtures, pipes, wires, and other structures used in the generation and distribution of electric power, the supply of natural gas, the treatment and distribution of potable water, or the collection and treatment of wastewater. According to our review of records maintained by the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment, as of January 1, 2015, the military services own or have been operating and maintaining as many as 1,954 electric, potable water, wastewater, and natural gas utility systems located in the United States, in its territories, or overseas (see table 1). From these 1,954 systems, we determined that 1,075 of these electric, water, wastewater and natural gas utility systems were owned by the active component of one of the four military services and located on an installation with a plant replacement value of $100 million or more. In addition, the records maintained by the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment indicate that DOD has privatized 570 electric, water, wastewater and natural gas utility systems. According to DOD, since 1997 the department has been attempting to privatize its utility systems because military installations have been unable to maintain reliable utility systems due to inadequate funding and competing installation management priorities. DOD officials stated that privatization is the preferred method for modernizing and recapitalizing utility systems and services by allowing military installations to benefit from private-sector financing and efficiencies. We previously reported that with private-sector financing, installations obtain major upgrades to their utility systems and pay for these improvements over time through the utility services contracts using operation and maintenance funds. 
Furthermore, in 2005, we reported that while utility privatization may have provided for quicker system improvements than otherwise might have been available, the services’ economic analyses of the costs of privatization gave an unrealistic sense of savings. To promote efficient and economical use of America’s real property assets and ensure management accountability for implementing federal real property management reforms, the President on February 4, 2004, signed Executive Order 13327, Federal Real Property Asset Management. This executive order created the Federal Real Property Council, established the role of the senior real property officer, and authorized the creation of a centralized real property database. The Federal Real Property Council worked with the General Services Administration to develop and enhance an inventory system known as the Federal Real Property Profile, which was designed to meet the executive order’s requirement for a centralized database that includes all real property under the control of executive branch agencies. The 2013 Federal Real Property Council guidance for real property inventory reporting defines 25 real property data elements. One data element is the facility condition index (FCI). The FCI of real property under the control of executive branch agencies is collected in the Federal Real Property Profile database. The FCI provides a general measure of a building’s or structure’s condition at a specific point in time (see figure 1). Repair needs, as defined by the Federal Real Property Council, signify the amount necessary to restore a building or structure to a condition substantially equivalent to the original condition. Plant replacement value, as defined by the Federal Real Property Council, signifies the cost of replacing an existing building or structure so that it meets today’s standards. The FCI is reported on a scale from 0 to 100 percent, in which the higher the FCI, the better the condition of the building or structure. 
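The relationship between these two data elements can be sketched as a short calculation. This is a minimal sketch assuming the standard Federal Real Property Council formula, in which the ratio of repair needs to plant replacement value is subtracted from one and expressed as a percentage; the dollar figures in the example are hypothetical.

```python
def facility_condition_index(repair_needs, plant_replacement_value):
    """FCI in percent: 100 means no repair needs; lower values mean repair
    needs are a larger share of the cost of replacing the asset outright."""
    return (1 - repair_needs / plant_replacement_value) * 100

# Hypothetical facility: $2 million in repair needs against a
# $20 million plant replacement value yields an FCI of 90 percent.
fci = facility_condition_index(2_000_000, 20_000_000)
print(f"{fci:.0f} percent")
```

The same arithmetic can be run in reverse: given a reported FCI and a plant replacement value, the implied repair needs follow directly.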
According to a DOD official, the FCI is used to understand the health of the department’s portfolio so that decision makers can be better informed when making investment decisions. DOD guidance requires that each service maintain a current inventory count and up-to-date information regarding, among other things, the FCI of each facility in its inventory. DOD calculates the FCI as defined by the Federal Real Property Council, and records the FCI in its Real Property Assets Database. DOD grouped calculated FCI ratings into four bands, ranging from good to failing condition, to allow the services and defense agencies to group facilities by condition for the purpose of developing investment strategies. The four FCI categories are shown in table 2. Since 2003 we have issued several reports on federal real property issues such as repair and maintenance backlogs, among other things. For example, in October 2008 we reported that six real property holding agencies, including DOD, use different methods to define and estimate their repair and maintenance backlogs. Further, we reported that the backlog estimates do not necessarily reflect the costs that agencies expect to incur to repair and maintain assets essential to their missions or to avert risks to their missions. For example, the General Services Administration identified $7 billion in repair needs for work to be done on its facilities beginning in fiscal year 2007 and within the following 10 years, and DOD provided an FCI value for its facilities. We recommended that the Office of Management and Budget, in conjunction with the Federal Real Property Council and in consultation with the Federal Accounting Standards Advisory Board, should explore the potential for developing a uniform reporting requirement in the Federal Real Property Profile that would capture the government’s fiscal exposure related to real property repair and maintenance. 
We further recommended that such a reporting requirement include a standardized definition of repair and maintenance costs related to all assets that agencies determine to be important to their mission, and therefore capture the government’s fiscal exposure related to its real property assets. The Office of Management and Budget generally concurred with the report and agreed with our recommendation. Our recommendation was implemented in 2011 when the Federal Accounting Standards Advisory Board, as supported by the Office of Management and Budget and in coordination with other federal agencies, amended existing standards for financial reporting of deferred repairs and maintenance to establish uniformity across reporting agencies. We also previously reviewed DOD’s efforts to manage its real property inventory, including the need for continued management attention to support installation facilities and operations, among other things. In 2011 we reported that within the DOD Support Infrastructure Management high-risk area, the management and planning for defense facilities sustainment—maintenance and repair activities necessary to keep facilities in good working order—was removed from the high-risk list because DOD had made significant progress in this area at that time. Specifically, we found that DOD took steps to verify the accuracy of its inventory of real property and to develop a facilities sustainment model that provides a consistent and reasonable framework for preparing estimates of DOD’s annual facility sustainment funding requirements. In addition, since 2011 DOD has continued to take steps to improve its ability to assess and record the condition of its infrastructure. One improvement is the development of a standardized process for assessing facility conditions. In 2016 we reported that individual services had reported varying levels of progress in implementing this process. 
We recommended that DOD revise its guidance to clarify how the services are to indicate when a facility condition rating recorded in DOD’s Real Property Assets Database is based on the standardized process. DOD partially concurred with our recommendation and stated that the Office of the Secretary of Defense conducts periodic reviews of the services’ implementation of the standardized process to ensure they are making progress. Respondents to our survey of DOD-owned utility systems identified 4,393 instances of utility disruptions caused by the failure of DOD-owned equipment for fiscal years 2009 through 2015, and the results of our survey and interviews with DOD installation officials indicated that these disruptions have caused a range of financial and operational impacts. Several factors contributed to the equipment failures that led to disruptions to DOD-owned utility systems, such as the utility equipment operating beyond its intended life. Of the 364 respondents to our survey, 143 reported a total of 4,393 utility disruptions caused by equipment failure for fiscal years 2009 through 2015. Table 3 shows the number of survey respondents, respondents reporting disruptions, and the total number of disruptions reported for fiscal years 2009 through 2015, by service. Of the 4,393 reported disruptions, the majority were on electric and water utility systems. Specifically, 1,838 disruptions were on electric utility systems and 1,942 were on water utility systems. In addition, 270 disruptions were on natural gas utility systems and 343 were on wastewater systems. Figure 2 shows the number of reported disruptions for fiscal years 2009 through 2015, by utility system type and by service. According to our survey results and interviews with installation officials, several factors contribute to the equipment failures that lead to disruptions of DOD-owned utility systems. 
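As a quick consistency check, the per-system-type figures reported above sum to the overall disruption count; the numbers below are taken directly from the survey results in the text.

```python
# Disruptions caused by equipment failure, fiscal years 2009-2015,
# by utility system type, as reported by survey respondents.
disruptions_by_type = {
    "electric": 1838,
    "water": 1942,
    "natural gas": 270,
    "wastewater": 343,
}

total = sum(disruptions_by_type.values())
print(total)  # 4393, matching the overall number of reported disruptions
```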
Survey respondents indicated that some causes of equipment failures that led to utility disruptions between fiscal years 2009 and 2015 included: the equipment was used beyond its intended life; the condition of the equipment was poor; the equipment had not been properly maintained; or the equipment was handling service volumes beyond its intended capacity. According to installation officials, some utility systems are experiencing or are at risk of experiencing disruptions because the equipment is operating beyond its intended life. For example, an official from Naval Station Great Lakes, Illinois, stated that the water system is more than 90 years old, well beyond its serviceable life, which she estimates at about 50 to 60 years. The increasing age of the system causes the system’s condition to deteriorate and results in more unplanned disruptions. In another example, Air Force officials from Joint Base Elmendorf-Richardson, Alaska, stated that the majority of the installation’s water distribution pipes were originally installed in the 1940s, and due to the age of these pipes there is an increased risk of a significant disruption. However, officials stated that they are currently not experiencing frequent or severe disruptions to the water system due to equipment failure. Based on our survey results, the majority of DOD-owned utility systems are between 55 and 65 years old but have also completed a repair project that replaced a significant part or parts of the system in the last 15 years. Specifically, we estimate, based on information reported in our survey responses, that approximately 25 percent of DOD-owned utility systems were originally installed between 1941 and 1950 and approximately 24 percent between 1951 and 1960 (see figure 3). To supplement the information about the age of the utility systems, through our survey we also collected information on when a significant part or parts of the system were repaired or replaced.
Over time, parts of utility systems are repaired and replaced through maintenance activities because certain parts have a shorter serviceable life than others. Describing the age of a system based on when it was originally installed does not capture the fact that parts have been replaced over time and that certain components of a system may be newer than others. Based on our survey results, more than half of DOD-owned utility systems have had a significant part or parts of the system replaced in the last 15 years. Specifically, we estimate that approximately 16 percent of DOD-owned utility systems most recently completed a significant repair between 2001 and 2010, and 37 percent between 2011 and 2015 (see figure 4). In addition, according to our survey results, the poor condition of equipment is a contributing factor leading to disruptions. For example, officials from Naval Station Mayport, Florida, stated that some of the disruptions they reported in the survey were caused by electrical equipment that was in poor condition. Specifically, the officials reported that the existing distribution system serving the installation’s on-base housing is unreliable, not in compliance with code, poorly designed, and past its expected useful lifespan of 50 years. Furthermore, according to some installation officials we interviewed, the utility systems experienced failures because the systems have not been properly maintained. For example, officials from Joint Base Lewis-McChord, Washington, stated that some of the disruptions they reported resulted from a lack of expertise to perform maintenance. Specifically, these officials stated that a well failed in the summer of 2015 because prior repairs to the well were performed improperly, in part because they were performed by personnel without specialized training, and tools had been mistakenly left inside the well.
In other examples, officials told us that they are aware of necessary repairs but have been unable to complete them due to a lack of funding. Based on responses to our survey, we estimate that approximately 29, 32, and 35 percent of DOD-owned utility systems experienced funding shortfalls for fiscal years 2013, 2014, and 2015, respectively. To mitigate these funding shortfalls, based on the survey responses we estimate that approximately 33 percent of utility managers deferred entire planned maintenance and repair projects and 41 percent deferred portions of planned maintenance and repair projects. Officials from Naval Station Bremerton stated in an interview that an electrical substation has experienced several failures, disrupting electricity to shipyard operations, because it has several condition deficiencies and is poorly configured (i.e., the substation has a mixture of different parts and equipment that do not function efficiently together), making the operation and maintenance of the substation challenging. Officials reported that they have known about these issues with the substation for years, but they have not submitted a project to update the system because they believed it would not compete well for funding. Officials said that a lack of available funding for the electric system has caused them to delay a utility infrastructure project on this substation, a critical component of the electric system. In another example, an official from Naval Station Great Lakes, Illinois, stated that an assessment study of the water system recommended a phased recapitalization of the system; however, these repairs have been deferred due to a lack of funding. In a third example, officials from Marine Corps Air Station Yuma stated that the installation’s 50-year-old wastewater infrastructure does not comply with current standards and guidelines, but due to funding shortfalls repairs or replacements have not been completed.
Based on our survey responses and follow-up interviews with installation officials, disruptions of utility systems caused by the failure of DOD-owned equipment caused a range of financial and operational impacts. Of the 143 respondents who reported experiencing one or more utility disruptions, 100 reported information about financial impacts – the money spent repairing the disruption and mitigating its effects. These respondents reported experiencing a total of over $29 million in financial impacts for fiscal years 2009 through 2015 (see table 4). Reported financial impacts ranged from zero to as much as $7.5 million in a single year. Table 4 shows the total financial impacts reported by survey respondents for utility disruptions caused by equipment failure for fiscal years 2009 through 2015, by service and utility type. In our follow-up interviews with survey respondents, some officials explained that they were unable to estimate the financial impacts of disruptions. For example, an official from MacDill Air Force Base, Florida, stated that they did not report any financial impacts of disruptions because it would have been too difficult and time consuming to manually search through all of the records to identify the costs. In addition, officials from Naval Station Bremerton explained that any estimate of the costs associated with the fiscal impacts of the disruptions would be unreliable because they could not definitively calculate the total costs of all of the repair work performed for each disruption. However, they stated that the Navy conducted an in-depth study of unplanned utility outages at the four major Navy shipyards, in part to determine the causes of the outages and their impacts on the Navy’s ship repair and maintenance efforts.
According to Navy officials, the study determined that the unplanned outages were mostly caused by the failure of Navy-owned utility equipment and that the outages had led to delays in repair efforts and approximately $58 million in lost productivity. In addition, based on our survey responses, disruptions caused by the failure of DOD-owned equipment cause a range of operational impacts. In our survey, we asked the respondents who reported one or more utility disruptions to report how common various operational impacts were. Based on their responses, in fiscal year 2015, we estimate that approximately 39 percent of DOD-owned utility managers commonly or very commonly experienced no operational impacts from disruptions, approximately 51 percent commonly or very commonly experienced minor operational impacts, and approximately 27 percent commonly or very commonly experienced moderate operational impacts, such as delays or reduced capability of some assets. Major operational impacts were less common: also in fiscal year 2015, we estimate that approximately 9 percent of DOD-owned utility managers commonly or very commonly experienced major operational impacts. Our interviews with installation officials provided additional examples of the operational impacts of disruptions. For example, an official from Joint Base McGuire-Dix-Lakehurst, New Jersey, provided an example of a moderate operational impact. He stated that a power line exploded on the Lakehurst annex and caused an electric disruption to a major Army facility. The official explained that the power line that exploded was installed in 1945 and was past its expected service life. Operations at the Army facility were shut down for an entire week while staff arranged to have several large generators installed at the facility. The facility ran on generator power for the next 3 weeks while contracted repairs to the line were completed.
Figure 5 shows a burnt electrical feeder cable that caused a major disruption to this Army facility. In another example, officials from the Naval Undersea Warfare Center in Keyport, Washington, stated that in 2013 a complete base electrical disruption occurred when a battery failed at a switching station and led to cascading failures across the base. Officials stated that operations at the Naval Undersea Warfare Center stopped because there was minimal back-up electricity generating capability at the time. In addition, the lack of preventive maintenance has led to disruptions. Officials from Naval Auxiliary Landing Field San Clemente Island, California, stated that the installation experienced an 8-hour island-wide electrical disruption in May 2014 because seven utility poles caught fire. Officials were able to re-route power to some areas of the island, but some areas were without power for the full 8 hours. The utility poles caught fire because the insulators – supports that attach an electrical distribution line to the utility pole and prevent the electricity from flowing to the pole itself – were corroded and covered with salt, dust, and debris. The salt and dirt formed a conductive layer on the insulators that can create a “flashover,” in which the electricity arcs over the corroded and polluted insulator and can start a fire on the utility pole. Officials stated that these insulators can be washed to mitigate the potential for such incidents. However, the system needs to be shut down in order to perform the work, and, because of the installation’s continuous training operation schedule, it is difficult to schedule this maintenance. In another example, Navy officials from Naval Station Mayport, Florida, stated that a series of electric disruptions in enlisted housing resulted in a proposed $2.9 million project for improvements to the distribution system.
According to the project documentation from April 2015 that we reviewed, the poor condition of the infrastructure had caused 20 disruptions in the previous 2 years. Some of the disruptions affected the entire neighborhood, and the disruptions lasted between 6 and 20 hours each. Navy officials from Naval Support Facility Indian Head, Maryland, stated that in 2012 the installation’s water system experienced a major rupture to a segment of pipe that typically carries approximately 4,000 gallons per minute. The rupture caused a drop in pressure that decreased the volume of water going through the pipe to about 700 to 800 gallons per minute. This disruption caused a temporary shut-down in mission activities because the drop in water pressure impaired the installation’s fire suppression capabilities. The officials stated that, as a result of this incident, they ultimately replaced 5 of the installation’s 60 miles of water pipe, at a cost of approximately $2.0 million. Figure 6 shows the water pipe rupture at Naval Support Facility Indian Head, Maryland. In situations with smaller leaks in the water pipes, it may be more difficult to find the problem. Figure 7 shows an example of repair work associated with a leak or break in a water pipe at Naval Station Great Lakes, Illinois. Officials explained that the trench is not typically this large, but the leak could not initially be found. The maintenance workers had to dig the trench where the water was first seen coming out of the ground and had to continue expanding the trench until the leak was found. Based on our analysis of survey responses and our follow-up interviews, we determined that information on utility disruptions is not consistently available to owners and managers of utilities at the installation level. According to our survey responses, 151 of 364 survey respondents reported that they did not have information on utility disruptions for any fiscal year from 2009 through 2015.
By contrast, 213 of 364 survey respondents stated that they had information on disruptions for at least one fiscal year, and the availability of information on disruptions increased for the more recent years. We followed up with the respondents who reported not having information on disruptions to confirm their responses and to determine why such information was not available. We confirmed that 53 respondents did not have information; 52 stated that they did have information, several of whom said that they had misread the question and should have answered that they had information but experienced no disruptions; and 38 did not respond to our follow-up. In addition, we did not follow up with 8 respondents: 6 who said that they were unfamiliar with the system or did not believe they had the information necessary to complete the survey, and 2 who submitted survey responses after we began our follow-up efforts. The 53 respondents who reported not having disruption information provided various reasons why the information was not available. These reasons include that the maintenance of the system is provided by a contractor and the contract does not require the collection and reporting of disruption information; that the maintainers of utilities do not always indicate in the records they keep the cause of an outage, such as disruptions caused by equipment failure versus other causes, such as storm damage; and that the maintenance history is not always available due to personnel turnover. In addition, some respondents reported that they might be able to determine the number of disruptions caused by equipment failure, but that they would need to manually search through the maintenance records, which is a time-consuming task.
An overarching reason we found for disruption information not being available is that the services vary in the extent to which each has issued guidance to collect and retain utility disruption information at the installation level. Specifically:

The Army has an annual requirement for utility managers to report a wide range of information about utility systems through the Installation Status Report process. This process requires utility managers to report unplanned electric utility disruptions and interruptions to water distribution infrastructure. Further, the process has requirements to report instances of equipment failure for water treatment and distribution equipment and wastewater treatment and collection equipment. There is not a specific requirement to report disruptions of natural gas systems, but there is a requirement to report on surveys done to detect the presence of leaks in the distribution piping. However, we found that some Army installations did not consistently have information about disruptions.

The Air Force does not have a requirement for installations to collect and retain utility disruption data. Air Force installation officials stated that there used to be an instruction from a major Air Force command that required the reporting of utility disruption information, but this instruction was superseded and the reporting requirement for utility disruptions was not included in the new guidance.

The Marine Corps also does not have a requirement for installations to collect and retain utility disruption data. A Marine Corps headquarters official stated that he was considering developing such guidance.

The Navy issued guidance in September 2015 to improve its ability to collect timely and accurate information about utility disruptions that occur on Navy installations by requiring the collection and reporting of disruption data beginning in fiscal year 2016.
According to the guidance, the Navy needs accurate utility disruption data in order to make informed decisions for utility investments because disruption data is a key factor utilized in prioritizing utility repair projects, among other things. In the guidance, the Navy included specific instructions for how the utility disruption data were to be documented at the installation level. Specifically, the guidance instructs the public works departments or base operations and support contractors to track all utility outages in the Navy’s maintenance work order information system known as “MAXIMO”. For example, for unplanned utility outages lasting greater than 5 minutes, the installation officials or contracting staff are to enter information about the incident, response and repair in a MAXIMO work order outage log. In addition, installation officials or contracting staff are required to identify the cause of the utility outage and to enter that numerical code into MAXIMO (that is, 0 for false alarm, 1 for loss of commercial power/utility, 2 for weather-related disruptions, 3 for equipment failures, and so on). Furthermore, the guidance states that any new base operating and support contracts should include a provision for the contractors to report utility disruption information into MAXIMO and to include instructions on how to report that information. Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to risks related to achieving the defined objectives, and that analyzing and estimating the significance of risks provides the basis for responding to the risks. In addition, we reviewed reports from federal agencies and utility management organizations that recommend that utility system managers record and use information about the disruptions that occur on their systems in order to manage their systems effectively. 
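The Navy's outage-coding scheme described above can be illustrated with a brief sketch. The record structure, field names, and functions below are our own illustrative assumptions, not the actual MAXIMO schema; only the 5-minute threshold and the numerical cause codes come from the guidance as the report describes it.

```python
# Illustrative sketch of the Navy's outage-logging scheme: unplanned outages
# lasting more than 5 minutes are recorded with a numerical cause code.
# Record structure and function names are assumptions, not the MAXIMO schema.
from dataclasses import dataclass

# Cause codes as described in the guidance (codes 0-3 shown; the list continues).
CAUSE_CODES = {
    0: "false alarm",
    1: "loss of commercial power/utility",
    2: "weather-related disruption",
    3: "equipment failure",
}

@dataclass
class OutageRecord:
    utility_type: str       # e.g., "electric", "water"
    duration_minutes: int
    cause_code: int

def loggable(outage: OutageRecord) -> bool:
    """Per the guidance, only unplanned outages lasting more than 5 minutes are logged."""
    return outage.duration_minutes > 5

def count_equipment_failures(log: list[OutageRecord]) -> int:
    """Count logged outages attributed to equipment failure (cause code 3)."""
    return sum(1 for o in log if loggable(o) and o.cause_code == 3)

outages = [
    OutageRecord("electric", 120, 3),   # equipment failure, logged
    OutageRecord("water", 4, 3),        # too short to log
    OutageRecord("electric", 45, 2),    # weather-related
]
print(count_equipment_failures(outages))  # 1
```

Tagging each outage with a cause code at entry time is what makes it possible to later separate equipment-failure disruptions from weather damage or commercial-power losses, the distinction that many survey respondents said their records could not support.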
For example, according to the American Public Power Association, reliability statistics calculated by using data on disruption frequency and duration constitute a quantitative basis for good decision making. The collection and retention of utility disruption information is useful for two reasons. First, installation-level officials stated that disruption information is useful in operating and maintaining the utility system. Based on the responses to our survey, we estimate that 82 percent of utility managers considered this information to be somewhat or very useful. In addition, installation officials we interviewed identified several ways in which they used disruption information. For example, at Naval Station Great Lakes, Illinois, an official stated that while she was not aware of a policy requiring that she track disruptions to the utility systems, she did track disruptions on the water system, including information on the disruption’s location and date. She stated that she used the information to focus on areas of the water system that were experiencing multiple disruptions, to plan maintenance, and to inform funding decisions. In addition, an official from Fort Campbell, Kentucky, stated that he tracks outages because it is considered a good engineering practice. He stated that tracking disruptions on the electric system helped him to determine reliability, operations and maintenance budgets, preventative maintenance requirements, and areas of the system that needed more attention. Second, utility disruption information may help installations compete for project repair funding. According to Army, Navy, and Air Force officials, they use disruption information, among other information, when prioritizing funding for utility repairs in a particular budget year. For example, the Air Force’s risk-based project funding model uses utility outage information, among other variables, to prioritize projects. 
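As an illustration of how disruption data can feed such a prioritization, the sketch below scores candidate repair projects with a notional weighted formula. The weights, inputs, and scoring function are entirely our own illustrative assumptions; the services' actual risk-based models use additional variables and different weightings.

```python
# Notional risk-based prioritization of utility repair projects.
# The formula and weights are illustrative assumptions only.

def priority_score(outages_per_year: float, condition_index: float,
                   mission_criticality: float) -> float:
    """Higher score = higher repair priority (all inputs notional).

    condition_index is on a 0-100 scale (lower = worse condition);
    mission_criticality is on a 0-1 scale (1 = most critical).
    """
    return (0.4 * outages_per_year
            + 0.4 * (100 - condition_index) / 10
            + 0.2 * 10 * mission_criticality)

projects = {
    "substation feeder": priority_score(6, 55, 0.9),
    "wastewater lift station": priority_score(1, 70, 0.3),
}
ranked = sorted(projects, key=projects.get, reverse=True)
print(ranked[0])  # substation feeder
```

The point of the sketch is simply that a project with documented frequent disruptions can outrank one with none; an installation that has not tracked its outages cannot supply the first input at all, which is why the report argues such installations compete less effectively for repair funds.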
Also, as discussed above, the Navy’s utility project prioritization process to make risk-based investment decisions uses utility disruption information, among other variables, to determine the highest priority projects. According to the Navy’s guidance, the prioritization process helps them ensure that limited repair funding is directed to the most important projects. Installations that collect and retain information about utility disruptions may be better able to manage and operate the utility system and compete for scarce project funds because they have the available data to justify the project. A Marine Corps official stated that he was considering developing a requirement for installations to track utility disruption information. In addition, as stated above, the Navy recently issued guidance to improve its ability to track utility disruptions because it needs this information to make informed decisions. The Navy’s guidance, if implemented as directed, may help installations track utility disruption information and thus enable them to make sound decisions. On the other hand, installation-level utility system owners and managers who do not have access to information about disruptions may not have the information they need to make informed decisions or to compete effectively for limited repair funds. DOD is currently implementing a standardized condition assessment process to improve the data reliability of its facility condition data. DOD’s standardized assessment process for utility systems is currently in development, and the initial version has limited capabilities to assess the condition of the utility infrastructure. Further, the military services are allowed to customize certain settings within the process which could result in differences in the FCI across the services. 
In 2013, the Office of the Secretary of Defense (OSD) directed the services to implement a standardized condition assessment process in order to improve data reliability, and specifically the credibility of the FCI. Prior to 2013 the guidance issued by OSD did not require a standardized condition assessment process, and the respective services used different methodologies to assess the condition of their facilities, including utility systems. As a result of the services’ nonstandardized approach, OSD determined that the FCI data lacked credibility as a measure of DOD facility quality. According to the 2013 OSD memorandum, the department needed to implement the standardized assessment process to ensure that it had consistent and reliable condition data in order to make sound strategic investment decisions. According to an OSD official, the department relies on the FCI to make these decisions, in part, because the FCI allows OSD to assess the department’s and the individual services’ abilities to maintain the facilities at the condition necessary to achieve the department’s missions. In addition, decision makers use the FCI to monitor progress toward department-wide goals and to prevent further accumulation of deferred maintenance. Those goals include the establishment of an inventory-wide 80 percent minimum FCI score for each military service to meet annually for the facilities they manage, beginning in fiscal year 2016. Another goal is the identification of facilities in failing condition, with an FCI of below 60, in support of the department’s efforts to reduce the inventory of failing facilities. Our survey results indicate that operators of DOD-owned utility systems stated that knowledge about the condition of the infrastructure is useful. 
Specifically, based on our survey responses, we estimate that utility managers consider knowledge about the condition of the system to have a somewhat or very positive effect on the ability to avoid or prevent equipment failure (68 percent); to manage risk associated with equipment failure (72 percent); to identify funding needs (76 percent); and to extend the utility system’s usable service life (71 percent), among other things. The 2013 OSD memorandum directed the services to use the Sustainment Management System (SMS) software, developed by the U.S. Army Corps of Engineers, Construction Engineering Research Laboratory, as the standardized condition assessment process. SMS is a suite of web-based software modules designed to help facility engineers, technicians, and managers make asset management decisions regarding when, where, and how best to maintain facilities and their key components. According to the 2013 OSD memorandum, the services are required to use SMS both to derive and to record the FCIs of facilities supported by SMS in their respective real property databases by September 2017. For assets not yet supported by SMS, such as utilities, the 2013 OSD memorandum directed the services to perform inspections with qualified personnel to determine existing physical deficiencies and to estimate the cost of maintenance and repairs using industry cost guides. According to U.S. Army Corps of Engineers officials, they are still in the process of developing modules that will respectively cover the following utilities: water, sewer, storm sewer, electrical, gas, and thermal systems. According to officials from the U.S. Army Corps of Engineers, DOD’s standardized process for assessing the condition of utility infrastructure is currently under development, and the initial version has limited capabilities to assess the condition of the utility infrastructure. U.S.
Army Corps of Engineers officials stated that the initial version of the SMS module for electric and water utility systems has been under development since 2014 and is scheduled to undergo initial testing in November 2016. Further, according to U.S. Army Corps of Engineers officials, their organization and the Air Force are the two organizations working on development of the utilities SMS module, but representatives from the other services have participated in the utilities SMS working committee meetings. In addition, according to Air Force officials, the Air Force has provided funding to the U.S. Army Corps of Engineers for the development of the initial version of the SMS utilities module for electric and potable water utility systems. However, according to U.S. Army Corps of Engineers officials, additional funding from the other services is needed to further develop the capabilities of the electric and water modules and to develop additional modules for other utility systems, such as wastewater or natural gas systems. U.S. Army Corps of Engineers officials responsible for developing DOD’s initial version of the utilities module stated that the initial version uses a simplified condition assessment process. The simplified condition assessment process uses two variables, age and expected service life, to determine the condition of the utility infrastructure. Alternatively, in SMS modules for other facilities, such as buildings, more comprehensive assessment processes are used to determine the condition of the facility. These comprehensive assessment processes provide objective and repeatable inspections of various facility components based on knowledge of component criticality and the expected and observed deterioration of components, among other things. Upon completion of the inspection, any identified defects are recorded and categorized by distress type (for example, blistered, broken, damaged, cracked, or corroded), among other things. According to U.S.
Army Corps of Engineers officials, the rating criteria for future versions of the SMS utilities module will be established with consideration of existing rating systems from within DOD and industry. For example, Navy officials told us that they provided funding to the U.S. Army Corps of Engineers in fiscal year 2015 to evaluate the integration of Navy utility condition assessment rating methods into the SMS utilities module. This study examines the Navy’s utility condition assessment and risk-based rating methods for integration into the SMS condition assessment process for utilities. According to Navy officials, as of May 2016 the service had not received the results of the U.S. Army Corps of Engineers integration study. Navy officials indicated that they use disruption information as one variable in their rating methodology. As discussed previously in this report, installation officials and our survey respondents have stated that disruption information is useful when making decisions about the utility system. The Army, Navy, and Air Force use disruption information as one variable in their frameworks for prioritizing funding for utility projects. An installation official at Altus Air Force Base, Oklahoma, stated that he would like to be able to use disruption information with the SMS module to support repair and investment decisions. The installation officials stated that age may not always be a good indicator of the condition of a utility system, as a component or part might be relatively new but causing disruptions nonetheless. Furthermore, Navy installation officials from Naval Station Mayport, Florida, told us that information about disruptions is especially useful when much of the utility infrastructure is below ground and cannot be easily observed. DOD’s standardized process allows the military services to customize certain settings in the SMS system that affect repair need decisions, which can result in differences in the FCI.
The customizable settings are called “condition standards,” and these are the standards at which a service wishes to maintain a facility’s components or equipment. These condition standards may vary depending upon how critical a particular component is to the overall facility or mission, and each service develops its own condition standards. For example, U.S. Army Corps of Engineers officials explained that, hypothetically, the Navy may want to set a high condition standard for a water system that is used to supply water to cool nuclear reactors for its home-ported nuclear submarines because this is a critical mission. On the other hand, the Army may want to set a lower condition standard for a water system that is used to supply water for grounds maintenance because this is a lower priority. These standards are compared to the current condition assessment of the facility. Differences between the standards and the assessment determine when repair work is needed for a particular piece of infrastructure, and whether or not repair work is needed affects the FCI calculation. If the inspected condition is above the condition standard, then the SMS system does not identify any repair work. If the inspected condition falls below the condition standard, then the SMS system identifies the necessary repair work. SMS estimates the costs of the identified repair work, and then the system users determine whether they want to conduct the repairs. The SMS system uses the estimated cost of the repair as the numerator in the FCI equation. According to U.S. Army Corps of Engineers officials, the services have not yet developed condition standards for their utilities because the SMS module for utilities is still being developed. However, the services have developed condition standards for use in other SMS modules, and U.S. Army Corps of Engineers officials stated that the design of the SMS module for utilities will be similar to that of other existing SMS modules.
Further, the officials stated that the condition standards for the utilities module will operate similarly to the way condition standards operate in existing SMS modules. Therefore, to conduct our analysis we reviewed the condition standards used by the services in an existing SMS module for buildings, called BUILDER. The services have grouped condition standards into categories, such as high, medium, and low. According to U.S. Army Corps of Engineers officials, condition standards in the high category would be assigned to facilities that are mission-critical or generally more important to maintain. For example, officials at Cape Canaveral Air Force Station, Florida, stated that the installation’s electric and water systems are critical to supporting the launch mission; the wastewater system, however, is not as essential. Specifically, the electric system powers equipment for communication and radar tracking, and the water system provides water to the launch pads to absorb excess heat and noise generated during launches. If the utility SMS module is implemented at Cape Canaveral, an Air Force official indicated that the installation would likely assign high condition standards to the electric and water systems and a lower condition standard to the wastewater system. We found that while the four services generally use similar categories of condition standards, such as high, medium, and low, they assign different numerical values to standards within the same category. For example, each service has a category called “medium,” but the values range from 60 to 75 depending on the service. Figure 8 depicts the service condition standards for the BUILDER SMS module. To illustrate how different condition standards affect the FCI calculation, we developed a notional example, as illustrated in table 5, showing an electric distribution system.
The example assumes that each hypothetical organization owns and operates an electric distribution system, A through D, and that each system has exactly the same infrastructure (overhead power lines, a transformer, and a switching station), was installed at the same time, and has the same plant replacement value ($500,000). Also, each part of the system has the same assessed physical condition from SMS. However, each hypothetical organization has a different condition standard for this notional electric distribution system. We used the “Medium/Intermediate” condition standards found in figure 8 for this notional example. We created notional maintenance and repair costs for cases when the assessed physical condition from SMS was lower than the condition standard. As shown in our example, the result of differences in the condition standards is that the FCIs are different, even though the assessed physical condition is the same. In this notional example, hypothetical organizations A, B, and D appear to have repair needs, while hypothetical organization C does not appear to have any repair needs. Table 5 illustrates how different condition standards from four hypothetical organizations produce different FCI values. According to the 2013 OSD memorandum, the department requires reliable condition information, in the form of the FCI, to manage the department’s facilities and to make informed investment decisions. OSD officials stated that the FCI is one of multiple sources of information that can be used to support the department’s investment decisions concerning a single asset or portfolio of assets. Further, according to Standards for Internal Control in the Federal Government, to be useful, information should be accurate, complete, and credible, among other factors. However, DOD has not taken action to ensure that the condition standards to be developed by the services for the utilities module will provide the department with comparable and reliable FCI data.
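The mechanics of this kind of notional example can be sketched in a few lines of code. This is only an illustrative sketch: the report does not spell out the FCI formula beyond stating that identified repair cost is the numerator, so the sketch assumes FCI is repair cost divided by plant replacement value, and the component conditions, repair costs, and “medium” standards below are invented for illustration rather than drawn from the services’ actual values.

```python
# Notional sketch: different condition standards yield different FCIs for
# identically assessed systems. Assumption (not stated in the report):
# FCI = identified repair cost / plant replacement value.

PLANT_REPLACEMENT_VALUE = 500_000  # same for every notional system

# Same assessed physical condition for each component in every system.
assessed = {"overhead lines": 68, "transformer": 68, "switching station": 68}

# Notional repair cost incurred when a component falls below the standard.
repair_cost = {"overhead lines": 40_000, "transformer": 25_000,
               "switching station": 15_000}

# Hypothetical "medium" condition standards for organizations A through D.
standards = {"A": 75, "B": 70, "C": 60, "D": 72}

for org, standard in standards.items():
    # Repair work is identified only where assessed condition < standard.
    needed = [c for c, cond in assessed.items() if cond < standard]
    total_repairs = sum(repair_cost[c] for c in needed)
    fci = total_repairs / PLANT_REPLACEMENT_VALUE
    print(f"Org {org}: standard={standard}, "
          f"repair cost=${total_repairs:,}, FCI={fci:.2f}")
```

With these invented numbers, organizations A, B, and D identify repair needs on every component while organization C identifies none, so C’s FCI differs even though every system was assessed in the same condition.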
According to DOD officials, the services should have the flexibility to set the condition standards for their utility infrastructure and other facilities as they deem appropriate based on mission criticality and other factors. DOD officials stated that the services need to have the flexibility to prioritize the condition of some utility systems and facilities above others so that they can direct their limited repair and maintenance budgets to the most important needs. We agree that some facilities may need to be put in the high standard versus the low or medium standard based on mission criticality, but it is unclear why the standards vary within the same category (i.e., high, medium, low). Further, according to the 2013 OSD memorandum, DOD is implementing a new standardized process to assess the condition of its facilities because its previous guidance allowed the services to implement an unstandardized approach to assessing the condition of their facilities, which resulted in an FCI that lacked credibility. OSD officials also stated that they had not compared the services’ existing condition standards and that they would consider looking into the differences in these standards across the services. Without taking steps to ensure that the services’ condition standards for the utilities module and other modules will provide the department with comparable and reliable FCI data, the SMS utilities module, currently under development, may not provide DOD information that is comparable across the department’s facilities. As a result, DOD may not be able to reliably assess progress toward meeting department-wide goals and may continue to receive FCI data that lacks credibility as a measure of DOD facility quality. Disruptions to DOD-owned utility systems have caused financial impacts and impacts to DOD operations and missions.
Information about these disruptions can help DOD operate and maintain the utility systems, including identifying these impacts and taking steps to prevent or mitigate such disruptions. However, utility disruption information is not consistently available at the installation level. We determined that some military services had guidance in place that required installations to collect and report some utility disruption data, and others did not. The Army has a service-wide requirement to collect and report electric and water utility disruption data and instances of equipment failure for water and wastewater systems, and to perform leak detection surveys for natural gas systems. However, we found that some of the Army installations did not consistently have information about disruptions available. The Air Force and Marine Corps do not have a service-wide requirement to collect and report utility disruption data. The Navy issued new reporting guidance beginning in fiscal year 2016 that, if implemented as directed, may provide Navy installations with the guidance and procedures necessary to collect disruption information to make informed decisions for utility investments. The majority of DOD-owned utility system owners and managers consider this type of information to be beneficial; for example, some officials stated that they use it to determine where resources need to be focused to maintain the utility infrastructure. As a result, those who do not have such information may be at a disadvantage when making maintenance decisions or competing effectively for limited repair funds. The current standardized process for assessing condition in the SMS modules already developed allows the military services to customize certain settings, called condition standards.
The military services have developed different thresholds for the various categories of condition standards, which can result in different FCI ratings across the services for facilities assessed in the same condition. OSD’s goal for implementing the SMS assessment system is to have consistent, comparable, and reliable FCIs across its portfolio of assets to make informed management decisions. Without taking steps to ensure that the services’ condition standards for the utilities module will provide the department with comparable and reliable FCI data, the SMS utilities module, currently under development, may not provide DOD information that is comparable across the department’s facilities. As a result, DOD may not be able to reliably assess progress toward meeting department-wide goals. Further, DOD risks continuing to receive FCI data that lacks credibility as a measure of DOD facility quality. To improve the information that DOD, military service officials, and installation-level utility system owners and maintainers need to make maintenance or other investment decisions, we recommend that the Secretary of Defense take the following three actions: Direct the Secretary of the Army to take steps to implement existing guidance so that disruption information is consistently available at the installation level; Direct the Secretary of the Air Force to issue guidance to the installations to require the collection and retention of disruption information; and Direct the Commandant of the Marine Corps to issue guidance to the installations to require the collection and retention of disruption information.
To provide DOD with more consistent information about the condition of DOD-owned utility systems as DOD continues to develop the SMS module for utility systems, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Energy, Installations, and Environment, in coordination with the military services, to take actions to govern the consistent use of condition standards for utility systems to be assessed using the SMS utilities module and, if applicable, for other facilities assessed using other SMS modules. We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix III, DOD concurred with our first three recommendations that the Secretary of Defense direct the Army, Air Force, and Marine Corps to take steps or provide guidance to consistently collect disruption information. DOD partially concurred with our fourth recommendation that the Secretary of Defense take steps to implement the consistent use of condition standards for utility systems to be assessed using the SMS utilities module. DOD stated it will continue to work with the Military Departments to determine if further opportunities exist to establish consistent condition standards within the SMS for utility systems. We continue to believe that, by taking such steps, the department will have assurance that the SMS utilities module will provide comparable and reliable FCI data, which decision makers use to monitor progress toward department-wide goals and prevent further accumulation of deferred maintenance. We are providing copies to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Assistant Secretary of Defense for Energy, Installations, and Environment; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the number of disruptions of DOD-owned utility systems that occurred between fiscal years 2009 and 2015, their causes, and the impact of the disruptions, we administered a survey to a representative sample of 453 DOD-owned utility systems located in the United States and overseas, producing results generalizable to the DOD-owned utility population. A copy of the full questionnaire and aggregate responses for all closed-ended questions are included in appendix II. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or the sources of information available to respondents can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling error (see below). Using records maintained to manage and oversee DOD’s Utility Privatization Program within the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment, we took several steps to identify the utility systems included in our study population and our sample design. Our scope included electric, water, wastewater, and natural gas utility systems that were owned by the active component of one of the four military services and located on a U.S. or overseas installation with a plant replacement value of $100 million or more. Some utility systems, mostly those located overseas, may not be owned by a military service, but the service may be responsible for funding the operation and maintenance of the system.
When we refer to DOD-owned utility systems in this report, we are including both systems that are owned by one of the military services and systems for which the military service pays for the majority of the operation and maintenance of the utility system. To determine the electric, water, wastewater, and natural gas systems owned by DOD, we reviewed records maintained by the Installation Energy Office under the Assistant Secretary of Defense for Energy, Installations, and Environment, and we identified 1,954 systems located within and outside the United States. Next, we compared this list of utility systems with the fiscal year 2015 Base Structure Report to determine which systems resided on installations with a plant replacement value of $100 million or more and that were owned by the active component of one of the military services. This resulted in a total of 1,075 systems (770 systems located in the United States and 305 systems located outside the United States) that made up our study population (see table 6). We drew a stratified random sample of 469 utility systems from the population frame of 1,075 systems (see table 7). In order to be able to make generalizable statements about each of the four types of utilities, we did the following. First, we split the study population into five strata; the first four correspond to the four types of utilities located in the United States. The fifth stratum comprises all utilities located on U.S. military installations outside the United States. The reason we used a fifth stratum for the systems outside of the United States was that the ownership status of these systems was not clear from the records maintained by the department.
By separating these systems into their own stratum, we could draw our sample in such a way that we would still be able to generalize the survey results for the utility systems within the United States even if it turned out that none of the overseas systems was owned by one of the military services and that the services did not pay for the majority of the operation and maintenance of those systems. Furthermore, in order to verify that the systems we included in our sample were within our scope, we included a question in the survey that asked respondents to state whether the system was owned by the military service and whether the service was responsible for paying the majority of the operation and maintenance of the system, as discussed below. In each stratum we used systematic random selection to identify the systems to include in the sample. Each military service was represented in the sample in proportion to the total number of each type of utility system that it operates. In addition, the sample from each stratum received an allocation large enough to support an estimate with a margin of error no larger than plus or minus 10 percentage points at the 95 percent level of confidence. This was then adjusted for an expected response rate of 70 percent. See table 7 for the original sample size adjusted for an assumed 70 percent response rate. To identify the survey respondents, we supplied a list of the sampled utility systems to each of the military services, which reviewed the list and identified the appropriate official at the installation to respond to our survey. During this process, 16 of the systems were removed because, for example, military service officials informed us that the system had been privatized or that the installation on which the system was located had been closed, among other things. We removed these 16 systems from our original sample of 469 systems, which left 453 systems.
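A stratum allocation of the kind described above, sized for a plus or minus 10 percentage point margin of error at 95 percent confidence and adjusted for a 70 percent expected response rate, can be approximated with a standard textbook calculation. This is only an illustrative sketch, not GAO's exact allocation method, and the stratum population counts used below are hypothetical rather than the figures in the report's tables 6 and 7.

```python
import math

def stratum_sample_size(population, margin=0.10, z=1.96,
                        expected_response=0.70, p=0.5):
    """Normal-approximation sample size for estimating a proportion
    within +/- `margin` at 95 percent confidence, with a finite
    population correction, inflated for an assumed response rate.
    Uses the conservative p = 0.5 worst-case variance."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n / expected_response)     # adjust for nonresponse

# Hypothetical stratum population counts, for illustration only.
for name, count in {"electric (U.S.)": 200, "water (U.S.)": 190,
                    "outside the U.S.": 305}.items():
    print(f"{name}: sample {stratum_sample_size(count)} of {count}")
```

The finite population correction matters here because each stratum is small: without it, every stratum would need roughly 96 completed responses, but for a stratum of 305 systems the corrected target falls to about 74 completed responses, or about 105 after the 70 percent response-rate adjustment.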
To inform the design of our survey instrument and help ensure the validity and reliability of our results, we met with officials from OSD and the military services and explained the intent and design of the survey to ensure that, in general, the intended survey recipients would have the knowledge and resources to respond to our survey. GAO analysts and technical survey experts designed the survey and conducted four pretests, one with each military service, with officials who had work experience managing and operating DOD-owned utility systems at the installation level, to ensure that survey questions collected the expected information and to obtain any suggestions for clarification. Furthermore, the survey instrument was independently reviewed by a survey design expert within GAO. Our survey included questions about the number of disruptions caused by equipment failure that occurred on the installation for fiscal years 2009 through 2015, the impacts of those disruptions, and the characteristics of DOD-owned utility systems, among other things. To distribute the survey, we sent an email to each respondent with a link to the web-based version of the survey along with a unique user name and password. To maximize responses, we kept the military services informed of the completion status, and we kept the survey open from December 18, 2015, through March 31, 2016. In total, we distributed 453 surveys. Of the 453 surveys distributed, 379 managers or operators of DOD-owned utility systems completed the survey, for a response rate of 84 percent. To verify that the completed surveys were within our scope, we analyzed the results of a question in the survey that asked respondents to state whether the system was owned by the military service and whether the service was responsible for paying the majority of the operation and maintenance of the system.
We determined that 15 respondents reported that the utility system was neither owned by the military service nor operated and maintained using a majority of appropriated funds. We removed these 15 surveys from our list of completed surveys, which resulted in a list of 364 completed and in-scope surveys. The analysis in this report is based on those 364 survey responses. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. Generally in this report the results of this survey are presented as statistical estimates about the population of 1,075 electric, water, wastewater, or natural gas utility systems described above. In cases where we are using these estimates, we describe the results as estimates and generally refer to the entire population of “utility systems” or “utility managers.” Because some questions did not apply to all respondents, some of the questions in our survey were answered by an insufficient number of respondents to reliably generate an estimate of the overall population. In these cases, rather than presenting a population estimate, we reported on the number of respondents in our sample who answered that question. To obtain additional information about the impact of utility disruptions caused by the failure of DOD-owned utility infrastructure, we conducted follow-up interviews with a selected set of respondents who reported the most disruptions. 
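The confidence intervals that accompany each estimate can be sketched with the familiar normal-approximation formula for a proportion. This is illustrative only: GAO's actual estimates would use stratum weights and a design-based variance estimator reflecting the stratified sample, and the response counts in the example are invented.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95 percent confidence interval for a
    proportion under simple random sampling. A design-based estimator
    with stratum weights would be used in practice; this is a sketch."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
    return p - z * se, p + z * se

# Invented example: suppose 182 of the 364 in-scope respondents
# answered "yes" to some closed-ended question.
low, high = proportion_ci(182, 364)
print(f"estimate {182/364:.0%}, 95% CI roughly {low:.1%} to {high:.1%}")
```

For a sample of this size, a 50 percent estimate carries an interval of roughly plus or minus 5 percentage points, which is why the appendix pairs each estimate with explicit lower and upper bounds.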
We asked respondents to describe the impacts of specific disruptions, and we also collected and reviewed documentation, such as records in maintenance information systems and project proposals. To assess the extent to which owners and managers of DOD-owned utility systems have information about disruptions caused by equipment failures, we included a question in our survey regarding the availability of information on disruptions from fiscal years 2009 through 2015 and a question about the usefulness of disruption information in managing utility systems. Based on the survey responses, we followed up with all 146 survey respondents who reported not having any information on disruptions for any fiscal year to confirm their responses and to determine the reasons why information was not available. We received responses from 89 survey respondents. We also interviewed service officials regarding policies and practices related to the collection and use of utility disruption information. Finally, we compared installation practices to standards regarding the identification, analysis, and response to risks as described in Standards for Internal Control in the Federal Government. In addition, we reviewed reports from federal agencies and utility management organizations, such as management guides issued by the Environmental Protection Agency and the American Public Power Association, which describe the information that is useful in the management and operation of utility systems. To assess the extent to which the department’s implementation of a standardized facility condition assessment process provides DOD consistent information about the condition of utility systems, we reviewed policy documents and reports regarding DOD’s efforts to improve the reliability of the condition information it collects to manage its infrastructure.
We reviewed policies and documents describing the development and implementation of a new standardized condition assessment process, called the Sustainment Management System, developed by the U.S. Army Corps of Engineers, and how DOD plans to use the condition information to monitor and oversee the achievement of department-wide goals. Additionally, we collected and reviewed documents such as briefings, training documents, and user guides that describe how the new standardized condition assessment process will assess and rate the condition of utility systems and related infrastructure. We also conducted interviews with DOD officials and the military services regarding the development of the standardized process and how the department intends to use the information to inform decisions. Finally, we compared DOD’s process for generating the condition information with standards regarding the use and management of data as described in Standards for Internal Control in the Federal Government. We conducted this performance audit from July 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The questions that we asked in our survey on DOD-owned utility systems are shown below. Our survey comprised mostly closed-ended questions. In this appendix, we include all survey questions and aggregate results of responses to the closed-ended questions; we do not provide the responses to the open-ended questions. See appendix I for details of the analysis that led to the results reported here.
1. What is your current role with the utility system?
2. How long have you been in this role?
[The original appendix presented aggregate responses to each closed-ended question in tables, including the estimate and its 95 percent confidence interval lower and upper bounds; those tabular results are not reproduced here.]
3. How long have you worked with the utility system?
4. Does your military service own the infrastructure of this utility system? (Check one.)
   a) Does your military service pay for the majority of the operation and maintenance of this utility system through appropriated sustainment, restoration and modernization (SRM) funding? (Check one.)
This section asks about some of the characteristics of this utility system. Please answer only for utility infrastructure that is DOD-owned.
5. Does the utility system perform the following functions? (Check one per row.)
6. When was this utility system originally installed? (Check one.)
7. When was the most recent recapitalization project completed on this utility system, which replaced a significant part or parts of the system? (Check one.)
8. Which of the following best describes the types of employees that conduct maintenance on this utility system, as of September 30, 2015? (Check one.)
9. How many full-time equivalent (FTE) government employees operate and maintain this utility system, as of September 30, 2015? (Check one.)
10. For fiscal year 2015, what was the size of this utility system in terms of the amount of commodity delivered on a typical day? (Enter number.)
11. How many people use this utility system during a typical weekday? (Check one.)
12. In which fiscal year (FY) were the facility condition index ratings for the infrastructure associated with this utility system last updated? (Check one.)
13. How frequently is the facility condition index rating for the infrastructure associated with this utility system updated? (Check one.)
14. Did you use any of the following to update the facility condition index rating for the infrastructure associated with this utility system? (Check one per row.)
15. To what extent do the following represent challenges in updating the facility condition index of the utility system? (Check one per row.)
   - Lack of time to conduct an assessment
   - Lack of trained or qualified personnel
   - Lack of the necessary equipment to perform the assessment
   - Infrastructure is underground and difficult to access
   - Conducting the assessment requires that the utility system be shut down
   - Conducting the assessment may damage the utility infrastructure
   - Assessment results do not provide useful information
16. Does information about the condition of the utility system positively or negatively affect your ability to do the following? (Check one per row.)
17. How confident are you about the current reliability of this utility system? (For the purposes of this survey, reliability is the ability of a utility system to perform its functions under normal and extreme operating conditions.) (Check one.)
18. Do the following issues negatively impact your confidence in the current reliability of this utility system? (Check one per row.)
   - Poor condition of the infrastructure
   - Excessive demand beyond designed system capabilities
   - Age (the system is nearing or has reached its expected serviceable life)
19. How many major maintenance and repair projects (projects costing more than $250,000) were completed on this utility system in the following fiscal years? (Please include only planned major maintenance and repair projects; do not include unplanned projects.) (Check one per row.)
20. From fiscal years 2013 to 2015, were there funding shortfalls for this utility system?
21. From fiscal years 2013 to 2015, did the following factors contribute to a shortfall of funding for this utility system?
   - Other funding needs within the service had a higher priority
   - Other funding needs on the installation had a higher priority
   - Increase in unplanned maintenance needs
22. From fiscal years 2013 to 2015, did you take any of the following actions to mitigate the shortfall?
   - Deferred entire planned maintenance and repair projects
   - Deferred portions of planned maintenance and repair projects
   - Sought opportunities to obtain alternative funding sources (i.e., 3rd party financed projects)
   a. If you deferred entire maintenance and repair projects due to funding shortfalls, then to what extent did this deferred maintenance affect the reliability of this utility system?
   b. If you deferred portions of maintenance and repair projects due to funding shortfalls, then to what extent did this deferred maintenance affect the reliability of this utility system?
For the purposes of this survey, please report the following type of disruptions on this utility system.
Include: Disruptions in this utility system to users or to (a) mission-reliant asset(s) lasting more than 5 minutes due to the failure of DOD-owned equipment or the under-performance of utility infrastructure based on operating environment standards.
Do not include: Disruptions of less than 5 minutes; the failure of a commercial or privatized electricity generation system; natural events such as a storm, earthquake, fire, etc., that damage the utility system; intentional or planned disruptions.
23. To what extent is information about utility disruptions due to equipment failures useful in operating and maintaining the utility system? (Check one.)
24. For which of the following fiscal years do you have information on the disruptions caused by equipment failure on this utility system? (Check one per row.)
   a. In each fiscal year, how many disruptions did this utility system experience? (For zero disruptions, check “no disruptions.” If there were no disruptions in this fiscal year, skip the rest of the questions for this fiscal year and go to the next fiscal year.)
   b. Approximately how many minutes was the utility service disrupted during each fiscal year?
   c. Approximately what were the fiscal impacts of the utility disruptions reported for each fiscal year? (Fiscal impact is the money spent repairing the disruption and mitigating the effects. For example, the cost of the replacement parts and the cost of the personnel needed to complete the repair would be considered in the fiscal impact.) Dollars_____________
   d. How common were the following operational impacts of the utility disruptions reported in each fiscal year? (Operational impacts are any impacts that the disruptions had on the ability of the installation to operate and to accomplish its mission.)
      - No operational impacts
      - Minor operational impacts, such as causing minimal delays
      - (Other)
25. How common are the following causes of disruptions on this utility system? (Check one per row.)
26.
How likely is it that any of the following would have prevented some of the disruptions on this utility system? (Check one per row.) Response option: Improved preventative maintenance, inspections, and repairs.

In addition to the contact named above, Laura Durland, Assistant Director; Michael Armes; Carl Barden; Tracy Barnes; Jon Ludwigson; Carolyn Cavanaugh; Randy De Leon; Steven Putansu; Amie Lesser; Cheryl Weissman; Erik Wilkins-McKee; and Tonya Woodbury made key contributions to this report.

Defense Facility Condition: Revised Guidance Needed to Improve Oversight of Assessments and Ratings. GAO-16-662. Washington, D.C.: June 23, 2016.

Facilities Modernization: DOD Guidance and Processes Reflect Leading Practices for Capital Planning. GAO-15-489. Washington, D.C.: July 27, 2015.

Defense Infrastructure: Improvements in Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning. GAO-15-749. Washington, D.C.: July 23, 2015.

High Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.

Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009.

Federal Real Property: Government's Fiscal Exposure from Repair and Maintenance Backlogs Is Unclear. GAO-09-10. Washington, D.C.: October 16, 2008.

Defense Infrastructure: Continued Management Attention Is Needed to Support Installation Facilities and Operations. GAO-08-502. Washington, D.C.: April 24, 2008.

Defense Infrastructure: Actions Taken to Improve the Management of Utility Privatization, but Some Concerns Remain. GAO-06-914. Washington, D.C.: September 5, 2006.
Defense Infrastructure: Issues Need to Be Addressed in Managing and Funding Base Operations and Facilities Support. GAO-05-556. Washington, D.C.: June 15, 2005. Defense Infrastructure: Managing Issues Requiring Attention in Utility Privatization. GAO-05-433. Washington, D.C.: May 12, 2005. Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve the Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003.
|
DOD installations rely on utilities, such as electricity, to accomplish their missions, and disruptions can hamper military operations. Senate Report 114-49 included a provision for GAO to report on DOD-owned utility disruptions. This report (1) describes the number, causes, and impacts of utility disruptions caused by the failure of DOD-owned utility infrastructure for fiscal years 2009 through 2015; (2) assesses the extent to which owners and managers of DOD-owned systems have access to utility disruption information; and (3) assesses the extent to which the implementation of a standardized facility condition assessment process provides DOD consistent information about its utility systems. GAO surveyed a representative group of 453 DOD-owned electric, water, wastewater, and natural gas utility systems; evaluated DOD policies and reports; interviewed officials; and conducted interviews with several survey respondents who experienced the most disruptions. Based on GAO's survey of Department of Defense (DOD)-owned utility systems, there were 4,393 instances of utility disruptions caused by equipment failure for fiscal years 2009 through 2015, and the survey results and interviews with DOD installation officials indicated that these disruptions caused a range of financial and operational impacts. Survey respondents identified several factors that contributed to equipment failures that led to disruptions, such as equipment operating beyond its intended life, poor equipment condition, and equipment not being properly maintained. Survey respondents reported over $29 million in financial impacts for fiscal years 2009 through 2015. Installation officials reported experiencing operational impacts such as a week-long shutdown of operations at an Army facility on Joint Base McGuire-Dix-Lakehurst, New Jersey. Information about utility disruptions is not consistently available to DOD utility owners and managers at the installation level.
Specifically, 151 out of 364 survey respondents stated that they did not have information on utility disruptions for any fiscal year from 2009 through 2015. An overarching reason disruption information is not available is that the services vary in the extent to which each has issued guidance to collect and retain utility disruption information at the installation level. The Army has some guidance for reporting utility disruptions, but GAO found that some installations did not consistently have this information available. The Air Force and Marine Corps do not have current guidance directing installations to track utility disruption information. The Navy issued new guidance in 2015 that, if implemented as directed, may improve the collection of utility disruption information. According to installation and headquarters officials, there are benefits to collecting utility disruption information because it can be used to identify repairs and to prioritize funding for those repairs. However, without guidance directing installations to collect information about all types of utility disruptions, service officials may not have the information needed to make informed decisions or to compete effectively for limited repair funds. DOD's implementation of the Sustainment Management System (SMS), a software tool used to conduct standardized condition assessments, may not provide it with comparable and reliable facility condition index (FCI) data, a metric used to make strategic investment decisions. In 2013, to improve the reliability of FCI data, DOD directed the services to use SMS, which standardizes the way the services conduct condition assessments and calculate the FCI. According to officials, the SMS module for utility systems is still in development, but modules for other facilities, such as buildings, are complete and in use.
While the SMS process is intended to provide DOD with credible FCI data, GAO found the process could result in differences in the FCI because the services are able to customize settings, called condition standards, within the process. Variation among the condition standards could result in facilities having different FCIs even though the assessed physical conditions of the facilities are the same. As a result, the FCI data would not be comparable. Without taking steps to ensure that the services' condition standards for the utilities module, which is under development, will yield comparable and reliable FCI data, DOD may not obtain information from the module that is comparable across the department. To improve utility system information, GAO is recommending that the Army, Air Force, and Marine Corps take steps or provide guidance to consistently collect disruption information, and that while the SMS utilities module is under development, DOD take steps to ensure that the services apply condition standards consistently. DOD concurred with the recommendations to collect disruption data and partially concurred with the other recommendation, stating that it would determine whether further consistent condition standards are needed.
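To see how customizable condition standards can break comparability, consider a simplified numerical sketch. The formula, the inspection values, and the `deduct_weight` setting below are illustrative assumptions, not the actual SMS algorithm: the point is only that two services applying different standards to identical assessed physical conditions can report different index values.

```python
# Simplified illustration (not the actual SMS algorithm) of how
# service-specific "condition standards" can yield different facility
# condition index (FCI) values for identical inspection results.

def component_index(distress_fraction, standard):
    """Map an observed distress fraction (0 = none, 1 = severe) to a
    0-100 component condition score. The deduction weight is the
    customizable, service-chosen "condition standard" in this sketch."""
    return max(0.0, 100.0 * (1.0 - standard["deduct_weight"] * distress_fraction))

def fci(inspection, standard):
    """Average the component scores, weighted by replacement value."""
    total_value = sum(value for _, value in inspection)
    weighted = sum(component_index(d, standard) * value for d, value in inspection)
    return weighted / total_value

# Identical assessed physical condition: (distress fraction, replacement value).
inspection = [(0.10, 500_000), (0.30, 200_000), (0.05, 300_000)]

service_a = {"deduct_weight": 1.0}  # hypothetical stricter standard
service_b = {"deduct_weight": 0.6}  # hypothetical more lenient standard

print(round(fci(inspection, service_a), 1))  # stricter standard -> lower index
print(round(fci(inspection, service_b), 1))  # lenient standard -> higher index
```

The same facility scores 87.5 under the stricter hypothetical standard and 92.5 under the more lenient one, so rankings built from the two services' numbers would not be comparable.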
|
An effective military medical surveillance system needs to collect reliable information on (1) the health care provided to service members before, during, and after deployment, (2) where and when service members were deployed, (3) environmental and occupational health threats or exposures during deployment (in theater) and appropriate protective and counter measures, and (4) baseline health status and subsequent health changes. This information is needed to monitor the overall health condition of deployed troops, inform them of potential health risks, as well as maintain and improve the health of service members and veterans. In times of conflict, a military medical surveillance system is particularly critical to ensure the deployment of a fit and healthy force and to prevent disease and injuries from degrading force capabilities. DOD needs reliable medical surveillance data to determine who is fit for deployment; to prepare service members for deployment, including providing vaccinations to protect against possible exposure to environmental and biological threats; and to treat physical and psychological conditions that resulted from deployment. DOD also uses this information to develop educational measures for service members and medical personnel to ensure that service members receive appropriate care. Reliable medical surveillance information is also critical for VA to carry out its missions. In addition to VA’s better known missions—to provide health care and benefits to veterans and medical research and education— VA has a fourth mission: to provide medical backup to DOD in times of war and civilian health care backup in the event of disasters producing mass casualties. 
As such, VA needs reliable medical surveillance data from DOD to treat casualties of military conflicts, provide health care to veterans who have left active duty, assist in conducting research should troops be exposed to environmental or occupational hazards, identify service-connected disabilities, and adjudicate veterans' disability claims. Investigations into the unexplained illnesses of service members and veterans who had been deployed to the Gulf uncovered the need for DOD to implement an effective medical surveillance system to obtain comprehensive medical data on deployed service members, including Reservists and National Guardsmen. Epidemiological and health outcome studies to determine the causes of these illnesses have been hampered by incomplete baseline health data on Gulf War veterans, their potential exposure to environmental health hazards, and specific health data on care provided before, during, and after deployment. The Presidential Advisory Committee on Gulf War Veterans' Illnesses' and IOM's 1996 investigations into the causes of illnesses experienced by Gulf War veterans confirmed the need for more effective medical surveillance capabilities. The National Science and Technology Council, as tasked by the Presidential Advisory Committee, also assessed the medical surveillance system for deployed service members. In 1998, the council reported that inaccurate recordkeeping made it extremely difficult to get a clear picture of what risk factors might be responsible for Gulf War illnesses. It also reported that without reliable deployment and health assessment information, it was difficult to ensure that veterans' service-related benefits claims were adjudicated appropriately. The council concluded that the Gulf War exposed many deficiencies in the ability to collect, maintain, and transfer accurate data describing the movement of troops, potential exposures to health risks, and medical incidents in theater.
The council reported that the government's recordkeeping capabilities were not designed to track troop and asset movements to the degree needed to determine who might have been exposed to any given environmental or wartime health hazard. The council also reported major deficiencies in health risk communications, including not adequately informing service members of the risks associated with countermeasures such as vaccines. Without this information, service members may not recognize potential side effects of these countermeasures and promptly take precautionary actions, including seeking medical care. In response to these reports, DOD strengthened its medical surveillance system under Operation Joint Endeavor when service members were deployed to Bosnia-Herzegovina, Croatia, and Hungary. In addition to implementing departmentwide medical surveillance policies, DOD developed specific medical surveillance programs to improve the monitoring and tracking of environmental and biomedical threats in theater. While these efforts represented important steps, a number of deficiencies remained. On the positive side, the Assistant Secretary of Defense (Health Affairs) issued a health surveillance policy for troops deploying to Bosnia. This guidance stressed the need to (1) identify health threats in theater, (2) routinely and uniformly collect and analyze information relevant to troop health, and (3) disseminate this information in a timely manner. DOD required medical units to develop weekly reports on the incidence rates of major categories of diseases and injuries during all deployments. Data from these reports showed theaterwide illness and injury trends, so that preventive measures could be identified and information forwarded to the theater medical command regarding abnormal trends or actions that should be taken. DOD also established the U.S. Army Center for Health Promotion and Preventive Medicine—a major enhancement to DOD's ability to perform environmental monitoring and tracking.
For example, the center operates and maintains a repository of service members’ serum samples for medical surveillance and a system to integrate, analyze, and report data from multiple sources relevant to the health and readiness of military personnel. This capability was augmented with the establishment of the 520th Theater Army Medical Laboratory—a deployable public health laboratory for providing environmental sampling and analysis in theater. The sampling results can be used to identify specific preventive measures and safeguards to be taken to protect troops from harmful exposures and to develop procedures to treat anyone exposed to health hazards. During Operation Joint Endeavor, this laboratory was used in Tuzla, Bosnia, where most of the U.S. forces were located, to conduct air, water, soil, and other environmental monitoring. Despite the department’s progress, we and others have reported on DOD’s implementation difficulties during Operation Joint Endeavor and the shortcomings in DOD’s ability to maintain reliable health information on service members. Knowledge of who is deployed and their whereabouts is critical for identifying individuals who may have been exposed to health hazards while deployed. However, in May 1997, we reported that the inaccurate information on who was deployed and where and when they were deployed—a problem during the Gulf War—continued to be a concern during Operation Joint Endeavor. For example, we found that the Defense Manpower Data Center (DMDC) database—where military services are required to report deployment information—did not include records for at least 200 Navy service members who were deployed. Conversely, the DMDC database included Air Force personnel who were never actually deployed. In addition, we reported that DOD had not developed a system for tracking the movement of service members within theater. 
IOM also reported that the locations of service members during the deployments were still not systematically documented or archived for future use. We also reported in May 1997 that for the more than 600 Army personnel whose medical records we reviewed, DOD’s centralized database for postdeployment medical assessments did not capture 12 percent of those assessments conducted in theater and 52 percent of those conducted after returning home. These data are needed by epidemiologists and other researchers to assess at an aggregate level the changes that have occurred between service members’ pre- and postdeployment health assessments. Further, many service members’ medical records did not include complete information on in-theater postdeployment medical assessments that had been conducted. The Army’s European Surgeon General attributed missing in-theater health information to DOD’s policy of having service members hand carry paper assessment forms from the theater to their home units, where their permanent medical records were maintained. The assessments were frequently lost en route. We have also reported that not all medical encounters in theater were being recorded in individual records. Our 1997 report identified that this problem was particularly common for immunizations given in theater. Detailed data on service members’ vaccine history are vital for scheduling the regimen of vaccinations and boosters and for tracking individuals who received vaccinations from a specific lot in the event health concerns about the vaccine lot emerge. We found that almost one-fourth of the service members’ medical records that we reviewed did not document the fact that they had received a vaccine for tick-borne encephalitis. In addition, in its 2000 report, IOM cited limited progress in medical recordkeeping for deployed active duty and reserve forces and emphasized the need for records of immunizations to be included in individual medical records. 
Responding to our and others' recommendations to improve information on service members' deployments, in-theater medical encounters, and immunizations, DOD has continued to revise and expand its policies relating to medical surveillance, and the system continues to evolve. In addition, in 2000, DOD released its Force Health Protection plan, which presents its vision for protecting deployed forces. This vision emphasizes force fitness and health preparedness and improving the monitoring and surveillance of health threats in military operations. However, IOM criticized DOD's progress in implementing its medical surveillance program and the failure to implement several recommendations that IOM had made. In addition, IOM raised concerns about DOD's ability to achieve the vision outlined in the Force Health Protection plan. We have also reported that some of DOD's programs designed to improve medical surveillance have not been fully implemented. IOM's 2000 report presented the results of its assessment of DOD's progress in implementing recommendations for improving medical surveillance made by IOM and several others. IOM stated that, although DOD generally concurred with the findings of these groups, DOD had made few concrete changes at the field level. For example, medical encounters in theater were still not always recorded in individuals' medical records, and the locations of service members during deployments were still not systematically documented or archived for future use. In addition, environmental and medical hazards were not yet well integrated in the information provided to commanders. The IOM report notes that a major reason for this lack of progress is that no single authority within DOD has been assigned responsibility for the implementation of the recommendations and plans. IOM said that because of the complexity of the tasks and the overlapping areas of responsibility involved, the single authority must rest with the Secretary of Defense.
In its report, IOM describes six strategies that in its view demand further emphasis and require greater efforts by DOD: Use a systematic process to prospectively evaluate non-battle-related risks associated with the activities and settings of deployments. Collect and manage environmental data and personnel location, biological samples, and activity data to facilitate analysis of deployment exposures and to support clinical care and public health activities. Develop the risk assessment, risk management, and risk communications skills of military leaders at all levels. Accelerate implementation of a health surveillance system that completely spans an individual’s time in service. Implement strategies to address medically unexplained symptoms in populations that have deployed. Implement a joint computerized patient record and other automated recordkeeping that meets the information needs of those involved with individual care and military public health. DOD guidance established requirements for recording and tracking vaccinations and automating medical records for archiving and recalling medical encounters. While our work indicates that DOD has made some progress in improving its immunization information, the department faces numerous challenges in implementing an automated medical record. In October 1999, we reported that DOD’s Vaccine Adverse Event Reporting System, which relies on medical personnel or service members to provide needed vaccine data, may not have included information on adverse reactions because DOD did not adequately inform personnel on how to provide this information. Also, in April 2000, we testified that vaccination data were not consistently recorded in paper records and in a central database, as DOD requires. 
For example, when comparing records from the database with paper records at four military installations, we found that information on the number of vaccinations given to service members, the dates of the vaccinations, and the vaccine lot numbers were inconsistent at all four installations. At one installation, the database and records did not agree 78 to 92 percent of the time. DOD has begun to make progress in implementing our recommendations, including ensuring timely and accurate data in its immunization tracking system. The Gulf War revealed the need to have information technology play a bigger role in medical surveillance to ensure that the information is readily accessible to DOD and VA. In August 1997, DOD established requirements that called for the use of innovative technology, such as an automated medical record device that can document inpatient and outpatient encounters in all settings and that can archive the information for local recall and format it for an injury, illness, and exposure surveillance database. Also, in 1997, the President, responding to deficiencies in DOD’s and VA’s data capabilities for handling service members’ health information, called for the two agencies to start developing a comprehensive, lifelong medical record for each service member. As we reported in April 2001, DOD’s and VA’s numerous databases and electronic systems for capturing mission-critical data, including health information, are not linked and information cannot be readily shared. DOD has several initiatives under way to link many of its information systems—some with VA. For example, in an effort to create a comprehensive, lifelong medical record for service members and veterans and to allow health care professionals to share clinical information, DOD and VA, along with the Indian Health Service (IHS), initiated the Government Computer-Based Patient Record (GCPR) project in 1998. 
GCPR is seen as yielding a number of potential benefits, including improved research and quality of care, and clinical and administrative efficiencies. However, our April 2001 report describes several factors— including planning weaknesses, competing priorities, and inadequate accountability—that made it unlikely that DOD and VA would accomplish GCPR or realize its benefits in the near future. To strengthen the management and oversight of GCPR, we made several recommendations, including designating a lead entity with a clear line of authority for the project and creating comprehensive and coordinated plans for sharing meaningful, accurate, and secure patient health data. For the near term, DOD and VA have decided to reconsider their approach to GCPR and focus on allowing VA to view DOD health data. However, under the interim effort, physicians at military medical facilities will not be able to view health information from other facilities or from VA—now a potentially critical information source given VA’s fourth mission to provide medical backup to the military health system in times of national emergency and war. In October 2001, we met with officials from the Defense Health Program and the Army Surgeon General’s Office who indicated that the department is working on issues we have reported on in the past, including the need to improve the reliability of deployment information and the need to integrate disparate health information systems. Specifically, these officials informed us that DOD is developing a more accurate roster of deployed service members and enhancing its information technology capabilities. For example, DOD’s Theater Medical Information Program (TMIP) is intended to capture medical information on deployed personnel and link it with medical information captured in the department’s new medical information system, now being field tested. 
Developmental testing for TMIP has begun and field testing is expected to begin in spring 2002, with deployment expected in 2003. A component system of TMIP—the Transportation Command Regulating and Command and Control Evacuation System—is also under development and aims to allow casualty tracking and provide in-transit visibility of casualties during wartime and peacetime. Also under development is the Global Expeditionary Medical System, which DOD characterizes as a stepping stone to an integrated biohazard surveillance and detection system. Clearly, the need for comprehensive health information on service members and veterans is very great, and much more needs to be done. However, it is also a very difficult task because of uncertainties about what conditions may exist in a deployed setting, such as potential military conflicts, environmental hazards, and frequency of troop movements. While progress is being made, DOD will need to continue to make a concerted effort to resolve the remaining deficiencies in its surveillance system. Until some of these deficiencies are overcome, VA's ability to perform its missions will be affected. For further information, please contact Cynthia A. Bascetta at (202) 512-7101. Individuals making key contributions to this testimony included Ann Calvaresi Barr, Karen Sloan, and Keith Steck.
|
GAO, the Institute of Medicine, and others have cited weaknesses in the Defense Department's (DOD) medical surveillance during the Gulf War and Operation Joint Endeavor. DOD was unable to collect, maintain, and transfer accurate data on the movement of troops, potential exposures to health risks, and medical incidents during deployment in the Gulf War. DOD improved its medical surveillance system under Operation Joint Endeavor, providing useful information to military commanders and medical personnel. However, GAO found several problems with this system. For example, information related to service members' health and deployment status was incomplete or inaccurate, and DOD had not established a single, comprehensive electronic system to document, archive, and access medical surveillance data. DOD has begun several initiatives to improve the reliability of deployment information and to enhance its information technology capabilities, but some initiatives are several years away from full implementation. Nonetheless, these efforts reflect a commitment by DOD to establish a comprehensive medical surveillance system. The ability of the Department of Veterans Affairs to fulfill its role in serving veterans and providing backup to DOD in times of war will be enhanced as DOD increases its medical surveillance capability.
|
Our analysis of 2005 data found that over 21,000 physicians, health professionals, and suppliers who received Medicare Part B payments during the first 9 months of 2005 had over $1 billion in unpaid federal taxes as of September 30, 2005. This represents about 5 percent of the number of Medicare Part B physicians, health professionals, and suppliers paid during the first 9 months of calendar year 2005. Because the IRS database does not include amounts owed by taxpayers who have not filed tax returns and for which IRS has not assessed tax amounts due, the estimated amount of unpaid federal taxes is understated. As shown in figure 1, about 91 percent of the over $1 billion in unpaid taxes was comprised of federal individual income and payroll taxes. The other 9 percent included corporate income, excise, unemployment, and other types of taxes. Unlike in our previous reports and testimonies on contractors with tax debts, a larger percentage of the taxes owed by these physicians, health professionals, and suppliers was comprised of federal individual income taxes, which are unpaid amounts that individuals owe on their personal income. These taxpayers are typically either sole proprietors or certain limited liability companies that report income through individual income tax returns. As shown in figure 1, Medicare Part B physicians, health professionals, and suppliers that are corporations or other kinds of businesses owed about $430 million in federal payroll taxes. Employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee's wages, the employer is deemed to have a fiduciary responsibility to hold these amounts "in trust" for the federal government until the employer makes a federal tax deposit in that amount.
To the extent these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts, as well as the employer’s matching Federal Insurance Contribution Act contributions for Social Security and Medicare. Individuals within the business (e.g., corporate officers) may be held personally liable for the withheld amounts not forwarded and assessed a civil monetary penalty known as a trust fund recovery penalty. Willful failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of up to 5 years, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The law imposes no penalties upon an employee for the employer’s failure to remit payroll taxes since the employer is responsible for submitting the amounts withheld. The Social Security and Medicare trust funds are subsidized or made whole for unpaid payroll taxes by the federal government’s general fund. Thus, personal income taxes, corporate income taxes, and other government revenues not specifically designated for the trust funds are used to pay for these shortfalls to the Social Security and Medicare trust funds. A substantial amount of the unpaid federal taxes shown in IRS records as owed by Medicare Part B physicians, health professionals, and suppliers had been outstanding for several years. As reflected in figure 2, about 85 percent of the over $1 billion in unpaid taxes were for tax periods prior to calendar year 2004, with about 41 percent of the unpaid taxes for tax periods prior to calendar year 2000. Our previous work has shown that as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. This is due, in part, to the continued accrual of interest and penalties on the outstanding tax debt which, over time, can dwarf the original tax obligation. 
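The way accrued interest and penalties can come to dwarf the original tax obligation can be sketched numerically. The rates, cap, and compounding schedule below are illustrative assumptions chosen for the sketch, not current IRS figures; the point is only the compounding dynamic.

```python
# Illustrative sketch (assumed rates, not actual IRS figures) of how
# interest and penalties grow an unpaid tax debt over time.

def balance_after(principal, years, annual_interest=0.08,
                  monthly_penalty=0.005, penalty_cap=0.25):
    """A flat penalty accrues monthly on the original tax until it
    reaches a cap; interest compounds monthly on the running balance
    (original tax plus accrued penalties and interest)."""
    penalty_accrued = 0.0
    balance = float(principal)
    for _ in range(years * 12):
        if penalty_accrued < penalty_cap * principal:
            step = min(monthly_penalty * principal,
                       penalty_cap * principal - penalty_accrued)
            penalty_accrued += step
            balance += step
        balance *= 1 + annual_interest / 12
    return balance

original = 10_000
print(round(balance_after(original, 10)))  # owed after 10 years of accrual
```

Under these assumed rates, a $10,000 debt more than doubles over the roughly 10-year statutory collection period, which is why older debts become increasingly difficult to collect in full.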
The amount of unpaid federal taxes we have identified does not include all tax debts owed by physicians, health professionals, and related suppliers due to statutory provisions that give IRS a finite period under which it can seek to collect on unpaid taxes. Generally, there is a 10-year statutory collection period beyond which IRS is prohibited from attempting to collect tax debt. Consequently, if these physicians, health professionals, and suppliers owe federal taxes beyond the 10-year statutory collection period, the older tax debt may have been removed from IRS’s records. We were unable to determine the amount of tax debt that had been removed. Although over $1 billion in unpaid federal taxes owed by Medicare Part B physicians, health professionals, and suppliers as of September 30, 2005, is a significant amount, it understates the full extent of unpaid taxes owed by these or other businesses and individuals. The IRS tax database reflects only the amount of unpaid federal taxes either reported by the individual or business on a tax return or assessed by IRS through its various enforcement programs. The IRS database does not reflect amounts owed by businesses and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. For example, during our audit, we identified instances from our case studies in which Medicare Part B physicians, health professionals, and suppliers failed to file tax returns for a particular tax period and IRS had not assessed taxes for these tax periods. Consequently, while these physicians, health professionals, and suppliers had unpaid federal taxes, they were listed in IRS records as having no unpaid taxes for that period. Further, our analysis did not attempt to account for businesses or individuals that purposely underreported income and were not specifically identified by IRS as owing the additional federal taxes. 
According to IRS, underreporting of income accounted for more than 80 percent of the estimated $345 billion annual gross tax gap. Consequently, the full extent of unpaid federal taxes for Medicare Part B physicians, health professionals, and suppliers is not known. In addition to the IRS tax database not reflecting all assessed tax amounts due, our past audits have also indicated that the IRS tax database contains coding errors that adversely affect IRS’s collection activities. IRS’s collection process is heavily dependent upon its automated computer system and the information that resides within this system. In particular, the codes in each taxpayer’s account in IRS’s tax database are critical to IRS in tracking the collection actions it has taken against a tax debtor and in determining what, if any, additional collection actions should be pursued. For example, IRS uses these codes to identify cases it should exclude from the continuous levy program, which is an automated method of collecting tax debt by offsetting certain federal payments made to individuals and businesses, as well as from other collection actions. While we did not evaluate the appropriateness of IRS’s exclusions for this testimony, the exclusions are only as good as the codes IRS has entered into its systems. In our previous work, we found that inaccurate coding at times prevented IRS collection action, including referral to the continuous levy program. Specifically, in November 2006, we estimated that about $2.4 billion in tax debt was erroneously excluded from the continuous levy program as of September 30, 2005. IRS did not identify and correct the coding errors we found because it did not sufficiently monitor the timely updating of the status and transaction codes or the effect of computer programming changes. In addition, we found that the design of IRS’s policies for monitoring the status of financial hardship cases was not sufficient to ensure the ongoing accuracy of such designations. 
Therefore, effective management of these codes is critical because if these codes are not accurately or appropriately updated to reflect changing circumstances, cases may be needlessly excluded from collection action, including the continuous levy program. For all 40 cases involving Medicare Part B physicians, health professionals, and suppliers with outstanding tax debt that we audited and investigated, we found abusive and/or potentially criminal activity related to the federal tax system. Of these cases, 25 involved physicians, health professionals, and suppliers that had unpaid payroll taxes dating as far back as the early 1990s. Rather than fulfill their role as “trustees” of this money and forward it to IRS as required by law, these physicians, health professionals, and suppliers diverted the money for other purposes. IRS had trust fund recovery penalties in effect for 16 of the 25 business cases at the time of our review. In addition, as discussed previously, willful failure to remit payroll taxes can be a criminal felony offense punishable by imprisonment of up to 5 years, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The other 15 cases involved individuals who had unpaid individual income taxes dating as far back as the 1970s. Our review of selected Medicare Part B physicians, health professionals, and suppliers revealed significant challenges that IRS faces in its enforcement of tax laws, a continuing high-risk area for IRS. Although the nation’s tax system is built upon voluntary compliance, when businesses and individuals fail to pay voluntarily, IRS has a number of enforcement tools, including the use of levies, to compel compliance or elicit payment. 
Our review of the 40 physicians, health professionals, and suppliers found that IRS attempts to work with the businesses and individuals to achieve voluntary compliance, pursuing enforcement actions later rather than earlier in the collection process. Our review of IRS records with respect to our 40 cases showed that IRS did not issue paper levies to the Medicare contractors to levy the payments of physicians, health professionals, and suppliers for 28 of our 40 cases. As a result, most of the physicians, health professionals, and suppliers in our case studies continued to receive Medicare Part B payments while owing their federal taxes. Our investigations revealed that, despite owing substantial amounts of federal taxes to the IRS, some physicians, health professionals, and suppliers had substantial personal assets—including multimillion dollar homes and luxury cars. For example, one physician purchased a house for over $1 million while his business owed over $1 million in federal taxes. Another physician purchased a luxury vehicle, paid for partly with cash, and gambled millions of dollars while owing over $400,000 in taxes. In addition to failure to pay taxes, our investigations also revealed that several physicians associated with our case studies received Medicare Part B payments even though they had significant problems related to the practice of medicine. Six physicians had been previously excluded from the Medicare program for such things as professional incompetence, financial misconduct involving a government-operated program, and failure to pay health education loans. Further, 13 physicians in our cases had also been sanctioned by their state medical boards for such things as substandard care of their patients, drug abuse, abusive prescription writing, unprofessional conduct, lack of moral character, income tax evasion, embezzlement, aiding and abetting unlicensed practice, and illegible patient records. 
Table 1 highlights 15 of the 40 cases of Medicare physicians, health professionals, and suppliers with unpaid taxes. Appendix II provides details on the other 25 cases we examined. We are referring all 40 cases we examined to IRS for further collection activity and criminal investigation, if warranted. The following provides detailed information on three of the cases we examined. Case 1: Although in 2 recent years the physician’s business reported net income of over $300,000 and $100,000, respectively, the physician has made no federal tax payments to IRS. In addition, the physician has been delinquent in child support during this time. As a result, the physician’s spouse had to sell the residence because the spouse could not afford it. A hospital revoked the physician’s hospital privileges for substandard care and the state medical board also investigated the physician. The physician received over $100,000 in Medicare Part B payments for the first 9 months of calendar year 2005. Case 2: A physician was convicted of money laundering through offshore accounts. In addition to owing over $600,000 in federal individual income taxes, the physician owes tens of thousands of dollars in delinquent child support and also owns a related business that owes over $300,000 in federal taxes. Despite owing these significant debts, the physician owns several residential properties, including an overseas house. HHS paid the physician nearly $100,000 in Medicare Part B payments during the first 9 months of calendar year 2005. Case 4: An ambulance business owner paid employees in cash and did not report this income to IRS. The ambulance business owner was convicted and incarcerated for defrauding the U.S. government. While the owner was in prison, a business officer used company funds to purchase property for the business officer instead of paying the federal payroll taxes to IRS. 
In 2004, the business negotiated and is paying on a repayment agreement of about $3,000 per month. These monthly payments are substantially less than the interest that would accrue on the debt. HHS paid the ambulance company over $100,000 in Medicare Part B payments during the first 9 months of calendar year 2005. HHS does not prevent physicians, health professionals, and suppliers with tax debts from enrolling in or receiving payments from the Medicare program. HHS has not developed Medicare regulations or HHS implementing policy to require HHS or its contractors to (1) screen physicians, health professionals, and suppliers for unpaid taxes and (2) obtain consent for IRS disclosure of federal tax debts. Further, because HHS has not participated in the continuous levy program, no tax debts owed by these physicians, health professionals, and suppliers are being collected through the program. As a result, the federal government lost opportunities to collect between $50 million and $140 million in unpaid taxes in the first 9 months of calendar year 2005. HHS Medicare contractors are responsible for screening physicians, health professionals, and suppliers prior to enrollment into the Medicare program. However, as part of the screening process, neither HHS policies nor HHS regulations require Medicare contractors to consider tax debts or tax-related abuses of prospective physicians, health professionals, and suppliers. Medicare contractors are also not required to conduct any criminal background checks on these individuals. Medicare contractors are required to review the HHS Office of Inspector General (OIG) exclusion list and the General Services Administration (GSA) debarment lists; however, these lists do not include all individuals or businesses that have abused the federal tax system. The bases for excluding certain individuals and entities from participation in Medicare programs are established by statute. 
The statute provides for both mandatory and permissive exclusions. Mandatory exclusions are confined to health-related criminal offenses, while permissive exclusions concern primarily non-health-related offenses. The Federal Acquisition Regulation cites conviction of tax evasion as one of the causes for debarment; indictment on tax evasion charges is cited as a cause for suspension. Consequently, while a felony offense, the deliberate failure to remit taxes, in particular payroll taxes, will likely not result in an individual or business being suspended unless the taxpayer is indicted, or being debarred or placed on the Medicare exclusion or GSA debarment lists unless the taxpayer is convicted. Even if an individual or entity is convicted of tax evasion or another tax-related crime, the individual or business still may not be placed on the Medicare exclusion or GSA debarment lists. To be placed on these lists, federal agencies must identify those individuals and businesses and provide them with due process. As part of the due process, the agency must make a determination as to whether the exclusion or debarment is in the government’s interest. None of the 40 cases that we investigated, including those involving a conviction for tax-related crimes, are currently on the Medicare exclusion or GSA debarment lists. Further complicating HHS decision-making on the consideration of tax debts for Medicare, federal law does not permit IRS to disclose taxpayer information, including tax debts, to HHS or Medicare contractor officials unless the taxpayer consents. HHS has not established a policy to obtain Medicare applicants’ consent to obtain tax information from IRS to consider in its Medicare eligibility decision-making process. 
Thus, certain tax debt information can only be discovered from public records if IRS files a federal tax lien against the property of a tax debtor or a record of conviction for a tax offense is publicly available. Consequently, HHS officials and their contractors do not have ready access to information on unpaid tax debts to consider in making decisions on physicians, health professionals, and suppliers. Further, HHS has not established policy to participate in the IRS continuous levy program, thus preventing IRS from capturing at least a portion of the Medicare payments made to physicians, health professionals, and suppliers that owe tax debts. As stated earlier, federal law allows IRS to continuously levy federal vendor payments up to 100 percent until the tax debt is paid. IRS has implemented this authority by creating a continuous levy program that utilizes FMS’s Treasury Offset Program (TOP) system. In July 2001, we reported that HHS did not have plans to participate in the continuous levy program and we recommended that the Commissioners of IRS and FMS work with HHS to develop plans to include Medicare payments in the continuous levy program. In July 2006, IRS began to pursue HHS participation in the continuous levy program through the Federal Contractor Tax Compliance (FCTC) Task Force, a multiagency group dedicated to improving the continuous levy process. In response to IRS’s request, HHS began to participate in the FCTC Task Force meetings in February 2007. If HHS had previously worked with IRS to levy Medicare Part B payments, we estimate, using the conservative 15 percent rate that FMS uses to levy civilian contractors, that the federal government could have collected about $50 million in unpaid federal taxes for the first 9 months of calendar year 2005. Using the 100 percent rate authorized by law, the federal government could have collected approximately $140 million. These estimates were based on debt information IRS had reported to TOP as of September 30, 2005. 
Thousands of Medicare Part B physicians, health professionals, and suppliers have failed in their responsibility to pay federal taxes they owe as individuals and businesses residing and conducting business in this nation. Further, our case studies demonstrate that physicians and other medical service providers with federal tax debts can receive Medicare Part B payments while engaging in abusive and potentially criminal activity. In addition, our case studies determined that some physicians who abused the federal tax system are also not providing quality care to all of their patients. Additionally, because HHS has failed to participate in the continuous levy process since its authorization in 1997, the federal government has missed the opportunity to collect hundreds of millions of dollars in unpaid taxes from Medicare Part B physicians, health professionals, and suppliers. The federal government cannot afford to leave millions of dollars in taxes uncollected each year in the current environment of federal deficits, nor can it continue to allow physicians, health professionals, and suppliers that have abused the federal tax system to participate in the Medicare program. Mr. Chairman and Members of the Subcommittee, this concludes our statement. We would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory Kutz at (202) 512-7455 or kutzg@gao.gov or Steve Sebastian at (202) 512-3406 or sebastians@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. To identify the magnitude of unpaid taxes owed by Medicare Part B physicians, health professionals, and suppliers, we requested from the Department of Health and Human Services (HHS) the related Medicare Part B claims data for calendar year 2005. 
By the end of our review, HHS was able to provide these data only for the first 9 months of calendar year 2005. We also obtained and analyzed the Internal Revenue Service (IRS) unpaid assessment data as of September 30, 2005. We matched the Medicare claims data to the IRS unpaid assessment data using the taxpayer identification number (TIN) field. To avoid overestimating the amount owed by Medicare Part B physicians, health professionals, and suppliers with unpaid tax debts and to capture only significant tax debts, we excluded from our analysis tax debts and paid claims meeting specific criteria, establishing a minimum threshold for the amount of tax debt and the amount of paid claims to be considered significant. The criteria we used to exclude tax debts were as follows: (1) tax debts that IRS classified as compliance assessments or memo accounts for financial reporting, (2) tax debts from calendar year 2005 tax periods, and (3) tax debts of Medicare Part B physicians, health professionals, and suppliers with total unpaid taxes and Medicare Part B paid claims of less than $100. These criteria were used to exclude tax debts that might be under dispute, duplicative, or invalid, and tax debts that were recently incurred. Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or these taxes could be invalid or duplicative of other taxes already reported. We excluded tax debts from calendar year 2005 tax periods to eliminate tax debt that may involve matters that are routinely resolved between the taxpayer and IRS, with the taxes paid or abated within the current year. We further excluded tax debts and Medicare Part B paid claims of less than $100 because they are insignificant for the purpose of determining the extent of taxes owed. 
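As a rough illustration only, the TIN match and exclusion criteria described above can be sketched as follows. The function, field names, record layout, and debt-type labels are simplified assumptions for illustration; they do not reflect actual HHS or IRS data schemas or the full set of analytic steps we performed.

```python
# Hypothetical sketch of the TIN match and exclusion criteria described above.
# Field names ("tin", "amount", "type", "tax_period_year") and debt-type labels
# are illustrative assumptions, not actual HHS or IRS schemas.

EXCLUDED_DEBT_TYPES = {"compliance_assessment", "memo_account"}
MIN_SIGNIFICANT_AMOUNT = 100  # dollars; threshold for both debts and paid claims

def significant_tax_debtors(paid_claims, tax_debts):
    """Match Medicare Part B paid claims to IRS unpaid assessments by TIN,
    applying the report's exclusion criteria."""
    # Total paid claims per TIN.
    claims_by_tin = {}
    for claim in paid_claims:
        claims_by_tin[claim["tin"]] = claims_by_tin.get(claim["tin"], 0) + claim["amount"]

    # Total tax debt per TIN, after excluding disputed/duplicative and recent debts.
    debts_by_tin = {}
    for debt in tax_debts:
        if debt["type"] in EXCLUDED_DEBT_TYPES:
            continue  # neither agreed to by the taxpayer nor affirmed by a court
        if debt["tax_period_year"] >= 2005:
            continue  # recent debt that may be routinely resolved with IRS
        debts_by_tin[debt["tin"]] = debts_by_tin.get(debt["tin"], 0) + debt["amount"]

    # Keep only TINs where both totals meet the minimum significance threshold.
    return {
        tin: {"unpaid_taxes": debt_total, "paid_claims": claims_by_tin.get(tin, 0)}
        for tin, debt_total in debts_by_tin.items()
        if debt_total >= MIN_SIGNIFICANT_AMOUNT
        and claims_by_tin.get(tin, 0) >= MIN_SIGNIFICANT_AMOUNT
    }
```

Under these assumptions, a provider whose only debts are memo accounts, or whose paid claims total under $100, would drop out of the match entirely.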
To identify examples of abuse or potentially criminal activity, we selected 40 Medicare Part B physicians, health professionals, and suppliers with federal tax debts for detailed audit and investigation. The 40 cases were chosen using a nonrepresentative selection approach based on our judgment, data mining, and a number of other criteria. Specifically, we narrowed our selection based on the amount of unpaid taxes, the number of unpaid tax periods, the amount of payments reported by Medicare Part B, and indications that the owner(s) might be involved in multiple companies with tax debts. We obtained copies of automated tax transcripts and other tax records (for example, revenue officer’s notes and certain individual tax returns) from IRS, and reviewed these records to exclude physicians and suppliers that had recently paid off their unpaid tax balances and considered other factors before reducing our number of case studies to 40. We performed additional searches of criminal, financial, and public records. In cases where record searches and IRS tax transcripts indicated that the owners or officers of a business were involved in other related entities that have unpaid federal taxes, we also reviewed records of the related entities and the owner(s) or officer(s), in addition to the original business we identified. For each related entity, we determined whether that entity had Medicare Part B payments for the first 9 months of calendar year 2005 and had unpaid federal taxes as of September 30, 2005. We updated the tax debt amount as of September 30, 2006, to reflect any additional tax assessments or collections that had occurred. In instances where we identified related parties that had both Medicare Part B payments and tax debts, our case studies included those related entities, combining the unpaid taxes and Medicare Part B payments of the original individual or business with those of all related entities. 
To determine the extent to which HHS officials and their contractors are required to consider tax debts or other criminal activities in the enrollment of physicians, health professionals, and suppliers into Medicare, we examined Medicare regulations and HHS policies and procedures for enrollment. We also discussed policies and procedures used to enroll physicians, health professionals, and suppliers into Medicare with officials from two Medicare contractors. As part of these discussions, we inquired whether HHS and their contractors specifically consider tax debts or perform background investigations to determine whether prospective physicians, health professionals, and suppliers are qualified before their enrollment to Medicare is granted. To determine the extent to which HHS levies Medicare Part B payments to physicians, health professionals, and suppliers owing tax debts, we examined the statutory and regulatory authorities that govern the continuous levy program to determine whether any legal barriers exist. We also interviewed officials from HHS, two Medicare contractors, IRS, and Department of Treasury’s Financial Management Service (FMS) officials as to any operational impediments for the continuous levy of provider payments to pay federal tax debts. To determine the potential levy collections on the first 9 months of calendar year 2005, we used 15 percent and 100 percent of the total paid claim or total tax debt amount reported to TOP per IRS records, whichever is less. To be conservative, we used the 15 percent rate that FMS uses to levy civilian contractors. 
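The potential levy calculation described above (the lesser of the levy rate applied to total paid claims, or the total tax debt reported to TOP) can be sketched as follows. The function name and the provider figures are hypothetical, for illustration only; the report applied this logic to actual Medicare Part B payment and IRS tax debt data.

```python
# Sketch of the potential levy collection estimate described in the text.
# For each provider, the amount collectible is the lesser of (levy rate x
# total paid claims) and the tax debt owed. The provider figures below are
# hypothetical examples, not data from the report.

def potential_levy_collections(providers, levy_rate):
    """Sum, across providers, the lesser of the levy applied to Medicare
    payments and the tax debt actually owed."""
    return sum(min(levy_rate * p["paid_claims"], p["tax_debt"]) for p in providers)

providers = [
    {"paid_claims": 100_000, "tax_debt": 400_000},  # collection limited by payments
    {"paid_claims": 100_000, "tax_debt": 5_000},    # collection limited by debt owed
]

low_estimate = potential_levy_collections(providers, 0.15)   # FMS's 15 percent rate
high_estimate = potential_levy_collections(providers, 1.00)  # statutory 100 percent rate
```

In this toy example the 15 percent rate would capture $20,000 and the 100 percent rate $105,000, mirroring how the report's $50 million and $140 million estimates were derived from the same data under different rates.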
A gap will exist between what could be collected and the maximum levy amount calculated because (1) tax debts in TOP may not be eligible for immediate levy because IRS has not completed due process notifications, and (2) IRS may remove tax debts from the levy program because the taxpayer filed for bankruptcy, negotiated an installment agreement, or some other action which made the taxpayer ineligible for the levy program. To determine the reliability of the IRS unpaid assessments data, we relied on the work we performed during our annual audits of IRS’s financial statements. While our financial statement audits have identified some data reliability problems associated with the coding of some of the fields in IRS’s tax records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address this report’s objectives. Our financial audit procedures, including the reconciliation of the value of unpaid taxes recorded in IRS’s masterfile to IRS’s general ledger, identified no material differences. For HHS’s Medicare claims history and FMS’s TOP databases, we interviewed HHS and FMS officials responsible for their respective databases. In addition, we performed electronic testing of specific data elements in the databases that we used to perform our work. Based on our discussions with agency officials, review of agency documents, and our own testing, we concluded that the data elements used for this testimony were sufficiently reliable for our purposes. We conducted our audit work from June 2006 through February 2007 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. This appendix presents summary information on the abusive or potentially criminal activity associated with 25 of our 40 case studies. 
Table 2 summarizes the abuse or potentially criminal activity related to the federal tax system for these 25 physicians, health professionals, and suppliers that also received Medicare Part B payments in 2005. The cases involving businesses primarily involved unpaid payroll taxes. This appendix summarizes the extent to which Medicare physicians, health professionals, and suppliers have federal or state liens filed against their property. As discussed previously, certain tax debt information can only be discovered from public records, such as credit reports, if IRS files a federal tax lien against the property of a tax debtor. Of the 40 cases, 31 had federal tax liens filed by the Internal Revenue Service and 23 had tax liens filed by the states. Table 3 provides a summary of the federal or state tax liens filed for all 40 cases.
Under the Medicare program, the Department of Health and Human Services (HHS) and its contractors paid a reported $330 billion in Medicare benefits in calendar year 2005. Because GAO previously identified government contractors with billions of dollars in unpaid federal taxes, Congress requested that we expand our work in this area to all Medicare providers. This testimony addresses Medicare physicians, health professionals, and suppliers for services related to senior health care, who received about 20 percent of all Medicare payments. Because of limitations in HHS data, GAO was asked to determine if Medicare Part B physicians, health professionals, and suppliers have unpaid federal taxes, and if so, to (1) determine the magnitude of such debts; (2) identify examples of Medicare physicians and suppliers that have engaged in abusive or potentially criminal activities; and (3) assess HHS efforts to prevent delinquent taxpayers from enrolling in Medicare and levy payments to pay delinquent federal taxes. To perform this work, GAO reviewed data from HHS and the Internal Revenue Service (IRS). In addition, GAO reviewed policies, procedures, and regulations related to Medicare. GAO also performed additional investigative activities. We plan to report on the results of our work related to other Medicare providers, including any needed recommendations, later this year. Over 21,000 of the physicians, health professionals, and suppliers (i.e., about 5 percent of all such providers) paid under Medicare Part B during the first 9 months of calendar year 2005 had tax debts totaling over $1 billion. This $1 billion figure is understated because some of these Medicare health care providers have understated their income and/or not filed their tax returns. We selected 40 Medicare physicians, health professionals, and suppliers with high tax debt for more in-depth investigation of the extent and nature of any related abusive or potentially criminal activity. 
Our investigation found abusive and potentially criminal activity, including failure to remit to IRS individual income taxes and/or payroll taxes withheld from their employees. Rather than fulfill their role as "trustees" of this money and forward it to IRS, they diverted the money for other purposes. Willful failure to remit payroll taxes is a felony under U.S. law. Further, individuals associated with some of these providers used payroll taxes withheld from employees for personal gain (e.g., to purchase a new home) or to help fund their businesses. Many of these individuals accumulated substantial wealth and assets, including million-dollar houses and luxury vehicles, while failing to pay their federal taxes. In addition, some physicians received Medicare payments even though they had serious quality-of-care issues, including license reprimands and prior suspensions from state medical boards, revocations of hospital privileges, and previous exclusions from the Medicare program. HHS has not issued Medicare regulations or policies requiring Medicare contractors to consider tax debts in making a decision about whether to enroll a physician, health professional, or supplier into Medicare. Further, HHS has not established a policy to obtain taxpayer consent to obtain tax information from IRS as part of its Medicare eligibility decision-making process. IRS can continuously levy up to 100 percent of each payment made to a federal payee--for example, a Medicare physician--until that tax debt is paid. However, HHS is not participating in the continuous levy program and thus the government has not collected unpaid taxes from Medicare payments. In the first 9 months of calendar year 2005, we estimate that the government lost opportunities to collect between $50 million and $140 million by not participating in the continuous levy program.
Since its creation in 1970, OMB has had two distinct but parallel roles. OMB serves as a principal staff office to the President by preparing the President’s budget, coordinating the President’s legislative agenda, and providing policy analysis and advice. The Congress has also assigned OMB specific responsibilities for ensuring the implementation of a number of statutory management policies and initiatives. Most importantly, it is the cornerstone agency for overseeing a framework of recently enacted financial, information resources, and performance management reforms designed to improve the effectiveness and responsiveness of federal departments and agencies. This framework includes the 1995 Paperwork Reduction Act and the 1996 Clinger-Cohen Act; the 1990 Chief Financial Officers Act, as expanded by the 1994 Government Management Reform Act; and the 1993 Government Performance and Results Act. OMB faces perennial challenges in carrying out these and other management responsibilities in an environment where its budgetary role necessarily remains a vital and demanding part of its mission. OMB’s resource management offices (RMOs) have integrated responsibilities for examining agency management, budget, and policy issues. The RMOs are supported by three statutory offices whose responsibilities include developing governmentwide management policies: the Office of Federal Financial Management, the Office of Federal Procurement Policy, and the Office of Information and Regulatory Affairs. In fiscal year 1996, OMB obligated $56 million and employed over 500 staff to carry out its budget and management responsibilities. 
The Results Act requires a strategic plan that includes six elements: (1) a comprehensive agency mission statement, (2) long-term goals and objectives for the major functions and operations of the agency, (3) approaches or strategies to achieve goals and objectives and the various resources needed to do so, (4) a discussion of the relationship between long-term goals/objectives and annual performance goals, (5) an identification of key external factors beyond agency control that could significantly affect achievement of strategic goals, and (6) a description of how program evaluations were used to establish or revise strategic goals and a schedule for future program evaluations. Although OMB’s July draft included elements addressing its mission, goals and objectives, strategies, and key external factors affecting its goals, we suggested that these elements could be enhanced to better reflect the purposes of the Results Act and to more explicitly discuss how OMB will achieve its governmentwide management responsibilities. Furthermore, the July draft plan did not contain a discussion of two elements required under the Results Act: (1) the relationship between the long-term and annual performance goals and (2) the use of program evaluation in developing goals. The structural and substantive changes OMB made to its July 1997 strategic plan constitute a significant improvement in key areas. In general, OMB’s revised plan provides a more structured and explicit presentation of its objectives, strategies, and the influence of external factors. Each objective contains a discussion of these common elements, facilitating an understanding of OMB’s goals and strategies. OMB’s September plan addresses the six required elements of the Results Act. At the same time, enhancements could make the plan more useful to OMB and the Congress in assessing OMB’s progress in meeting its goals. 
The September plan’s mission statement recognizes both OMB’s statutory responsibilities and its responsibilities to advise the President, and the goals and objectives are more results-oriented and comprehensive than in the July draft. For example, the plan contains a new, results-oriented objective—“maximize social benefits of regulation while minimizing the costs and burdens of regulation”—for its key statutory responsibility regarding federal regulation review. The breadth of OMB’s mission makes it especially important that OMB emphasize well-defined and results-oriented goals and objectives that address OMB’s roles in both serving the President and overseeing the implementation of statutory governmentwide management policies. OMB more clearly defines its strategies for reaching its objectives in the September plan, particularly with regard to some of its management objectives. For example, in the draft plan, OMB did not discuss the accomplishments needed to fulfill its statutory procurement responsibilities. In contrast, the September plan lays out OMB’s long-term goal to achieve a federal procurement system comparable to those of high performing commercial enterprises. It says that OMB will identify annual goals to gauge OMB’s success, and discusses the means and strategies (such as working with agencies to promote the use of commercial buying practices) it will use to accomplish this goal. OMB also commits to working with the Federal Acquisition Regulation Council to revise regulations and publish a best practices document. In the area of regulatory reform, OMB also commits to improving the quality of data and analyses used in regulatory decision-making and to developing a baseline measure of the net benefits for Federal regulations. OMB’s clear and specific description of its strategies for its procurement and regulatory review objectives could serve as models for developing strategies for its Results Act and crosscutting objectives. 
Although strategies to provide management leadership in certain areas are more specific, other strategies could benefit from a clearer discussion of time frames, priorities, and expected accomplishments. For example, to meet its objective of working within and across agencies to identify solutions to mission-critical problems, OMB states it will work closely with agencies and a list of other organizations to resolve these issues. However, OMB does not describe specific problems it will seek to address in the coming years or OMB’s role and strategies for solving these issues. In defining its mission, goals and objectives, and strategies, OMB’s plan recognizes its central role in “managing the coordination and integration of policies for cross-cutting interagency programs.” The plan states that in each year’s budget, major crosscutting and agency-specific management initiatives will be presented along with approaches to solving them. The plan also provides a fuller discussion than was included in the July draft of the nature and extent of interagency groups that OMB actively works with in addressing a variety of functional management issues. Specific functional management areas, such as procurement, financial, and information management, are incorporated as long-term objectives. However, OMB’s plan could more specifically address how OMB intends to work with agencies to resolve long-standing management problems and high-risk issues with governmentwide implications. For example, in the information management area, OMB’s September plan refers to critical information technology issues, but it does not provide specific strategies for solving these issues. OMB discusses the ability of agencies’ computer systems to accommodate dates beyond 1999 (the Year 2000 problem) as a potential performance measure and states how it will monitor agencies’ progress. However, the plan does not describe any specific actions OMB will take to ensure this goal is met. 
We have previously reported on actions OMB needs to take to implement sound technology investment in federal agencies. In a related area, OMB has elsewhere defined strategies and guidance for agency capital plans that are not explicitly discussed in the strategic plan. With respect to programmatic crosscutting issues, questions dealing with mission and program overlap are discussed only generically as components of broader objectives (such as working with agencies to identify solutions or to carry out the Results Act). The Congress and a large body of our work have identified the fragmented nature of many federal activities as the basis for a fundamental reexamination of federal programs and structures. Our recent report identified fragmentation and overlap in nearly a dozen federal missions and over 30 programs. Such unfocused efforts can waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness. The OMB plan states that the governmentwide performance plan, which OMB must prepare and submit as part of its responsibilities under the Results Act, will provide the “context for cross-cutting analyses and presentations,” but provides no additional specification. OMB’s strategic plan also does not explicitly discuss how goals and objectives will be communicated to staff and how staff will be held accountable. For example, OMB’s plan states that OMB staff are expected to provide leadership for and to be catalysts within interagency groups. Yet, the plan does not explain how OMB’s managers and staff will be made aware of and held accountable for this or other strategies for achieving OMB’s goals. As we noted in our review of the July draft plan, OMB’s staff and managers have a wide and expanded scope of responsibilities, and many of OMB’s goals depend on concerted actions with other agencies. 
In particular, tackling crosscutting issues will also require extensive collaboration between offices and functions within OMB, which the plan could discuss in more detail. In this environment, communicating results and priorities and assigning responsibility for achieving them are critical. The September plan more consistently discusses the relationship between annual and long-term goals as part of a discussion of each of its objectives. The plan provides useful descriptions of the performance measures OMB may use to assess its progress in its annual performance plan. For example, the plan suggests that “clean audit opinions” could measure how OMB is achieving its objective in the area of financial management. Such efforts are noteworthy because some of OMB’s activities, such as developing the President’s budget or coordinating the administration’s legislative program, present challenges for defining quantifiable performance measures and implementation schedules. Although the September plan provides a more consistent and thorough treatment of key external factors in achieving its goals, OMB could explain how it can mitigate the consequences of these factors. For example, OMB states that its goal of ensuring timely, accurate, and high-quality budget documents depends on the accuracy and timeliness of agency submissions of technical budget information. However, there is a role for OMB in assisting agencies to improve the accuracy and timeliness of data, particularly for such complex issues as estimating subsidy costs for loan and loan guarantee programs. OMB’s discussion of program evaluation could provide more information about how evaluations were used in developing its plan and how evaluations will be used to assess OMB’s and federal agencies’ capacity and progress in achieving the purposes of the Results Act. In preparing its strategic plan, OMB states that it reviewed and considered several studies of its operations prepared by OMB, GAO, and other parties. 
The plan also states that OMB will continue to prepare studies of its operational processes, organizational structures, and workforce utilization and effectiveness. However, OMB does not indicate clearly how prior studies were used, and OMB does not provide details on a schedule for its future studies, both of which are required by the Results Act. OMB officials have said it would be worthwhile to more fully discuss the nature and dimension of program evaluation in the context of the Results Act. As we noted in our review of the July draft plan, evaluations are especially critical for providing a source of information for the Congress and others to ensure the validity and reasonableness of OMB’s goals and strategies and to identify factors likely to affect the results of programs and initiatives. A clearer discussion of OMB’s responses to and plans for future evaluations could also provide insight into how the agency intends to address its major internal management challenges. For example, a critical question facing OMB is whether the approach it has adopted toward integrating management and budgeting, as well as its implementation of statutory management responsibilities, can be sustained over the long term. In view of OMB’s significant and numerous management responsibilities and the historic tension between the two concepts—of integrating or segregating management and budget responsibilities—we believe it is important that OMB understand how the reorganization has affected its capacity to provide sustained management leadership. In our 1995 review of OMB’s reorganization, we recommended that OMB review the impact of its reorganization as part of its planned broader assessment of its role in formulating and implementing management policies for the government. 
We suggested that the review focus on specific concerns that need to be addressed to promote more effective integration, including (1) the way OMB currently trains its program examiners and whether this is adequate given the additional management responsibilities assigned to these examiners and (2) the effectiveness of the different approaches taken by OMB in the statutory offices to coordinate with its resource management offices and provide program examiners with access to expertise. In commenting on our recommendation, OMB agreed that its strategic planning process offered opportunities to evaluate this initiative and could address issues raised by the reorganization. Although OMB’s plan states that it will increase the opportunities for all staff to enhance their skills and capabilities, it does not describe the kinds of knowledge, skills, and abilities needed to accomplish its mission or a process to identify alternatives to best meet those needs. In summary, OMB has made significant improvements in its strategic plan. However, much remains to be done in improving federal management. We will be looking to OMB to more explicitly define its strategies to address important management issues and work with federal agencies and the Congress to resolve these issues. Mr. Chairman, this concludes our statement this morning. We would be pleased to respond to any questions you or other Members of the Subcommittee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
|
Pursuant to a congressional request, GAO discussed how well the Office of Management and Budget's (OMB) strategic plan addresses the Government Performance and Results Act's requirements and some of the challenges remaining for OMB to address in future planning efforts. GAO noted that: (1) since its July 1997 draft, OMB has made changes to the plan based on its continuing planning efforts, congressional consultations, and comments from others; (2) overall, OMB's September 1997 plan addresses all required elements of the Results Act and reflects several of the enhancements GAO suggested in its review of the July draft; (3) specific improvements include: (a) goals and objectives that show a clearer results-orientation; (b) more clearly defined strategies for achieving these goals and objectives; and (c) an increased recognition of some of the crosscutting issues OMB needs to address; (4) however, additional enhancements to several of the plan's required elements and a fuller discussion of major management challenges confronting the federal government could help make the plan more useful to the Congress and OMB; (5) for example, the plan could provide a more explicit discussion of OMB's strategies on such subjects as information technology, high-risk issues, overlap among federal missions and programs, and strengthening program evaluation; (6) OMB's strategic plan indicates that the agency will use its annual performance plan, the governmentwide performance plan, other functional management plans, and the President's Budget to provide additional information about how it plans to address some of these and other critical management issues; (7) GAO will continue to review OMB's plans and proposals as additional detail concerning objectives, time frames, and priorities is established; and (8) GAO's intention is to apply an integrated perspective in looking at these plans, consistent with the intent of the Results Act, to ensure that OMB achieves the results expected by 
its statutory authorities.
|
Countries provide food aid through either in-kind donations or cash donations. In-kind food aid is food procured and delivered to vulnerable populations, while cash donations are given to implementing organizations to purchase food in local, regional, or global markets. U.S. food aid programs are all in-kind, and no cash donations are allowed under current legislation. However, the administration has recently proposed legislation to allow up to 25 percent of appropriated food aid funds to purchase commodities in locations closer to where they are needed. Other food aid donors have also recently moved from providing primarily in-kind aid to more or all cash donations for local procurement. Despite ongoing debates as to which form of assistance is more effective and efficient, the largest international food aid organization, the United Nations (UN) World Food Program (WFP), continues to accept both. The United States is both the largest overall and in-kind provider of food aid to WFP, supplying about 43 percent of WFP’s total contributions in 2006 and 70 percent of WFP’s in-kind contributions in 2005. Other major donors of in-kind food aid in 2005 included China, the Republic of Korea, Japan, and Canada. In fiscal year 2006, the United States delivered food aid through its largest program to over 50 countries, with about 80 percent of its funding allocations for in-kind food donations going to Africa, 12 percent to Asia and the Near East, 7 percent to Latin America, and 1 percent to Eurasia. Of the 80 percent of the food aid funding going to Africa, 30 percent went to Sudan, 27 percent to the Horn of Africa, 18 percent to southern Africa, 14 percent to West Africa, and 11 percent to Central Africa. Over the last several years, funding for nonemergency U.S. food aid programs has declined. For example, in fiscal year 2001, the United States directed approximately $1.2 billion of funding for international food aid programs to nonemergencies.
In contrast, in fiscal year 2006, the United States directed approximately $698 million for international food aid programs to nonemergencies. U.S. food aid is funded under four program authorities and delivered through six programs administered by USAID and USDA; these programs serve a range of objectives, including humanitarian goals, economic assistance, foreign policy, market development, and international trade. (For a summary of the six programs, see app. I.) The largest program, P.L. 480 Title II, is managed by USAID and represents approximately 74 percent of total in-kind food aid allocations over the past 4 years, mostly to fund emergency programs. The Bill Emerson Humanitarian Trust, a reserve of up to 4 million metric tons of grain, can be used to fulfill P.L. 480 food aid commitments to meet unanticipated emergency needs in developing countries or when U.S. domestic supplies are short. U.S. food aid programs also have multiple legislative and regulatory mandates that affect their operations. One mandate that governs U.S. food aid transportation is cargo preference, which is designed to support a U.S.-flag commercial fleet for national defense purposes. Cargo preference requires that 75 percent of the gross tonnage of all government-generated cargo be transported on U.S.-flag vessels. A second transportation mandate, known as the Great Lakes Set-Aside, requires that up to 25 percent of Title II bagged food aid tonnage be allocated to Great Lakes ports each month. Multiple challenges in logistics hinder the efficiency of U.S. food aid programs by reducing the amount, timeliness, and quality of food provided. While in some cases agencies have tried to expedite food aid delivery, most food aid program expenditures are for logistics, and the delivery of food from vendor to village is generally too time-consuming to be responsive in emergencies. 
Factors that increase logistical costs and lengthen time frames include uncertain funding processes and inadequate planning, ocean transportation contracting practices, legal requirements, and inadequate coordination in tracking and responding to food delivery problems. While U.S. agencies are pursuing initiatives to improve food aid logistics, such as prepositioning food commodities and using a new transportation bid process, their long-term cost-effectiveness has not yet been measured. In addition, the current practice of selling commodities to generate cash resources for development projects—monetization—is an inherently inefficient yet expanding use of food aid. Monetization entails not only the costs of procuring, shipping, and handling food, but also the costs of marketing and selling it in recipient countries. Furthermore, the time and expertise needed to market and sell food abroad require NGOs to divert resources from their core missions. However, the permissible use of revenues generated from this practice and the minimum level of monetization allowed by the law have expanded. The monetization rate for Title II nonemergency food aid has far exceeded the minimum requirement of 15 percent, reaching close to 70 percent in 2001 but declining to about 50 percent in 2005. Despite these inefficiencies, U.S. agencies do not collect or maintain data electronically on monetization revenues, and the lack of such data impedes the agencies’ ability to fully monitor the degree to which revenues can cover the costs related to monetization. USAID used to require that monetization revenues cover at least 80 percent of costs associated with delivering food to recipient countries, but this requirement no longer exists.
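The cost-recovery benchmark just described (sale revenues measured against the combined costs of procuring, shipping, and marketing the food) can be sketched with a simple calculation. All dollar figures below are hypothetical, invented only to illustrate how the former 80 percent USAID requirement would be checked; they are not actual program data.

```python
# Illustrative sketch only: the dollar amounts are hypothetical, and the
# 80 percent figure is the former USAID cost-recovery benchmark noted
# in the text.

def cost_recovery_rate(revenue, total_costs):
    """Share of monetization-related costs recovered by sale revenues."""
    return revenue / total_costs

procurement = 1_000_000  # hypothetical commodity procurement cost, USD
shipping = 600_000       # hypothetical ocean freight and inland transport, USD
marketing = 150_000      # hypothetical marketing, storage, and sale costs, USD
revenue = 1_200_000      # hypothetical sale proceeds in the recipient country

rate = cost_recovery_rate(revenue, procurement + shipping + marketing)
print(f"{rate:.0%}")  # prints 69%, below the former 80 percent benchmark
```

Because monetization revenue data are not collected electronically, the agencies cannot routinely compute even this simple ratio across programs, which is the monitoring gap at issue.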
Neither USDA nor USAID was able to provide us with data on the revenues generated through monetization. These agencies told us that the information should be in the results reports, which are in individual hard copies and not available in any electronic database. Various challenges to implementation, improving nutritional quality, and monitoring reduce the effectiveness of food aid programs in alleviating hunger. Since U.S. food aid assists only about 11 percent of the estimated hungry population worldwide, it is critical that donors and implementers use it effectively by ensuring that it reaches the most vulnerable populations and does not cause negative market impact. However, challenging operating environments and resource constraints limit implementation efforts in terms of developing reliable estimates of food needs and responding to crises in a timely manner with sufficient food and complementary assistance. Furthermore, some impediments to improving the nutritional quality of U.S. food aid, including lack of interagency coordination in updating food aid products and specifications, may prevent the most nutritious or appropriate food from reaching intended recipients. Despite these concerns, USAID and USDA do not sufficiently monitor food aid programs, particularly in recipient countries, as they have limited staff and competing priorities and face legal restrictions on the use of food aid resources. Some impediments to improving nutritional quality further reduce the effectiveness of food aid. Although U.S. agencies have made efforts to improve the nutritional quality of food aid, the appropriate nutritional value of the food and the readiness of U.S. agencies to address nutrition-related quality issues remain uncertain. Further, existing interagency food aid working groups have not resolved coordination problems on nutrition issues. Moreover, USAID and USDA do not have a central interagency mechanism to update food aid products and their specifications.
As a result, vulnerable populations may not be receiving the most nutritious or appropriate food from the agencies, and disputes may occur when either agency attempts to update the products. Although USAID and USDA require implementing organizations to regularly monitor and report on the use of food aid, these agencies have undertaken limited field-level monitoring of food aid programs. Agency inspectors general have reported that monitoring has not been regular and systematic, that in some cases intended recipients have not received food aid, or that the number of recipients could not be verified. Our audit work also indicates that monitoring has been insufficient due to various factors including limited staff, competing priorities, and legal restrictions on the use of food aid resources. In fiscal year 2006, although USAID had some non-Title II-funded staff assigned to monitoring, it had only 23 Title II-funded USAID staff assigned to missions and regional offices in 10 countries to monitor programs costing about $1.7 billion in 55 countries. USDA administers a smaller proportion of food aid programs than USAID, and its field-level monitoring of food aid programs is more limited. Without adequate monitoring from U.S. agencies, food aid programs may not effectively direct limited food aid resources to those populations most in need. As a result, agencies may not be accomplishing their goal of getting the right food to the right people at the right time. U.S. international food aid programs have helped hundreds of millions of people around the world survive and recover from crises since the Agricultural Trade Development and Assistance Act (P.L. 480) was signed into law in 1954. Nevertheless, in an environment of increasing emergencies, tight budget constraints, and rising transportation and business costs, U.S. agencies must explore ways to optimize the delivery and use of food aid. U.S.
agencies have taken some measures to enhance their ability to respond to emergencies and streamline the myriad processes involved in delivering food aid. However, opportunities for further improvement remain to ensure that limited resources for U.S. food aid are not vulnerable to waste, are put to their most effective use, and reach the most vulnerable populations on a timely basis. To improve the efficiency of U.S. food aid—in terms of its amount, timeliness, and quality—we recommended in our previous report that the Administrator of USAID and the Secretaries of Agriculture and Transportation (1) improve food aid logistical planning through cost- benefit analysis of supply-management options; (2) work together and with stakeholders to modernize ocean transportation and contracting practices; (3) seek to minimize the cost impact of cargo preference regulations on food aid transportation expenditures by updating implementation and reimbursement methodologies to account for new supply practices; (4) establish a coordinated system for tracking and resolving food quality complaints; and (5) develop an information collection system to track monetization transactions. 
To improve the effective use of food aid, we recommended that the Administrator of USAID and the Secretary of Agriculture (1) enhance the reliability and use of needs assessments for new and existing food aid programs through better coordination among implementing organizations, make assessments a priority in informing funding decisions, and more effectively build on lessons from past targeting experiences; (2) determine ways to provide adequate nonfood resources in situations where there is sufficient evidence that such assistance will enhance the effectiveness of food aid; (3) develop a coordinated interagency mechanism to update food aid specifications and products to improve food quality and nutritional standards; and (4) improve monitoring of food aid programs to ensure proper management and implementation. DOT, USAID, and USDA—the three U.S. agencies to whom we direct our recommendations—provided comments on a draft of our report. These agencies—along with the Departments of Defense and State, FAO, and WFP—also provided technical comments and updated information, which we have incorporated throughout the report as appropriate. DOT stated that it strongly supports the transportation initiatives highlighted in our report, which it agrees could reduce ocean transportation costs. USAID stated that we did not adequately recognize its recent efforts to strategically focus resources to reduce food insecurity in highly vulnerable countries. Although food security was not a research objective of this study, we recognize the important linkages between emergencies and development programs and used the new USAID Food Security Strategic Plan for 2006-2010 to provide context, particularly in our discussion on the effective use of food aid. USDA took issue with a number of our findings and conclusions because it believes that hard analysis was lacking to support many of the weaknesses that we identified. We disagree. 
Each of our report findings and recommendations was based on a rigorous and systematic review of multiple sources of evidence, including procurement and budget data, site visits, previous audits, agency studies, economic literature, and testimonial evidence collected in both structured and unstructured formats. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have. Should you have any questions about this testimony, please contact Thomas Melito, Director, at (202) 512-9601 or MelitoT@gao.gov. Other major contributors to this testimony were Phillip Thomas (Assistant Director), Carol Bray, Ming Chen, Debbie Chung, Martin De Alteriis, Leah DeWolf, Mark Dowling, Etana Finkler, Kristy Kennedy, Joy Labez, Kendall Schaefer, and Mona Sehgal. The United States has principally employed six programs to deliver food aid: Public Law (P.L.) 480 Titles I, II, and III; Food for Progress; the McGovern-Dole Food for Education and Child Nutrition; and Section 416(b). Table 1 provides a summary of these food aid programs.
|
The United States is the largest global food aid donor, accounting for over half of all food aid supplies to alleviate hunger and support development. Since 2002, Congress has appropriated an average of $2 billion per year for U.S. food aid programs, which delivered an average of 4 million metric tons of food commodities per year. Despite growing demand for food aid, rising business and transportation costs have contributed to a 52 percent decline in average tonnage delivered between 2001 and 2006. These costs represent 65 percent of total emergency food aid expenditures, highlighting the need to maximize its efficiency and effectiveness. This testimony is based on a recent GAO report that examined some key challenges to the (1) efficiency of U.S. food aid programs and (2) effective use of U.S. food aid. Multiple challenges hinder the efficiency of U.S. food aid programs by reducing the amount, timeliness, and quality of food provided. Factors that cause inefficiencies include (1) insufficiently planned food and transportation procurement, reflecting uncertain funding processes, that increases delivery costs and time frames; (2) ocean transportation and contracting practices that create high levels of risk for ocean carriers, resulting in increased rates; (3) legal requirements that result in awarding of food aid contracts to more expensive service providers; and (4) inadequate coordination between U.S. agencies and food aid stakeholders in tracking and responding to food and delivery problems. U.S. agencies have taken some steps to address timeliness concerns. USAID has been stocking or prepositioning food domestically and abroad, and USDA has implemented a new transportation bid process, but the long-term cost effectiveness of these initiatives has not yet been measured. The current practice of using food aid to generate cash for development projects--monetization--is also inherently inefficient. Furthermore, since U.S.
agencies do not collect monetization revenue data electronically, they are unable to adequately monitor the degree to which revenues cover costs. Numerous challenges limit the effective use of U.S. food aid. Factors contributing to limitations in targeting the most vulnerable populations include (1) challenging operating environments in recipient countries; (2) insufficient coordination among key stakeholders, resulting in disparate estimates of food needs; (3) difficulties in identifying vulnerable groups and causes of their food insecurity; and (4) resource constraints that adversely affect the timing and quality of assessments, as well as the quantity of food and other assistance. Further, some impediments to improving the nutritional quality of U.S. food aid may reduce its benefits to recipients. Finally, U.S. agencies do not adequately monitor food aid programs due to limited staff, competing priorities, and restrictions on the use of food aid resources. As a result, these programs are vulnerable to not getting the right food to the right people at the right time.
|
As a nation we have made greater strides in articulating a budget control framework to achieve our overall fiscal policy goals than in designing a framework for addressing the composition of spending. The unified budget provides information on the federal government’s overall fiscal policy—the aggregate size of the government and its borrowing requirements. However, the current budget does not highlight different types of spending; budget data are not presented in a way that promotes choices between spending intended to have future benefits and spending for current consumption and improving the current quality of life. Since the current budget does not provide this type of focus on the composition of spending, it is difficult to assess the impact various types of spending would have on the long-term potential output of the economy. Alternative budget presentations that accompany the President’s budgets provide some supplemental information to congressional decisionmakers, but they are assembled after executive budget decisions have been made. These presentations have had little effect on the level of investment undertaken by the government. The congressional budget and appropriations process allocates spending by broad mission area and by agency. These were not established to distinguish between investment and consumption spending. In the budget process there is no explicit consideration of investment versus consumption; a dollar is a dollar is a dollar. The share of total federal budget outlays devoted to investment, defined by GAO as including research and development, human capital, and infrastructure that has a direct bearing on long-term economic growth, gradually declined about 2 percentage points from a high of just over 10 percent in 1981. Investment outlays for fiscal years 1997 to 2002 are projected to continue this downward trend.
This is in part a function of the fact that most investment spending is in the part of the budget considered “discretionary”—a part that has decreased. Some have proposed that the challenges agencies face in budgeting for capital acquisitions can be corrected by adopting a capital budget that separates revenues and outlays for long-lived physical assets from the rest of the budget. Many proposals for capital budgeting include an associated depreciation component for capital assets that is charged to the annual operating budget. In addition, these proposals commonly envision special budgetary treatment for capital by requiring balanced operating budgets while allowing deficit financing of capital. Capital budgeting of this nature presents several unique problems at the federal level. First, the federal government does not own many of the investments it makes that are intended to promote long-term private sector economic growth. Accounting standards developed by the Federal Accounting Standards Advisory Board are consistent with this thinking—assets not owned by the government are not reported on the government’s balance sheet. Second, appropriating only annual depreciation means the budget in any given year would reflect only a fraction of the total cost of an investment. This would undermine budgetary control of expenditures by not recognizing the full cost of an asset at the time a decision is made to acquire it. Currently, the law requires agencies to have budget authority before they can obligate or spend funds on any item. If the full amount of budget authority need not be available up front, the ability to control decisions when total resources are committed to a particular use is reduced. In addition, reporting only depreciation in the budget for an asset would make it look very inexpensive relative to other spending. It might also favor physical capital, which is amenable to depreciation, over human capital and research and development. 
This would create a tremendous incentive to classify as many activities as possible as capital. Even if the fund control issues could be resolved, determining an appropriate depreciation amount would present problems. Investments in human capital would be particularly difficult to depreciate because of the complexities associated with measuring the future value and useful life of human capital. Thus, including depreciation in the budget could result in spending decisions being based on data that are not easily explained or supported. It is also important to remember that neither states nor private enterprises budget for depreciation. States do not record annual depreciation in either their capital or operating budgets because depreciation has no effect on the flow of current financial resources. Private businesses use depreciation primarily for two purposes: (1) to match revenues with expenses in a given period for the purposes of reporting profit or loss in financial statements and (2) for tax purposes. Neither of these purposes is applicable to federal budgeting, except for federal business-type activities that consider revenues and expenses in setting user fees. Some have proposed to deficit-finance capital and investment on the ground that such spending creates economic growth. Deficit financing of capital, however, would also create problems for the integrity of the budget process. If capital assets can be deficit-financed while other types of activities cannot, there would be a significant incentive to categorize as many activities as possible as capital. In addition, the productivity-enhancing benefits of investments may be offset if these investments are financed by deficits that reduce national saving and so displace private investment for long-term growth. Deficit financing implies that public investment has a higher rate of return than the private investment it would displace, which is an arguable presumption. 
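The budget-control concern with depreciation-based budgeting can be illustrated with a simple arithmetic sketch. The figures below are purely hypothetical and are not drawn from the testimony; they show how a depreciation-only charge would make a large acquisition appear inexpensive in the year the commitment is made.

```python
# Hypothetical illustration: a $100 million asset with a 20-year useful life.
# All figures are invented for illustration, not drawn from the testimony.

asset_cost = 100.0   # total acquisition cost, in $ millions
useful_life = 20     # assumed service life, in years

# Up-front funding: the full cost is recognized as budget authority at the
# time the decision is made to acquire the asset.
upfront_budget_authority = asset_cost

# Depreciation-based budgeting: only one year's straight-line depreciation
# would appear in the annual operating budget.
annual_depreciation = asset_cost / useful_life

print(f"Year-1 budget authority, up-front funding:  ${upfront_budget_authority:.1f}M")
print(f"Year-1 charge, depreciation budgeting:      ${annual_depreciation:.1f}M")
print(f"Share of total cost visible in year 1:      {annual_depreciation / asset_cost:.0%}")
```

Under these assumptions, only 5 percent of the asset's total cost would be visible in the budget in the year the acquisition decision is made, which is the control problem the testimony describes.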
The problems discussed with these two approaches do not mean that reform in budgeting for capital is not feasible. Meaningful budget reforms can be considered to improve decision-making on investments, but they need to be tailored to the unique roles and environment of the federal government. In prior work, we have proposed an alternative approach for dealing with federal spending intended to promote the private sector’s long-term economic growth. Establishing an investment component within the existing budget constraints is one promising way to encourage the Congress and the executive branch to make explicit decisions about how much spending overall should be devoted to investment. By recognizing the different impact of various types of federal spending, an investment focus within the budget would provide a valuable supplement to the unified budget’s concentration on macroeconomic issues. It would direct attention to the consequences of choices within the budget under existing caps. It would prompt a healthy debate about the overall level of public investment—a level that is now not determined explicitly by policymakers but is simply the result of numerous individual decisions. The unique budgeting problems raised by this approach are how to define investment and how to incorporate and enforce this framework within the current budget process. Turning first to the definitional question, if an investment component within the budget is to be implemented in a meaningful fashion, it will be important to decide what activities qualify for inclusion. There are many possible definitions, but the definition used for budgetary purposes should depend on the purpose that an investment component is expected to serve. Many analysts have suggested that investment is that which increases long-term private sector economic growth. The federal government promotes long-term economic growth in two ways—through its broad fiscal policy and through public investment. 
Accordingly, we have suggested that investment spending be defined as federal spending, either direct or through grants, that is specifically intended to enhance the private sector’s long-term productivity. We recognize, however, that the Congress may choose to define this category in other ways that may highlight other spending that has long-term benefits. Our definition of investment spending includes spending on (1) some intangible activities, such as research and development, (2) human capital designed to increase worker productivity, particularly education and training, and (3) infrastructure—physical capital—that is viewed as having a direct bearing on long-term economic growth, such as highways, water projects, and air traffic control systems. As noted above, although much of this is federally funded, it is not federally owned. Spending for many federally owned physical assets, such as for federal land, for office buildings, and for defense weapons systems, would not be included because such spending does not directly enhance long-term private sector productivity. The current budget process embodies a system of controls set up by the Budget Enforcement Act of 1990 (BEA), which established a set of caps on discretionary spending as part of the process. Most investment spending is within this category of spending. If a target for aggregate investment spending were established within the overall discretionary caps, the budget structure and process would prompt explicit consideration of the level of support for investment within overall fiscal constraints. An investment component would direct attention to the trade-offs between investment and noninvestment activities without undermining fiscal policies and established fiscal policy paths. This approach has the advantage of focusing budget decisionmakers on the overall level of investment supported in the budget without losing sight of the unified budget’s impact on the economy. 
It also has the advantage of building on the current congressional budget process as the framework for making decisions. And it does not raise the budget control problems posed by the more traditional capital budgeting proposals that use depreciation and deficit financing. Although the investment component would be subject to budget controls, the existence of a separate component could create an incentive to categorize many proposals as investment. Any distinction in a system of restraint creates such incentives. If, however, the Congress and the President want a separate component to work, difficult definitional issues can be resolved. Defining mandatory programs for BEA was not easy in 1990, but the Congress and the executive branch did reach agreement. Also, as part of the 1997 Balanced Budget Act, the President and the Congress were able to reach agreement on certain categories of spending, such as education, to receive favorable budget treatment. Each type of capital we are discussing raises its own unique decision-making challenges. For investment, the definitional question discussed earlier will have to be addressed. Moreover, if the federal government is to focus on the allocation of spending between consumption and investment in order to improve long-term economic growth, then it is important that federal investments be wisely selected. Programs proposed or defended as investments should be evaluated against the criterion of improving long-term economic capacity. Such judgments are difficult, but they are not impossible. In 1993, we developed a series of questions related to a program’s economic returns, design, and performance measures that may help decisionmakers assess the relative worth of competing investments. First, is the program designed to produce long-term economic growth? Second, is it worth implementing, including whether there is really a need for federal government intervention? 
Third, is it well-designed, including some assurance that federal funds supplement and do not supplant nonfederal funds? And fourth, how should the program be evaluated after implementation? Ideally, policymakers should have access to measures of relative rates of return from federal investment programs in allocating resources among programs. However, such data are scarce and further research is needed to develop additional and better information on the economic effect of various types of investment proposals. Potential economic returns may determine whether to embark on a plan for increased federal investment. In seeking the “best” federal investment, however, decisionmakers should consider not only estimated returns, but also whether the federal government is the right entity to address that need. Alternative approaches to meeting the perceived public need should be considered before addressing the problem with federal outlays. Program design is important to the ability of a program to contribute to private sector output and economic growth. Decisionmakers should consider design issues to promote effective program delivery, including (1) coordination with other federal programs and those of state and local governments and (2) targeting of funds to achieve the highest possible benefit. Coordination with state and local governments is particularly important when federal investments are implemented through those governments. Policymakers need to be aware of the possibility that the states and localities could use federal investment funds to supplant their own spending. We have reported on this issue and found that studies have suggested that even the prospect of additional federal grant funds can prompt states and localities to reduce their planned spending, which could trigger a decline in total overall public spending for the funded activity. 
Thus, even if a program is properly classified as an investment, its economic impact can be thwarted when federal funds are used to replace nonfederal funding. It is important that all public investment programs include, at the time of their implementation, provisions for evaluating program outcomes. Policymakers should use outcome data to ensure that ongoing investment programs continue to be worthwhile and well designed under changing circumstances. Also, to improve the federal government’s ability to invest wisely in the future, it is important to learn from public investments that have already been made. As federal agencies find themselves under increasing budgetary constraints and increasing demands to improve service, the importance of making the most effective capital asset acquisitions grows. Since spending by the federal government to support its own operations would not qualify for the investment component, a different approach is required. Here, the unique capital budgeting problem is the funding of assets that provide benefits over the long term but that must be paid for in one up-front sum. Capital assets often require large amounts of resources up-front and some may generate long-term efficiencies and savings. Like investment spending, spending for most federal capital assets is provided in annual appropriations acts and therefore is categorized by BEA as discretionary spending. The total of all agencies’ discretionary appropriations must remain within BEA’s discretionary caps, which generally have been declining since 1991. Thus, federal capital spending, like all discretionary spending, is being squeezed. In fiscal year 1997, the federal government spent $72.2 billion (4.5 percent of total outlays) on direct major physical capital investment. Of this, the largest portion, $52.4 billion, was spent on defense-related capital assets, while $19.7 billion was spent for nondefense capital. 
The President’s budget estimates for spending for direct physical capital investments decrease to $64.1 billion in fiscal year 1998, and then rebound slightly to $68.8 billion in fiscal year 1999. Of these amounts, $15.4 billion and $18.5 billion are for nondefense capital in fiscal years 1998 and 1999, respectively. For more than 100 years, the Adequacy of Appropriations Act and the Antideficiency Act have required agencies to have budget authority for any government obligation, including capital acquisitions. This is referred to as up-front funding. The requirement of full up-front funding is an essential tool in helping the Congress make trade-offs among various spending alternatives. Up-front funding helps ensure that decisionmakers are fully accountable for the budgetary and programmatic consequences of their decisions. This also ensures that the full costs of capital projects are recognized at the time the Congress and the President make the commitment to undertake them. Agencies have not always requested or received full up-front funding for capital acquisitions, however, which has occasionally resulted in higher acquisition costs, cancellation of major projects, and inadequate funding to maintain and operate the assets. For example, our work has identified the lack of full up-front funding as one of the key factors in the high rate of cost overruns, schedule slippages, and terminations in the Department of Energy’s (DOE) major acquisitions. The Office of Management and Budget’s (OMB) long-term goal is to include full funding for all new capital projects, or at least economically and programmatically viable segments of new projects. However, adherence to the up-front funding requirement also extracts a price, at least from an individual agency’s viewpoint. 
The requirement that the full cost of a project must be absorbed in the annual budget of the agency or program combined with the effect of the tight BEA discretionary spending caps can make capital acquisitions seem prohibitively expensive. This has led some to suggest that the result is a bias against capital in budget deliberations. Although up-front funding within the budget caps presents a challenge, a number of agencies have found ways to meet that challenge. Our work at selected federal agencies has demonstrated that tools more modest than a full-scale capital budget can help accommodate up-front funding without raising the congressional or fiscal control issues of a separate capital budget. When accompanied by good financial management and appropriate congressional oversight, these tools can be useful in facilitating effective capital acquisition within the current unified budget context. In a 1996 report, we identified some strategies that have been successfully used by some agencies to accommodate spending on federal capital while preserving the fiscal discipline provided by the current budget controls. To identify these strategies, we examined how selected federal agencies plan and budget for capital assets. I must emphasize that agencies must obtain authority from the Congress to undertake some of these strategies and that some, such as the revolving funds and “savings accounts” discussed below, work best in agencies having proven financial management and capital planning capabilities. Budgeting for stand-alone stages of larger projects - A stand-alone stage is a unit of a capital project that can be economically or programmatically useful even if the entire project is not completed. For example, the Coast Guard may structure its contract for a class of new ships to acquire a lead ship with options for additional ships. The lead ship would be useful even if the entire class of ships is not completed as planned. 
Budgeting for stand-alone stages means that when a decision has been made to undertake a specific capital project, funding sufficient to complete a useful segment of the project is provided in advance. This helps ensure that a single appropriation will yield a functional asset while limiting the amount of budget authority needed. Using a revolving fund - Agencies use revolving funds to accumulate, over a period of years, the resources needed for up-front funding. By charging users for the cost to replace and maintain capital assets, revolving funds can help ensure that needed funds will be available for capital acquisitions and that program budgets reflect capital as well as operating costs. The concept of depreciation is useful for revolving funds when determining the fees to be charged to users. Establishing a “savings account” - A “savings account” achieves many of the same goals sought by revolving funds; however, users make voluntary contributions according to an established schedule for prospective capital purchases, rather than being charged retrospectively for capital usage. The “savings account” is designed to encourage managers to do better long-range planning for capital purchases and to enable them to accumulate over time the resources needed to fund capital acquisitions up-front. Contracting out and asset sharing - Some agency functions for which capital assets are acquired can be performed by the commercial market at less expense, thus reducing the amount of funding that an agency needs to have up front. Asset sharing involves sharing the purchase and use of capital assets with external entities. Sharing assets through contracting out can be especially useful and cost-effective when asset needs are short-term or episodic and nonrecurring. Agencies and the Congress must work together to find tools that encourage prudent capital decisions. 
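The revolving-fund strategy above can be sketched numerically. The example below is a minimal illustration with invented figures (a hypothetical replacement cost, service life, and number of user offices); it shows how a depreciation-style user charge accumulates the resources needed to fund a replacement asset up front.

```python
# Hypothetical sketch of a revolving-fund user charge. By recovering an
# asset's replacement cost over its service life, the fund builds up the
# balance needed to provide full up-front funding for the replacement.
# All figures are invented for illustration, not drawn from the testimony.

replacement_cost = 50.0  # $ millions, assumed cost of the replacement asset
service_life = 10        # years, assumed service life
users = 5                # hypothetical number of program offices sharing the asset

# Annual charge is set so the full replacement cost is recovered over the life.
annual_charge_total = replacement_cost / service_life
annual_charge_per_user = annual_charge_total / users

# The fund balance grows each year until it can cover the replacement up front.
balance = 0.0
for year in range(1, service_life + 1):
    balance += annual_charge_total

print(f"Annual charge per user: ${annual_charge_per_user:.1f}M")
print(f"Fund balance after {service_life} years: ${balance:.1f}M")
```

The depreciation concept is useful here, as the testimony notes, only for setting the fee schedule; budget authority for the replacement is still provided in full when the acquisition is made.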
Federal agencies should be encouraged to develop flexible budgetary mechanisms that help them accommodate the consistent application of up-front funding requirements while maintaining opportunities for appropriate congressional oversight and control. Regardless of the budget approach ultimately chosen for federal capital, it is essential that agencies take the time to properly plan for and manage their capital acquisitions. Prudent capital planning can help agencies to make the most of limited resources while failure to make timely and effective capital acquisitions can result in increased long-term costs. GAO, the Congress, and OMB have identified the need to improve federal decision-making regarding capital. Our past work has identified a variety of federal capital projects where acquisitions have yielded poor results—costing more than anticipated, falling behind schedule, and failing to meet mission needs and goals. For example, we have monitored the Federal Aviation Administration’s (FAA) acquisitions of major systems since FAA began its program to modernize the nation’s air traffic control system in the early 1980s. This modernization program has experienced substantial cost overruns, lengthy schedule delays, and performance shortfalls. Our work pointed to technical difficulties and weaknesses in FAA’s management of the acquisition process as primary causes for FAA’s recurring cost, schedule, and performance problems. Identified weaknesses included a failure to analyze mission needs, limited analyses of alternative approaches for achieving those needs, and poor cost estimates. As I mentioned earlier, we have also identified incremental funding as one of the causes of cost overruns, schedule slippages, and terminations in DOE’s major acquisitions. 
A number of laws enacted in this decade are propelling agencies toward improving their capital decision-making practices, including the Federal Acquisition Streamlining Act, the Clinger-Cohen Act, and the Government Performance and Results Act. To help agencies integrate and implement these various requirements, OMB recently developed a Capital Programming Guide—a supplement to OMB Circular A-11—which provides guidance to federal agencies on planning, budgeting, acquisition, and management of capital assets. We participated in the development of the guide and conducted extensive research to identify practices in capital decision-making used by outstanding state and local governments and private sector organizations. One federal agency, the U.S. Coast Guard, was also used as a case study. We will soon be reporting on the results of our research and I would like to provide you a preview of our results today. We identified five general principles that are important to the capital decision-making process as a whole, which I will summarize for you.

1. Integrate Organizational Goals Into the Capital Decision-making Process

Leading organizations begin their capital decision-making process by defining their overall mission in comprehensive terms and by articulating results-oriented goals and objectives. These organizations consider a range of possible ways to achieve desired goals and objectives—examining both capital and noncapital alternatives. For example, the U.S. Coast Guard now begins its process by conducting a comprehensive needs assessment through what it calls its mission analysis process.

2. Evaluate and Select Capital Assets Using an Investment Approach

An investment approach builds on an organization’s assessment of where it should invest its capital for the greatest benefit over the long term. Leading organizations use various decision-making practices and techniques to make comparisons and trade-offs between competing projects as well as to assess the strategic fit of the investment with the organization’s overall goals. Leading organizations also develop long-term capital plans that allow them to establish priorities for project implementation over the long term and assist with developing current and future budgets.

3. Balance Budget Control and Managerial Flexibility When Funding Capital Projects

Officials at leading organizations agree that good budgeting requires that full costs be considered when making decisions to provide resources. At the federal level, this calls for a balance between congressional budgetary control and agency flexibility in financing capital acquisitions. As I discussed earlier, some strategies currently exist that allow agencies a certain amount of flexibility in funding capital projects without the loss of budgetary control on the part of the Congress. At the state level, one state we studied is funding the construction of a college campus in stand-alone stages—completing and occupying one building at a time.

4. Use Project Management Techniques to Optimize Project Success

Leading organizations apply a variety of project management techniques to optimize project success and enhance the likelihood of meeting project-specific as well as organizationwide goals. These techniques include developing a project management team with the right people and the right skills, monitoring project performance, and establishing incentives to meet project goals.

5. Evaluate Results and Incorporate Lessons Learned Into the Decision-making Process

Leading organizations have a common trait—a desire to assess and improve their performance. 
Some of the organizations in our study have implemented systematic procedures for evaluating project results, while others have taken a broader approach and reevaluated their capital decision-making processes as a whole. In order to promote an efficient public sector and a healthy and growing economy, the federal government should make explicit and well thought-out decisions on national investments that will foster long-term economic growth as well as on spending for federal capital that provides long-term benefits to the government’s own operations. The creation of an investment component within the federal budget could help the Congress and the President make more informed decisions regarding an appropriate mix of spending while retaining the strengths and discipline fostered by a unified budget and the current congressional budget process. While federal capital spending is important to efficient long-term government operations, a goal of the budget process should be to assist the Congress in allocating resources efficiently by ensuring that various spending options can be compared impartially—not necessarily to increase capital spending. The requirement of full up-front funding is an essential tool in helping the Congress make trade-offs among various spending alternatives. In an environment of constrained budgetary resources, agencies need, and some have developed, strategies and tools that can help facilitate these trade-offs and that enable them to accommodate up-front funding. Agencies have demonstrated that more modest tools than a full-scale capital budget can be developed to accommodate up-front funding within the current unified budget. It is essential for federal agencies to improve their capital decision-making practices to ensure that the purchase of new assets and infrastructure will have the highest and most efficient returns to the government and that existing assets will be adequately repaired and maintained. 
Federal agencies could draw lessons from the strategies and practices used by leading federal, state, local, and private sector entities and more widely apply these practices to the federal decision-making process. This concludes my prepared statement. I would be happy to answer any questions that you may have at this time and look forward to working with you as the Commission completes its work.
|
GAO discussed ways the federal government should budget for capital, focusing on: (1) problems with the current process; (2) traditional capital budgeting proposals; (3) an alternative investment framework; (4) budgeting for federally owned capital assets; and (5) improving the way federal agencies plan for and manage federal capital acquisitions. GAO noted that: (1) the current unified budget does not highlight different types of spending; budget data are not presented in a way that facilitates choices between spending intended to have future benefits and spending for current consumption and improving the current quality of life; (2) since the current budget does not provide this focus on the composition of spending, it is difficult to assess the impact various types of spending would have on the long-term potential output of the economy; (3) some have proposed that the challenges agencies face in budgeting for capital acquisitions can be corrected by adopting a capital budget that separates revenues and outlays for long-lived physical assets from the rest of the budget; (4) many proposals for capital budgeting include an associated depreciation component for capital assets which is charged to the annual operating budget; (5) in addition, these proposals commonly envision special budgetary treatment for capital by requiring balanced operating budgets while allowing deficit financing of capital; (6) capital budgeting of this nature presents several unique problems at the federal level; (7) meaningful budget reforms can be considered to improve decision-making on investments, but they need to be tailored to the unique roles and environment of the federal government; (8) establishing an investment component within the existing budget constraints is one promising way to encourage Congress and the executive branch to make explicit decisions about how much spending overall should be devoted to investment; (9) programs proposed or defended as 
investments should be evaluated against the criterion of improving long-term economic capacity; (10) as federal agencies find themselves under increasing budgetary constraints and increasing demands to improve service, the importance of making the most effective capital asset acquisitions grows; (11) agencies and Congress must work together to find tools that encourage prudent capital decisions; (12) regardless of the budget approach ultimately chosen for federal capital, it is essential that agencies take the time to properly plan for and manage their capital acquisitions; and (13) GAO identified five general principles that are important to the capital decision-making process.
|
Coastal barriers are unique land forms that function as buffers, protecting the mainland against the destructive forces of hurricanes and other coastal storms. Coastal barriers also provide habitat for migratory birds and other wildlife; and they provide essential nesting and feeding areas for commercially and recreationally important species of fish and other aquatic organisms such as sea turtles. In the United States, coastal barriers are predominantly distributed along the Atlantic and Gulf coasts but can also be found in areas surrounding the Great Lakes, the Virgin Islands, and Puerto Rico. From the Gulf of Maine to Padre Island, Texas, coastal barriers form an almost unbroken chain along the coastline. Coastal barriers are generally unsuitable for development because the movement of unstable sediments undermines man-made structures. Despite this threat, coastal areas that include coastal barriers are among the most rapidly growing and developed areas in the nation, accounting for 53 percent of the total population in the United States according to a 2004 report by the National Oceanic and Atmospheric Administration (NOAA), National Ocean Service. In 1982, Congress enacted the Coastal Barrier Resources Act to minimize (1) the loss of human life; (2) wasteful expenditures of federal revenue; and (3) damage to fish, wildlife, and other natural resources associated with coastal barriers along the Atlantic and Gulf coasts by restricting future federal expenditures and financial assistance, which have the effect of encouraging development of coastal barriers. The act designated 186 units, comprising about 453,000 acres along 666 miles of shoreline from Maine to Texas, which would later be known as the John H. Chafee Coastal Barrier Resources System (CBRS). Subsequently, the CBRS was further expanded to include additional units along coastal states from Maine to Texas, plus units in the Great Lakes, the Virgin Islands, and Puerto Rico. 
Currently, the CBRS includes 585 units, which consist of undeveloped coastal barrier lands and associated aquatic habitat comprising nearly 1.3 million acres. The CBRS was also expanded to include 272 OPAs that comprise an additional 1.8 million acres of land and associated aquatic habitat. Most of the land in these OPAs is publicly held for conservation or recreational purposes, such as national wildlife refuges, national parks and seashores, and state and county parks; but some OPAs also include private property that may or may not be held for conservation. Under CBRA, no single federal agency has overall responsibility for administering activities within the CBRS; instead, all federal agencies must abide by the provisions of the act. CBRA does assign the Secretary of the Interior responsibility for, among other things, maintaining maps of each CBRS unit and recommending modifications to CBRS unit boundaries, as needed. Within the Department of the Interior, these responsibilities belong to FWS. Both agencies and property owners can request decisions from FWS regarding whether specific properties are within CBRS boundaries. Finally, agencies must consult with FWS to determine whether a proposed project is within the CBRS, and if so, whether the project is consistent with CBRA. The Coastal Barrier Resources Reauthorization Act of 2000 directed the Secretary of the Interior to complete a Digital Mapping Pilot Project for at least 50 but not more than 75 units in the CBRS and submit a report to the Congress that describes the results of the pilot project and the feasibility, data needs, and costs of completing digital maps for the entire CBRS. FWS is currently conducting this pilot project to create updated digital CBRS maps that would provide federal agencies and others with an enhanced tool for determining accurate boundary locations. 
Later, the Coastal Barrier Resources Reauthorization Act of 2005 directed the Secretary of the Interior to create digital maps by 2013 for all CBRS units not included in the pilot project. However, according to agency officials, the ability to conduct this project and the actual completion date will depend upon the specific funding that the agency receives for this project. CBRA does not prohibit development in CBRS units by owners who are willing to develop their properties without the benefit of federal financial assistance. Instead, with certain exceptions, CBRA prohibits federal expenditures or financial assistance within CBRS units that might encourage development. The prohibitions include, but are not limited to, the following:
- the construction or purchase of any structure, facility, or related infrastructure;
- the construction or purchase of any road, airport, boat landing facility, or other facility on, or bridge or causeway to, any CBRS unit;
- any project to prevent the erosion of, or to otherwise stabilize, any inlet, shoreline, or inshore area for the purpose of encouraging development; and
- the issuance of flood insurance coverage under the National Flood Insurance Act of 1968 for any new construction or substantially improved property.
CBRA allows certain federal assistance within the CBRS for limited activities after consultation with FWS. However, the act does not require the agencies to obtain FWS’ approval before acting. 
Such assistance includes, but is not limited to, the following:
- the exploration, extraction, or transportation of energy resources that can be carried out only on, in, or adjacent to a coastal water area;
- the maintenance or construction of improvements of existing Federal navigation channels and related structures;
- the maintenance, replacement, reconstruction, or repair, but not the expansion, of publicly owned or operated roads, structures, or facilities that are essential links in a larger network or system;
- military activities essential to national security; and
- assistance for emergency operations essential to the saving of lives and the protection of property.
CBRA has no provisions prohibiting the administration of federal regulatory activities, such as issuing certain permits, within the CBRS. Three federal agencies—the Corps, EPA, and the U.S. Coast Guard—issue permits that regulate, among other things, the discharge of dredged or fill material into federally regulated waters, including wetlands; the discharge of wastes into navigable waters; and the construction of bridges over navigable waters. Because much of the CBRS consists of wetlands and aquatic habitat, activities undertaken in these areas can require a permit from one or more of these agencies. Federal legislation other than CBRA provides the authority for issuing these permits. Among these are the Clean Water Act, the Rivers and Harbors Appropriation Act of 1899, and the Bridge Act of 1906, as amended. Despite the concentration of significant levels of development in a few units, most of the CBRS remains undeveloped. Specifically, we found that an estimated 84 percent of all CBRS units remain undeveloped—with no new structures built since the unit was included in the CBRS. We found that factors such as the lack of suitably developable land in the unit and state laws discouraging development were responsible for inhibiting development. 
We also determined that an estimated 13 percent of CBRS units experienced minimal levels of development—consisting of fewer than 20 additional structures per unit since becoming part of the CBRS—while 3 percent of CBRS units experienced significant development—100 or more additional structures per unit. According to local officials, commercial interest and public desire to build in some units and local government support for development were some of the key factors contributing to the development in the CBRS units we reviewed. Appendix II lists the units in our review and the status of development in those units. On the basis of our analysis of a random sample of CBRS units, we estimate that 84 percent of the units experienced no new development since their inclusion in the CBRS. For the units in our sample, the undeveloped units were generally smaller in total acreage and had less developable acreage than the developed units. Although CBRA does not appear to have been a primary factor in discouraging development in the units we reviewed, officials indicated that in those areas where CBRA prohibitions are complemented by local and state government policies discouraging development, it is unlikely that there will be significant increases in development. Local officials cited several factors as being primarily responsible for inhibiting development. The Lack of Suitably Developable Land. This was a primary factor in the lack of development in a number of CBRS units that we reviewed. For example, the Boat Meadow unit in Massachusetts is composed almost entirely of salt marshes with small sand bars scattered throughout shallow water, making the land unsuitable for development. Similarly, in the Wrightsville Beach unit in North Carolina, the sand continuously shifts, making the land too unstable for development. Figure 1 is an example of the type of terrain generally found in the Boat Meadow CBRS unit in Massachusetts. Lack of Accessibility to the Unit. 
A number of federal and local officials noted that some CBRS units are not easily accessible or are located in remote locations that are not desirable to developers. For example, a number of units, such as the Bay Joe Wise Complex in Louisiana, are accessible only by boat. Other units, such as the Boca Chica unit in Texas, are in such remote locations that an official said developers are not willing to build there. In addition, several remote and inaccessible locations do not currently have the infrastructure needed to develop the unit. For example, an official said the lack of existing infrastructure and the high cost of constructing development-quality water and sewage infrastructure have discouraged development on the North Padre Island unit in Texas. State Laws Discouraging Development. State laws were cited by a number of officials as reasons why development had not occurred in some CBRS units. Some states have adopted specific restrictions to prevent development in coastal or wetland areas, which are often found in CBRS units. For example, a number of units in Massachusetts—such as Black Beach and Squaw Island—have not experienced development due in part to wetland and coastal protection laws enacted by the state. In addition, neither Maine nor Massachusetts allows state funds or grants to be used for projects that encourage development on barrier beaches. In Rhode Island, any coastal development project must receive a permit from the Rhode Island Coastal Resources Management Council, and an official explained that it was highly unlikely that permits would be issued for new development in coastal regions of the state. Preservation Efforts by Conservation Groups. A number of CBRS units include land owned by entities seeking to preserve the area in its natural state. In some cases, CBRS units have lands that are owned by federal, state, or local governments, such as local parks or national forests. 
For example, the Whitefish Point unit in Michigan is part of the Hiawatha National Forest. In other cases, land in CBRS units is owned by conservation groups seeking to prevent development. For example, a significant portion of the Southgate Ponds unit in the U.S. Virgin Islands is owned by the St. Croix Environmental Association; the Fox Islands unit in Virginia is owned by the Chesapeake Bay Foundation; and the Pine Island Bay unit in North Carolina is mostly owned by the National Audubon Society. Although these owners have sought to prevent development within the unit, as the land becomes more valuable, owners may experience pressure to sell it for development purposes. Private home owners have also taken actions to prevent continued development in one CBRS unit we reviewed. Some portions of the Prudence Island Complex unit in Rhode Island are located in private home owners’ backyards. Home owners have voluntarily placed their land into a conservation easement to formally protect it from future development. Although the majority of CBRS units remain undeveloped, 16 percent have experienced some level of development. While development ranges from one additional structure in some units to more than 400 new structures in another, the amount of development in most of the units has been small. Thirteen percent of the units have added fewer than 20 structures. Where there has been significant development, it has been concentrated in a relatively small number of units. We estimate that only 3 percent of CBRS units have experienced the addition of 100 or more new structures since their inclusion in the CBRS. The majority of the CBRS units within our sample that have experienced development are located in the southern United States. Two units experiencing the most extensive development were the Topsail, North Carolina unit and the Cape San Blas, Florida unit. 
Several other units in the south, such as the Four Mile Village unit in Florida and Bird Key Complex in South Carolina, have plans for continued development. None of the units in our sample located in the northern United States had experienced such extensive development. One factor contributing to increased development in the south is the greater amount of developable acreage; 80 percent of the developable land in the CBRS is located in southern units—those located south of New Jersey. Local officials cited several factors as being primarily responsible for the development that has occurred. Commercial Interest and Public Desire to Build in the Unit. Officials told us that development had occurred in several areas because the public’s desire to develop in the unit was stronger than the disincentive of CBRA. For example, the Currituck Banks unit in North Carolina has experienced an increase of at least 400 new residential homes since its inclusion in the CBRS. Although this unit has beach access only for four-wheel-drive vehicles, approximately 75 percent of the land south of the unit is currently built to capacity, and the increasing demand for residential structures is sending developers into the adjoining CBRS unit. Local officials stated that the lack of federal assistance did not appear to have any effect on the rate of development in the area. Similarly, the Cape San Blas unit in Florida has continued to experience increased development with at least 900 new structures—primarily single family vacation homes—being built since the unit’s inclusion in the CBRS. Officials in Cape San Blas believe that as affordably priced oceanfront homes became harder to find at other coastal locations around Florida, Cape San Blas became a highly desirable location. 
In explaining the significant development that has occurred in the Topsail unit in North Carolina, officials stated that the basic reason was simply supply and demand: people want to live on the coast of North Carolina, and the area that includes the CBRS unit had developable land available. Local Government Support for Development. Local officials explained that local governments with a pro-development attitude aided in increasing development in CBRS units. For example, local officials in Topsail, North Carolina, told us that most of the 1,600 structures located in the Topsail CBRS unit were constructed after the unit’s inclusion in the CBRS. These officials indicated that the county government had begun development plans for land within the unit prior to its inclusion in CBRS. These officials noted that the county had targeted the area for development to promote tourism and increase the local tax base, and that certain infrastructure was built to support this increased development. As a result of these pro-development policies, a large portion of the unit has been developed with residential homes—many of which serve as vacation rentals during the summer months. Similarly, in the Cape San Blas unit in Florida, the local government had development plans for the area prior to the adoption of CBRA. Local officials there said that the area was already subdivided into lots for development and that some existing infrastructure, such as roads, water systems, and telephone systems, was already built when the unit was added to the CBRS. Availability of Affordable Private Flood Insurance. Officials familiar with several CBRS units told us that, initially, restrictions on the availability of federal flood insurance had little impact on the development that occurred in some CBRS units. Lenders did not require flood insurance in order for home owners to obtain mortgage loans at the time most of the development occurred. 
According to these officials, home owners within CBRS units who chose to obtain flood insurance could readily obtain private flood insurance at rates comparable with federal flood insurance. However, in the past few years FEMA has updated its flood-zone maps and has designated some CBRS areas as special flood hazard areas. As a result of this change in designation, owners in areas that once did not require flood insurance for financing are now required to have flood insurance before obtaining mortgage loans. At the same time, officials said that in several CBRS units the cost of private insurance has skyrocketed and is no longer comparable to national flood insurance program rates. According to a local banker in Cape San Blas, the owner of a $250,000 home outside the CBRS unit can obtain flood insurance through the National Flood Insurance Program for $470 per year, but private flood insurance for homes located in the CBRS unit that are not eligible for national flood insurance could cost between $5,070 and $12,500 a year, depending on the insurance company. The new requirements mandating flood insurance for mortgages in some units and the increased costs of private flood insurance may begin to affect development in the CBRS in the future, according to local officials. For example, officials in Currituck County noted that the flood zone determination change had significantly reduced the number of building permits issued for new development in the CBRS unit since 2005 and suggested that the unit will now experience less future development. Likewise, the Cape San Blas unit in Florida has also been affected by the flood zone determination change. A local official stated that since FEMA adopted a special flood hazard area for the CBRS unit in 2002, property values in the unit have decreased by 30 percent. 
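Using only the dollar figures the local banker cited, a quick back-of-the-envelope comparison shows how wide the premium gap is. The calculation below is purely illustrative; the amounts come from the paragraph above.

```python
# Annual flood insurance premiums cited for a $250,000 home in Cape San Blas.
nfip_premium = 470                        # NFIP rate, outside the CBRS unit
private_low, private_high = 5070, 12500   # private rates, inside the unit

# Private coverage runs roughly 11 to 27 times the NFIP rate.
low_multiple = private_low / nfip_premium
high_multiple = private_high / nfip_premium
print(f"{low_multiple:.1f}x to {high_multiple:.1f}x")  # prints "10.8x to 26.6x"
```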
Because the cost of private flood insurance has risen dramatically in recent years, a number of residents and officials representing areas within the CBRS, including Cape San Blas and Topsail, have unsuccessfully attempted to remove the areas from the CBRS, primarily so that residents would be eligible to obtain flood insurance through the National Flood Insurance Program. Our review of CBRS units did not include OPAs because FWS officials informed us that these areas were classified separately from system units and that the land was already protected from development by other mechanisms—such as its designation as a state or federal park. OPAs are not under the same limitations as CBRS units; the only restriction placed on federal spending within these areas is the prohibition on federal flood insurance. However, we found instances where land within OPAs was sold to private developers and development had increased in the area. For example, in the St. Andrews Complex unit in Florida, the Bahia de Tallaboa unit in Puerto Rico, and the Mustang Island unit in Texas, development has continued despite the units’ designation as OPAs. We found that federal agencies have provided some financial assistance prohibited by CBRA, some assistance allowed by CBRA, and hundreds of permits for federally regulated construction projects to entities within the CBRS units included in our review. Four agencies provided financial assistance expressly prohibited by CBRA to property owners in CBRS units. Three federal agencies also provided financial assistance to entities in CBRS units that is allowed under CBRA, but they do not track the amount of assistance they provided. As a result, we were unable to determine the total extent of such assistance. Finally, the Corps and EPA-authorized state agencies have issued hundreds of permits for a variety of federally regulated construction projects within CBRS units. 
Four federal agencies—FEMA, HUD, SBA, and VA—provided some financial assistance that is expressly prohibited by CBRA to property owners in CBRS units. Our review of approximately 4,500 addresses uncovered 73 active FEMA flood insurance policies, 37 inappropriate FEMA disaster assistance payments, 5 HUD home loan guarantees, 3 SBA disaster loans, and 11 VA home loan guarantees that should not have been made to property owners in CBRS units. Although three of the four agencies have procedures to prevent and detect assistance to property owners in CBRS units, agency officials cited several reasons why this erroneous assistance was provided in violation of CBRA, including the lack of updated CBRS maps, which makes determining the precise locations of properties and CBRS unit boundaries difficult. FEMA provides federally backed flood insurance for home owners, renters, and business owners in participating communities that are not in the CBRS. Structures that are built or substantially improved following their inclusion within the CBRS are not eligible for federal flood insurance. However, our review of policies active as of May 2006 identified 73 National Flood Insurance Program (NFIP) policies for properties in CBRS units. The flood insurance policies ranged from $26,500 to $350,000 and totaled approximately $20 million. Although these policies violated the CBRA, FEMA officials said it is unlikely that the agency would actually pay a claim on these policies, because before paying a claim, FEMA adjusters would first conduct a physical inspection of the property and determine whether it was in a CBRS unit. If a property was found to be within the CBRS, FEMA would deny the claim and refund the policy owner’s insurance premium. To prevent flood insurance policies from being issued for properties in CBRS units, FEMA’s Flood Insurance Manual requires that private insurance companies participating in the NFIP determine if a property is eligible for flood insurance. 
Prior to issuing a policy, the agent is required to review FEMA’s flood insurance maps to determine if the property is located within the CBRS and collect information to determine if the structure was built prior to the unit’s inclusion in the CBRS. However, according to FEMA, insurance agents have made mistakes and issued policies in violation of CBRA for two reasons: It may be difficult to locate a property and determine whether it is in a CBRS unit, especially when a property is near or adjacent to a CBRS boundary. For example, at one location we visited, we identified homes adjacent to each other where one property was in the CBRS and the other was not. In other CBRS units, some homes had backyards that fell within the CBRS. Furthermore, new streets may not be depicted on existing maps. According to FEMA officials, the insurance agent must often make a judgment call when determining whether a property is within the CBRS. The agent may not be familiar with CBRA prohibitions and may not follow procedures. According to FEMA officials and officials from a private insurance agency with whom we spoke, some home owners obtain flood insurance from insurance agents located inland, away from coastal areas, who might not have been aware of the CBRA restrictions. According to FEMA officials, the agency takes a number of steps to identify properties that may have inappropriately received federal flood insurance. Since 1998, FEMA has sought to assist private companies with identifying flood insurance policies that potentially were ineligible for flood insurance coverage because the property was within the CBRS. To accomplish this task, FEMA uses computer mapping technology to plot addresses and determine whether they are potentially in a CBRS unit. However, the computer software FEMA relies on cannot always correctly locate all addresses on the map. 
For example, this can occur if a street or address range is not included in the software, which can happen when a street or a range of addresses is new. Twenty of the 73 flood insurance policies that we determined were issued for a property that was in a CBRS unit could not be located on a map by FEMA’s computer software. In addition, computer mapping technology has inherent inaccuracies and may plot properties in the wrong location. For example, using our mapping software, we determined another 20 of the 73 flood insurance policies were for a property in the CBRS but were not identified as being in a CBRS unit by FEMA’s mapping software. FEMA officials said they recognize that their software may not always identify new addresses and streets in CBRS units, and so the agency obtains quarterly updates of new streets and addresses and rechecks insured properties against the updated information to identify any that might be located in CBRS units. When FEMA’s computer plotting reveals that a property for which a federal flood insurance policy has been issued may be in a CBRS unit, FEMA reports the error to the insurance company. Once an insurance company receives notification in the form of an error message that they may have written an ineligible policy, the company may take one of four actions: 1. The company can agree that the property is located in a CBRS unit and cancel the policy back to the inception date of coverage. 2. The company may agree that the property is located in a CBRS unit but prove that the building was constructed prior to the CBRS designation. In these cases, the policy is deemed valid and may remain in effect. 3. The company can disagree that the property is located in a CBRS unit and assume responsibility for the risk. In these cases, the policy would remain active, FEMA would continue to collect the premiums, but the insurance company would be responsible for paying any claims filed. 
Insurance companies have assumed liability for the risks associated with 29 of the 73 flood insurance policies that we identified had been issued for properties located in CBRS units. 4. The company can request that FWS make an official determination regarding whether the property is in the CBRS. If FWS determines that the property is in a CBRS unit, the policy is then cancelled back to the inception date of coverage. However, FEMA officials expressed concern about the length of time FWS takes to make a property determination. Typically, it takes FWS a year to respond to inquiries for a property determination. As of January 17, 2007, FEMA was waiting for determinations on 544 addresses from FWS. According to FWS officials, the process for making property determinations is labor intensive because they are using CBRS maps that were created more than 15 years ago and are not available in digital format. FWS officials told us that modernized digital maps of the CBRS would improve the accuracy and efficiency of the property determination process, allowing its customers and partners, in many cases, to determine within minutes whether a property is located within the CBRS. In 2000, the Congress directed the Secretary of the Interior to create draft digital maps for at least 50 and not more than 75 units, or nearly 10 percent of the CBRS. FWS has created draft digital maps of 60 CBRS units that it must submit to the Congress for its consideration. In May 2006, the Congress also instructed the Secretary of the Interior to create maps for the rest of the CBRS by May 2013. According to FWS, digital maps would replace the paper maps currently being used that are (1) outdated technologically and (2) sometimes inaccurate and may not align precisely with the natural or man-made features that the Congress intended the boundaries to follow. FWS officials believe that modernizing the CBRS maps will address the inaccuracies of the existing maps. 
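The digital boundaries FWS is developing would allow a geocoded property to be screened automatically rather than read off paper maps. The sketch below illustrates the kind of check such data would enable, using a standard ray-casting point-in-polygon test. The unit boundary and property coordinates are invented for illustration only; actual determinations rely on FWS's official CBRS maps.

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: returns True if the point (lon, lat) falls
    inside the polygon, given as a list of (lon, lat) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from the point cross this polygon edge?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical CBRS unit boundary (a simple rectangle) and two geocoded
# properties -- all coordinates are invented for illustration.
unit_boundary = [(-77.40, 34.40), (-77.30, 34.40),
                 (-77.30, 34.50), (-77.40, 34.50)]
print(point_in_polygon(-77.35, 34.45, unit_boundary))  # inside  -> True
print(point_in_polygon(-77.20, 34.45, unit_boundary))  # outside -> False
```

A production system would also have to handle the geocoding failures the report describes (new streets, address ranges missing from the software), which is why a flagged property still goes to FWS for an official determination.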
To implement the map modernization project, FWS officials said that they investigated several options for procuring data to produce the required draft digital maps, including federal, state, local, and private sources. In many cases, FWS was able to obtain data internally or from other federal agencies at little or no cost, including wetlands data and national wildlife refuge boundaries from within FWS, aerial imagery from the U.S. Geological Survey, hydric soils data from the Department of Agriculture’s Natural Resources Conservation Service, and digital boundaries for many federally protected areas from NOAA. FEMA is also conducting a map modernization effort that includes preparing digital flood insurance maps. In 2006, FWS entered into an interagency agreement with FEMA whereby FWS will place current CBRS boundaries onto FEMA’s digital flood maps. FEMA provided FWS with $40,000 for an initial set of maps for some units. While the FEMA maps are not the “official” CBRS maps adopted by the Congress, FWS officials said that these digital maps will allow property owners, insurance agents, and others to have a much more accurate and precise tool for determining whether a property or project site is located near a CBRS area and would require an official determination from FWS. FEMA’s Individuals and Households Program (IHP) provides housing assistance and other assistance, such as medical or funeral assistance, for needs arising from a declared emergency or major disaster. For owners or renters residing in CBRS units, FEMA regulations allow providing temporary housing assistance (rent) but generally do not allow providing funds for housing repairs or construction assistance. However, we found that since August 26, 1998, FEMA provided 37 disaster assistance payments to individuals in CBRS units included in our review totaling $25,393. 
Most of the payments were for purposes labeled by FEMA as “Other Eligible Property Items.” According to FEMA officials, “Other Eligible Property Item” payments were for post-disaster purchases for emergency needs such as chainsaws, generators, heating fuel, dehumidifiers, air purifiers, and wet/dry vacuums. These payments were made under six different disaster declarations, all to individuals living in CBRS units in North Carolina and Florida. The units included Coconut Point, Cape San Blas, Blue Hole, Ponce Inlet, and Ormond-by-the-Sea in Florida, and Currituck Banks and Topsail in North Carolina. In addition to payments for “Other Eligible Property Items,” one payment of $645.95 was made for home repairs. FEMA procedures require officials making payment determinations in potential CBRS areas to document that the property is not in a CBRS unit prior to approving assistance for those types of assistance not allowed in such areas. However, according to a FEMA official, in these cases the procedures were not followed when these payments were approved. Through its Mortgage Insurance Homes program, HUD insures lenders against losses on mortgage loans used to finance the purchase of proposed, under construction, or existing housing, as well as to refinance indebtedness on existing housing as long as these properties are not located in the CBRS. In our review of insured home loans active as of June 2006, we identified five HUD-insured loans for properties located in CBRS units. Three of the loans were for properties in the Prudence Island Complex unit in Rhode Island; one in the Topsail unit in North Carolina; and one in the Cape San Blas unit in Florida. These insured loans were approved by HUD between 1985 and 2000, with loan amounts ranging from approximately $50,000 to $137,000, for a total of about $384,000. 
Although all of HUD’s programs are subject to CBRA restrictions, HUD officials said that they have no procedure under their single family (one- to four-family property) mortgage insurance programs related to CBRA. HUD officials further indicated that while they could implement better controls for restrictions on providing single family mortgage insurance in the CBRS, it would be unnecessary in practical terms. HUD officials provided three primary reasons why it was unlikely that a HUD-insured loan would be provided for a property in a coastal area. First, HUD regulations require that flood insurance be obtained under the NFIP before HUD will insure single family mortgages for properties in FEMA-identified special flood hazard areas. HUD officials stated that because properties located in the CBRS would likely be in special flood hazard areas and NFIP flood insurance is prohibited in the CBRS, HUD would not be able to insure single family mortgages for these properties. However, HUD’s explanation does not account for the fact that portions of CBRS units may lie outside special flood hazard areas and that the NFIP prohibition is not universal: flood insurance may still be available for homes built before an area’s inclusion in the CBRS. Second, most HUD insurance for single family mortgages is for first-time home owners who typically are not buying homes in the higher price ranges found in the CBRS. Third, property values in the CBRS are such that mortgage amounts would likely exceed the program limits for typical HUD-insured single family mortgages, as the mortgage limit for a one-family property ranges from approximately $170,000 to $310,000, depending on the location. In response to our findings, HUD officials said that the department would be developing CBRA policy guidance and associated training to ensure future compliance. 
Following the issuance of a disaster declaration, SBA provides disaster loans to eligible home owners for repair or replacement of their primary residences. However, residences located in CBRS units are ineligible for this disaster loan assistance. During our review of the period January 1, 1990, through May 30, 2006, we found that SBA had made three disaster loans for home repairs for properties in CBRS units. The three loans ranged from $5,000 to $10,000 and totaled $24,200. These loans have been paid in full and were made to individuals in the Florida CBRS units of Blue Hole and Cape San Blas, and the Creek Beach unit in New York. To prevent disaster loans from being provided to properties within the CBRS, SBA procedures call for agency staff to consult FEMA’s flood maps to determine whether a property is within a CBRS unit before approving disaster loans. SBA officials acknowledge that two of these loans should not have been approved, but did not agree that the third loan was for a property within CBRS. These officials stated that it is sometimes difficult for agency staff to determine if a property is within the CBRS with the existing FEMA flood maps. SBA officials said that as a result of our review the agency will increase the number of quality assurance reviews conducted in any disaster area that includes a CBRS unit. VA issues home loan guarantees to help eligible recipients obtain homes or refinance home loans except in CBRS units. However, our review of home loan guarantees active as of September 2006 found that VA had provided 11 loan guarantees for homes in a CBRS unit. Nine of these 11 loan guarantees were issued to home owners in the Topsail unit in North Carolina, while the other two were provided to home owners in the Ormond-by-the-Sea unit in Florida. The amount of the 11 loan guarantees ranged from a low of about $14,340 to a high of about $45,900 for a total value of $352,188. 
VA officials told us that the agency’s Lenders Handbook includes provisions that inform readers that properties in CBRS units are ineligible as security for a VA-guaranteed loan. VA appraisers are instructed during training sessions to reject assignments appraising such properties. Also, to verify that loan guarantees are provided lawfully, agency officials said that they or their designees (1) examine appraisal paperwork for all loan applications looking for anomalies; (2) inspect 10 percent of all loan applicant properties to verify, among other things, that they are not in CBRS units; (3) review paperwork for 10 percent of all closed loans; and (4) visit lender offices and sample VA loans for compliance. In reviewing the provisions included in VA’s handbook, we determined that it inaccurately instructs appraisers to obtain the maps for determining the location of a property from the U.S. Geological Survey rather than from FWS. VA officials acknowledged that agency staff should have identified the 11 properties we discovered as located within the CBRS during their initial review of the appraisal paperwork. VA officials explained that as a result of our findings, they have (1) corrected the Lenders Handbook provisions to instruct staff to use maps maintained by FWS and (2) instructed officials at VA regional loan centers to modify their training for both lenders and appraisers to emphasize the procedures designed to prevent issuing loans to persons who reside in CBRS units. We found that three federal agencies had provided financial assistance allowable under CBRA to entities within the CBRS. We were unable to determine the total extent of such assistance because these federal agencies do not track the amount of allowable financial assistance they provide to entities in CBRS units, and they could not provide us with the data necessary to estimate the total assistance provided. 
After a disaster, FEMA may provide disaster funding in CBRS units for emergency assistance such as debris removal and emergency protection measures. FEMA may also provide disaster funding following an emergency for activities like repairing roads or utilities, repairing existing water channels, or disposing of sand. Because FEMA could not provide reliable data on whether this disaster assistance was within a CBRS unit for each project, we could not determine the full extent of the allowable disaster assistance provided by FEMA. However, with FEMA’s data, we were able to identify that some of the projects were within CBRS units. For example, since 1998, FEMA provided at least $5.6 million in disaster assistance to the Topsail unit in North Carolina to fund projects to remove debris, replace signs, and repair beach access crosswalks and public beach facilities after Hurricanes Ophelia, Floyd, Irene, and Isabel. Similarly, in both the Cape San Blas, Florida and Topsail, North Carolina CBRS units, FEMA provided funds to construct an emergency berm in order to protect existing development after storms destroyed protective dunes and caused beach erosion. Table 1 provides examples of some of the disaster assistance FEMA has provided to CBRS units since 1998. As mentioned earlier, FEMA is also allowed to provide limited disaster assistance to individuals through the IHP after the President declares an emergency or major disaster in an area, including CBRS units. We found that since August 26, 1998, FEMA provided $8,237 to 16 individuals in CBRS units for emergency rental assistance. These payments were made to individuals in CBRS units in Florida and North Carolina. An exception to the limitations within CBRA allows FHWA to administer federal funding for projects on publicly owned or operated roads that are essential links in a larger transportation network and do not expand the existing transportation system. 
Because FHWA determined, as stated in agency guidance, that roads within the federal highway system, including those in CBRS units, are generally “essential links” in a larger transportation network, most projects within CBRS units are permissible under CBRA after a consultation process with FWS. Although FHWA does not maintain data on which projects were located within CBRS units, we were able to identify—based on information provided by state officials—some examples of allowable projects in CBRS units that received federal funds from FHWA. For example, according to data from the Florida Department of Transportation, federal funding totaling approximately $1.1 million was provided to repair a road in the Cape San Blas unit after each of three hurricanes—Opal, Earl, and Ivan. An exception to the limitations within CBRA allows the Corps to provide assistance in CBRS units, after consultation with FWS, as part of its mission to maintain and improve existing navigation channels. We found that since 1983, the Corps performed at least 24 such projects in CBRS units, most of them to dredge channels. Many of these projects occurred along the Atlantic Intracoastal Waterway or in channels connecting this waterway to the Atlantic Ocean. Of the 24 projects, two-thirds occurred in CBRS units in South Carolina, while the others were located in North Carolina, Florida, and Massachusetts. However, it is difficult to calculate the value of the Corps’ assistance to CBRS units because nearly all of the Corps’ projects involve activities both inside and outside CBRS units, and the Corps does not break out project costs based on CBRS boundaries. EPA-authorized state agencies and the Corps have issued permits to property owners and entities within CBRS units for a number of different projects. Since 1983, EPA-authorized state agencies issued at least 41 permits to property owners and entities in nine different CBRS units. 
All of the permits were associated with the National Pollutant Discharge Elimination System (NPDES), primarily to allow storm water discharges from construction sites or discharges from water or wastewater treatment systems. Florida, as an EPA-authorized permitting state, issued 38 of the 41 permits. Of the remaining three permits, two were issued by New York and one by the U.S. Virgin Islands, both of which are authorized by EPA to issue NPDES permits. The Corps was unable to provide a complete list of all the permits it had issued since CBRA was enacted. However, we determined that since 1983, the Corps issued at least 194 permits in 20 different CBRS units for purposes such as erosion control, constructing piers and mosquito control ditches, filling wetlands, and raising fish and shellfish. Of these 194 permits, 83 were authorized under Section 10 of the Rivers and Harbors Appropriation Act of 1899, which gives the Corps authority to issue permits to construct piers or marinas in navigable waters; 87 were authorized under Section 404 of the Clean Water Act, which provides the Corps with the authority to issue or deny permits for discharges of dredged or fill material into waters under federal jurisdiction, including wetlands; and 24 involved activities covered by both Section 10 and Section 404. Almost two-thirds of these permits were issued to property owners and entities in CBRS units in Florida; the remaining permits were issued to entities in units in the Carolinas and New England. Although CBRA has limited the amount of federal financial assistance provided to some CBRS units, it does not appear to have been a major factor in discouraging development in those CBRS units that have developable land, local government and public support for development, and access to affordable private flood insurance. 
Despite CBRA’s prohibitions on federal assistance to units in the CBRS, four federal agencies—FEMA, HUD, SBA, and VA—have provided such assistance. While the amount of assistance provided in violation of CBRA is not large, it does raise concerns about the ability of federal agencies to fully comply with the requirements of the act. Unless federal agencies follow the procedures they have established to prevent the provision of prohibited assistance and have access to up-to-date and reliable maps to ensure that accurate determinations are made for properties located in CBRS units, it is likely that some violations of CBRA may continue to occur. In light of the federal financial assistance that was provided in violation of CBRA, we are recommending that the Secretaries of DHS, HUD, and VA, and the Administrator of SBA direct their agencies to (1) obtain official determinations from the FWS on whether the properties we identified as receiving federal assistance in violation of CBRA are in fact located within a CBRS unit and if they are, cancel all inappropriate loan guarantees and insurance policies that have been made to the owners of these properties and (2) examine their policies and procedures to ensure that they are adequate to prevent federal assistance that is prohibited by CBRA from being provided to entities in CBRS units. In addition, given the importance of digital maps to making accurate CBRS determinations, we are recommending that the Secretary of the Interior direct FWS to place a high priority on completing its efforts to develop digital maps that more accurately depict unit boundaries. We provided a draft of this report to the Department of Defense, DHS, DOI, HUD, SBA, and VA. We received comments via e-mail from the Department of Defense, DHS, and SBA, and we received written comments from the DOI, HUD, and VA. 
The Department of Defense and the SBA stated that they had no comments on the draft report, and DHS provided only technical comments and stated that it concurred with the report’s recommendations. In its written comments, DOI stated that it supports efforts to improve CBRS property determinations and ensure compliance with CBRA. DOI also indicated that it will consider our recommendation concerning prioritization of the completion of digital maps as it develops future budget requests. In its written comments, HUD stated that the loan guarantees in question have already been terminated. HUD also noted that it is now developing policy guidance and associated training to ensure that no future violations of CBRA occur. In its written comments, VA stated that it agreed with our findings and one of our recommendations but did not agree with our recommendation to cancel the inappropriate loan guarantees that it had made in violation of CBRA. VA stated that it did not believe that the small number of loan guarantees that we found indicated a pattern of abuse of CBRA and that canceling these guarantees would inflict a disproportionate harm on lenders and veterans who were not responsible for the erroneous property determinations that the loan guarantees were based on. While we understand VA’s concerns for the adverse impacts that could affect the parties involved, we believe that because these loan guarantees violate CBRA they should be rescinded. We have also incorporated the technical comments provided by DHS and DOI, as appropriate, throughout this report. HUD’s written comments are presented in appendix V, DOI’s written comments are presented in appendix VI, and VA’s written comments are presented in appendix VII. We are sending copies of this report to interested congressional committees as well as the Administrator, Small Business Administration; the Commander, U.S. 
Army Corps of Engineers; and the Secretaries of the Army, Defense, Homeland Security, Housing and Urban Development, Interior, and Veterans Affairs. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. We were asked to address issues related to the Coastal Barrier Resources Act, as amended (CBRA), by reviewing development that has occurred and federal funding that has been provided within the John H. Chafee Coastal Barrier Resources System (CBRS). Specifically, we were asked to determine (1) the extent of development within the CBRS and (2) the extent of federal assistance provided to CBRS units. To determine the extent of development in the system, we determined the number of structures within each unit. We accomplished this task by electronically mapping addresses with MapInfo and layering electronic boundaries of CBRS units from the Federal Emergency Management Agency’s (FEMA) Q3 Digital Flood Insurance Rate Map data product with the mapped addresses. FEMA’s Q3 data provides the external boundaries for CBRS units, though it is not an exact replica of the boundaries. Our results are representative of the extent of development within the CBRS. We focused our review on a stratified random sample of 91 units drawn from the 584 total units in the system, excluding otherwise protected areas. The sample was drawn so that the results from the sample would have a precision margin of about plus or minus 10 percentage points at the 95 percent confidence level. 
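The sampling precision described above can be illustrated with a short sketch. This is not GAO’s actual computation; the function and the worst-case proportion of 0.5 are assumptions made for illustration, but the sketch shows how a sample of 91 units from a population of 584 yields a margin of error near plus or minus 10 percentage points at the 95 percent confidence level.

```python
import math

def margin_of_error(p_hat, n, N, z=1.96):
    """Approximate 95 percent margin of error for an estimated
    proportion p_hat, with a finite population correction for
    sampling n units without replacement from a population of N."""
    fpc = (N - n) / (N - 1)
    se = math.sqrt(fpc * p_hat * (1 - p_hat) / n)
    return z * se

# Illustrative figures only: 91 sampled units out of 584,
# evaluated at p_hat = 0.5, the worst case for precision.
moe = margin_of_error(0.5, 91, 584)
print(round(moe * 100, 1))  # margin of error in percentage points
```

Estimates far from 50 percent carry smaller margins of error, which is consistent with the report’s statement that all percentage estimates have margins of error of plus or minus 10 percentage points or less.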
We were able to collect and analyze data for 84 of the 91 CBRS units, representing a weighted response rate of 92 percent. Of the 84 units, 42 are located in the north and 42 are located in the south. Northern units include those located in Connecticut, Maine, Massachusetts, Michigan, New Jersey, New York, and Rhode Island. Southern units are those located in Alabama, Florida, Louisiana, Maryland, Mississippi, North Carolina, South Carolina, Virginia, Texas, Puerto Rico, and the Virgin Islands. As a result of the high response rate, we reweighted the sample to represent the entire population of units. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from the sample have margins of error of plus or minus 10 percentage points or less. To identify the number of structures within the CBRS units in our sample, we obtained address or parcel data from local government offices—including tax assessor’s offices and on-line databases, geographic information system departments, and information technology departments. The exact dates for the parcel and address datasets vary by location; however, we requested the most recent available data. We also collected year-built data, value, and type of structure when available. After determining whether the structure was within the CBRS unit, we reviewed the year-built data to determine how many structures were built since the unit’s inclusion in the CBRS. 
We did not independently assess the reliability of each address dataset provided by the local governments. However, for a number of address datasets we assessed the reliability of the data through interviews with knowledgeable local officials and verification of the addresses in the dataset by visual inspection during site visits. The electronic mapping was performed using roads and highways data provided by MapInfo’s 2002 StreetSmart program. As a result, our analysis would not be able to map structures located on a street not included in that roads and highways dataset. Once the addresses were mapped, they were layered in MapInfo with FEMA’s Q3 data. On the basis of conversations with FEMA and FWS officials, we believe these data are sufficiently reliable for the purposes of this study. Because the CBRS boundaries in the FEMA Q3 data may not be precise, the results of our analysis could incorrectly reflect whether a structure is within a CBRS unit. As with any electronic mapping technology, accuracy issues are inherent and may impact the reliability of the results. We conducted site visits to 18 CBRS units in Florida, Massachusetts, North Carolina, Rhode Island, and South Carolina. During the site visits, we observed the CBRS units and interviewed local, state, and federal officials, home owners, realtors, insurance agents, and environmental officials to discuss the extent of development and factors encouraging or discouraging development in the units. In addition to the site visits, we conducted telephone interviews with local, state, or federal officials in Louisiana, Massachusetts, Maryland, Michigan, Rhode Island, Texas, Virginia, and Puerto Rico. Using the data collected during the site visits and the telephone interviews, we were able to determine reasons why development has or has not occurred in those units. 
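At bottom, determining whether a mapped address falls inside a unit boundary is a point-in-polygon test. The sketch below is a minimal, standard-library-only illustration of the common ray-casting algorithm; it is not the MapInfo implementation, and the boundary coordinates and addresses are invented for the example.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: returns True if point (x, y) lies inside
    the polygon given as an ordered list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a ray cast to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical unit boundary and geocoded addresses (not real coordinates).
unit_boundary = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
addresses = {"10 Beach Rd": (1.0, 1.0), "55 Inland Ave": (6.0, 1.0)}
in_unit = {addr for addr, (x, y) in addresses.items()
           if point_in_polygon(x, y, unit_boundary)}
```

In a real analysis the polygon would come from the Q3 boundary data and the points from geocoded addresses; imprecision in either layer produces exactly the misclassification risk described above.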
To determine the extent of federal assistance provided to CBRS units, we identified eight agencies with programs that may have provided assistance to these areas. Appendix IV includes a complete list of these programs. We reviewed and analyzed federal legislation and regulations that are applicable to federal assistance to CBRS units, including the CBRA. For each of the programs that provide assistance that is prohibited in a CBRS unit, we interviewed officials regarding their agency procedures for preventing assistance to CBRS units. In instances where we identified violations, we collected additional information about the assistance provided and interviewed agency officials regarding the agency’s plans to prevent prohibited assistance from being provided in the future. We compiled a list of 4,472 addresses in 37 CBRS units that had at least one address. (See app. II for a list of CBRS units in our review.) As the section above details, we obtained address data from local county government sources and electronically mapped addresses within the boundaries of CBRS units. Next, we obtained data from each agency on program assistance provided and determined whether this assistance was provided within one of the CBRS units or to an address in a CBRS unit. We gathered additional data to determine whether civil works projects administered by the Corps and permits issued by the Corps or EPA-authorized state agencies were for an activity within a CBRS unit. We asked agency officials to provide latitude and longitude data for every project that occurred or permit that was issued in any of the counties that contained at least one of the 37 CBRS units. Then we electronically mapped the coordinates to determine if the activity occurred within the boundaries of a CBRS unit. Corps officials, however, were unable to provide latitude and longitude data for a large percentage of the permits issued. 
For each agency program, we assessed the accuracy and reliability of the data system by obtaining from the agency written responses regarding (1) the agency’s methods of data collection and quality control reviews, (2) practices and controls over accuracy of data entry, and (3) any limitations of the data. We determined that the agencies’ data were sufficiently reliable for the purposes of our review unless noted below. The details of our analysis for each agency program are provided below. To identify flood insurance policies in CBRS units, we obtained data from FEMA’s National Flood Insurance Program for all policies as of May 8, 2006. As mentioned above, we compiled a list of 4,472 addresses in 37 CBRS units. Because structures built prior to a CBRS unit’s inclusion in the system may still obtain flood insurance, we had to determine whether each structure was built prior to the unit’s inclusion in the CBRS. Of the 4,472 addresses, we were able to determine that 648 structures were built prior to the unit’s inclusion in the CBRS, and we deleted these addresses from our analysis. We reviewed the addresses of all the structures built after the unit’s inclusion in the CBRS and addresses where we could not determine the year built in our flood insurance analysis. Thus, we reviewed 3,824 addresses in 21 units to determine if the structure had federal flood insurance. Structures that were built prior to the CBRS unit’s inclusion in the system cannot obtain federal flood insurance if the property has been substantially improved. We did not collect data on whether properties had been improved. If we identified a flood insurance policy among the addresses where we were unable to determine the year built, we reviewed the year-built data in FEMA’s database. If FEMA’s data revealed that the structure was built prior to the unit’s inclusion in the CBRS, we eliminated the match from our review. We provided FEMA our list of addresses located in 37 CBRS units. 
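The filtering and matching rule described above can be sketched as a simple set operation. The records and field names below are hypothetical, not FEMA’s schema; the sketch only illustrates the logic of flagging a policy match when a structure postdates the unit’s inclusion in the CBRS or its construction year is unknown.

```python
# Hypothetical CBRS address records (field names are illustrative).
cbrs_addresses = {
    "101 Dune Ln":  {"year_built": 2001, "unit_added": 1982},
    "12 Marsh Way": {"year_built": 1975, "unit_added": 1982},
    "7 Inlet Ct":   {"year_built": None, "unit_added": 1990},
}
# Hypothetical addresses holding federal flood insurance policies.
policy_addresses = {"101 Dune Ln", "7 Inlet Ct", "9 Harbor St"}

def needs_review(record):
    """Flag structures built after the unit joined the CBRS,
    plus those whose construction year is unknown."""
    return (record["year_built"] is None
            or record["year_built"] > record["unit_added"])

suspect = {addr for addr, record in cbrs_addresses.items()
           if needs_review(record) and addr in policy_addresses}
```

Here “12 Marsh Way” drops out because it predates the unit’s inclusion, mirroring the 648 pre-inclusion addresses the report deleted from its analysis.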
FEMA compared our list of addresses with addresses for which Individuals and Households Program (IHP) payments had been provided from its National Emergency Management Information System. FEMA reviewed payments from the system from August 26, 1998, to August 2, 2006. The Small Business Administration (SBA) provided loan data for business and disaster loans in all states with CBRS units from its Loan Accounting System. SBA provided us records for loans approved January 1, 1990, through May 30, 2006. While we did not find any matches for business loans, SBA officials told us that the address in the database could be a mailing address and not the physical address of the business. For SBA disaster loans, the SBA officials said that the address in the database is the location where the assistance was provided. We obtained data for 13 of the Department of Housing and Urban Development’s (HUD) single family and multifamily housing programs. (See app. IV for a list of the programs reviewed.) HUD provided data from its Real Estate Management System and Single Family Housing Enterprise Data Warehouse database for loans as of June 2006. The Department of Veterans Affairs (VA) provided loan guarantee data from its Home Loan Guaranty database. VA provided us records for active loan guarantees as of October 2006. We obtained data for 22 of the U.S. Department of Agriculture’s (USDA) business and industry, community facilities and single family and multifamily housing programs. USDA provided this information from its Automated Multi-Family Housing Accounting System, Guaranteed Loan System, MortgageServ Loan Servicing System databases, and Multi-Family Information System. The information was as of July 2006. For USDA’s utility programs, we used a different methodology because the projects are not associated with one structure as with a flood insurance policy or a housing loan. 
For USDA’s electric programs, USDA officials reviewed the construction work plans and the environmental reports for recent loans to electric service providers who provide service to selected counties in Florida, North Carolina, and Virginia that include one or more of the CBRS units included in our review. USDA officials determined that they do not appear to have financed projects serving a CBRS unit. For USDA’s water and waste programs, we requested the environmental assessment forms and statements for the projects in the counties that included a CBRS unit we identified as having 100 or more structures. We then reviewed these records to determine if they indicated that USDA officials had considered whether the projects would impact a CBRS unit when reviewing them. FEMA provided data from its National Emergency Management Information System on disaster assistance provided to counties and territories in our review, including data for the U.S. Virgin Islands and Puerto Rico. The data were from November 1998 through July 2006. We reviewed assistance designated by FEMA as being within a CBRS unit. However, we determined that this designation was not sufficiently reliable to identify all projects within the CBRS, so we could not determine the full extent of assistance provided to CBRS units. In addition, some assistance was provided countywide, and we could not determine if this assistance was provided to the unit. As a result, we provide examples of disaster assistance in our report. Where data were available, we electronically mapped the location of the assistance to verify it was within a CBRS unit. We obtained data for federal-aid highway projects that received federal funds between January 1996 and July 2006 in counties with a CBRS unit that we identified as having 10 or more structures. 
Federal Highway Administration (FHWA) officials extracted the data for these projects from the Financial Management Information System, a database that tracks projects that receive federal funding. To determine whether assistance was provided that is prohibited by law, we identified new construction projects or projects that added capacity to existing roadways. To determine whether these projects were within a CBRS unit, we relied upon interviews and location analysis provided by FHWA officials and Department of Transportation officials from New Jersey, North Carolina, Puerto Rico, Rhode Island, South Carolina, and the Virgin Islands. For projects in New Jersey, state officials provided aerial photographs with the location of federally funded projects. For North Carolina, we met with FHWA officials to review maps displaying the location of federally funded projects. For Puerto Rico, FHWA and Puerto Rico Department of Transportation officials electronically mapped the location of projects. For Florida, we relied upon analysis conducted by an official at the Florida FHWA division office. For Rhode Island, South Carolina, and the U.S. Virgin Islands, Department of Transportation officials or public works department officials provided a paper map marked with the location of the projects. Because of the volume of projects that are allowable, we did not determine the number of allowable projects in every CBRS unit. However, we did review the allowable projects in Gulf County, Florida, and Onslow County, North Carolina. This appendix provides information on the CBRS units that we reviewed. Table 2 shows the units included within our random sample and the approximate number of new structures in each unit. Tables 3 and 4 show the CBRS units we included in our analysis to determine the extent of federal expenditures and permits made to entities in the CBRS. 
Table 3 displays the CBRS units included within our sample that had structures—regardless of whether the structures were built prior to or after the units’ inclusion in the CBRS. Table 4 lists the additional CBRS units that were suggested for inclusion in our review by FWS because the agency had information suggesting that development was occurring in these areas. Within the state of Massachusetts, there are 62 CBRS units. The units consist of 64,076 total acres of land, with 88 percent of the land considered wetlands by FWS. We visited six CBRS units—Black Beach, Boat Meadow, Centerville, Herring Brook, Sandy Neck, and Squaw Island. These units were primarily salt marsh or wetland areas with narrow coastal beach areas. Only the Boat Meadow unit had experienced any new development since inclusion in the CBRS. Figure 2 displays the CBRS units we visited during our site visit. The Herring Brook and Sandy Neck units both include land used as a public beach destination. The Sandy Neck unit is a coastal barrier beach, with both public and private beach areas, approximately 6 miles long and varying in width from 200 yards to one-half mile. The unit is classified by the local government as a conservation and recreation area. Several homes are located on the unit, but one local official noted they are all registered by the state as historic places. The Centerville unit and the Squaw Island unit each have a barrier beach, but the beach is privately owned. The Centerville unit serves as a private beach and protective buffer for the homes bordering the unit. A local official noted the residents annually pay for a beach nourishment project in order to keep the protective buffer for their homes. The Squaw Island unit is a barrier beach and wetlands surrounding an area of developed land that was excluded from the CBRS. The excluded area consists of homes valued between $1.7 million and $6.9 million. Figure 3 displays a portion of Squaw Island. 
Both the Black Beach and Boat Meadow units consist primarily of salt marshes and wetlands. The southern portion of the Black Beach unit has one street of homes that were built prior to CBRA. One local official described the homes as “traditional Cape Cod” style houses. The Boat Meadow unit has several neighborhoods bordering the unit, with one neighborhood partially included in the unit. It is within this area that new development—three single-family homes—has occurred since the unit’s inclusion in the CBRS. Within the state of Rhode Island, there are 21 CBRS units. The CBRS units consist of 10,320 total acres, with 83 percent considered wetlands by the FWS. During our site visit to Rhode Island, we focused our review on the Prudence Island Complex unit. The Prudence Island Complex unit consists of numerous separate pieces of land all included in one CBRS unit. The unit is located in residential neighborhoods in several counties around Narragansett Bay. Although approximately 50 homes are located within the CBRS unit, only 8 of the homes have been built since inclusion within the CBRS. Figure 4 shows the CBRS units that we visited in Rhode Island. Several areas included in the Prudence Island Complex are backyards of private homes. Home owners voluntarily included the CBRS land in their backyards in conservation easements, limiting the right of future owners of the property to develop the land. Figure 5 shows one of the homes with a backyard that falls within the CBRS unit boundaries. Another area included in the unit is owned by the Rhode Island Country Club and serves as a golf practice area. Figure 6 shows the country club land that is included in the CBRS. The unit also includes a small beach and a wetland inlet located in a residential neighborhood. The inlet leads to the Rhode Island Country Club. A local official stated that the Country Club has asked the U.S. 
Army Corps of Engineers to re-dredge the inlet to improve the playability of the golf course, which becomes heavily saturated during rains. Dredging within the CBRS unit would allow water to run off the course faster. Figure 7 shows the area within the CBRS unit that would be dredged. Within the state of South Carolina, there are 16 CBRS units. FWS officials determined that the units consist of 97,856 total acres, with 90 percent of the land considered wetlands. We visited two units in South Carolina—Bird Key Complex and Captain Sams Inlet (see fig. 8). Each of the units has experienced the addition of 10 or fewer residential homes since its inclusion in the CBRS. The developed portions of both of these units are located on coastal islands—the Captain Sams Inlet homes are located on Seabrook Island, and the Bird Key Complex homes are located on Kiawah Island. Development in the Captain Sams Inlet CBRS unit is located in the Seabrook Island Resort—a 2,200-acre, privately gated, beachfront community on Seabrook Island. According to local officials, the title to the land where these homes are located was in dispute for years, which delayed the area’s development relative to the rest of the island. Local officials also stated that they believe that if the title to the land had not been in dispute, the area would have developed at the time of the CBRS unit designations and most likely would not have been included in the CBRS. Because of the unit’s inclusion in the CBRS, the property owners in the unit are no longer eligible for certain types of federal assistance, in particular federal flood insurance, which they noted is much less expensive than privately available insurance. Officials with whom we met on neighboring Kiawah Island stated that a developer has plans to build up to 50 units on a 20-acre portion of the Captain Sams Inlet CBRS unit that is located on Kiawah Island. 
Development in the Bird Key Complex CBRS unit is located on the northeast portion of Kiawah Island, which is also a privately gated, beachfront community with approximately 3,000 homes. The southern portion of the CBRS unit includes a few homes that we identified as being located in the unit, an 18-hole golf course, and an area of land called “Cougar Island.” Kiawah officials told us that a private developer has plans to build 360 homes on 24 acres of Cougar Island at a future date. Within the state of Florida, there are 67 CBRS units. The units range extensively in size and composition and encompass 285,937 total acres along both the Atlantic and Gulf Coasts. Overall, 87 percent of the land within the units is considered wetlands by FWS. We visited three units— Four Mile Village, Cape San Blas, and Deer Lake Complex (see fig. 9). All three units we visited had experienced some level of development. However, the development ranged from 11 new structures in Deer Lake Complex to at least 900 new structures in Cape San Blas since the units’ inclusion in the CBRS. The Four Mile Village unit in Florida has experienced an increase of at least 100 new residential structures since its inclusion in CBRS. This unit is expected to continue to experience development, as a 167-home private development project called Cypress Dunes is completed. The Cypress Dunes project consists of a 44-acre gated community and will include a clubhouse, pool, exercise center, dining facility, and tennis courts, all entirely within the CBRS unit. The 1,637-acre Topsail Hill Preserve State Park makes up more than one-half of the Four Mile Village CBRS unit. The preserve was purchased in 1992 with funds from the Conservation Acquisition of Recreation Lands program, also known as Forever Florida. 
Topsail was purchased for its unique natural ecosystems, including freshwater coastal dune lakes, wet prairies, scrub, pine flatwoods, marshes, cypress domes, seepage slopes, and 3.2 miles of sparkling white sand beaches. The park also includes areas to bike, walk, swim, and fish, as well as beach access, plus a full-facility campground featuring a swimming pool, tennis courts, and shuffleboard courts. The Cape San Blas CBRS unit is located on a peninsula in the Florida panhandle. It has experienced significant development since its inclusion within the CBRS, with the addition of at least 900 new homes. Primarily, the homes are single-family residences used as vacation homes and rentals. In November 2002, FEMA designated parts of Cape San Blas as a special flood hazard area. Mortgage lenders require home owners in these zones to obtain flood insurance. Because federal flood insurance is not available in the CBRS, home owners with mortgages must obtain private flood insurance. At the same time, officials told us that the cost of private insurance has skyrocketed and is no longer comparable to National Flood Insurance Program rates. According to local officials, tourism in the Cape San Blas area is important to the economy of the county. They told us that property values in the unit have decreased since FEMA adopted a special flood hazard area for the CBRS unit. Residents and local officials have unsuccessfully attempted to remove Cape San Blas from the CBRS so that residents would be eligible to obtain flood insurance through the National Flood Insurance Program. In the 109th Congress, legislation was introduced in the House of Representatives that would exempt Cape San Blas, along with another unit, from CBRA’s prohibitions and the limitations on flood insurance. However, the bill never came to a vote. We identified some development in the Deer Lake Complex unit since its inclusion within the CBRS. A total of 11 new single-family homes have been constructed within the unit. 
Within the state of North Carolina, there are 10 CBRS units consisting of 52,215 total acres—approximately 6,809 of those are considered developable acres by FWS. We visited four CBRS units—Topsail, Lea Island, Currituck Banks, and Wrightsville Beach (see fig. 10). Both Topsail and Currituck Banks have experienced significant levels of development since inclusion within the CBRS. In contrast, Lea Island and Wrightsville Beach are impractical locations for development, as they are significantly affected by erosion and shifting sands. The Topsail unit in North Topsail Beach is a low-lying barrier island without the protection of substantial dunes. It has a total of approximately 1,600 structures, and local officials stated that most of the structures were built after CBRA was enacted. The unit consists of single and multifamily homes, a few hotels/motels, a convenience store, and the North Topsail Beach Town Hall. In recent years, the unit has been hit several times by hurricanes. For example, in 1996, Hurricanes Bertha and Fran caused significant damage. The storms leveled dunes, cut new channels across the island, dumped tons of sand, and destroyed more than 300 buildings. The federal government provided funds that assisted in repairing the streets, repairing water and sewer lines, replacing signs, and removing substantial debris. Since that time, the area has been rebuilt, but other storms have continued to cause damage. We identified at least $5.6 million in disaster assistance that was provided to entities in the unit since November 1998. Portions of the Topsail CBRS unit have experienced substantial levels of erosion. As the soil erodes, the ocean becomes dangerously close to the homes. Figure 11 shows one of several homes in the Topsail area where the ocean waves make contact with the home’s foundation. Several areas outside of the CBRS unit have approved plans for a federally funded Corps beach renourishment project. 
However, because areas within the CBRS unit are ineligible for federal funding for a beach renourishment project, local officials are pursuing other opportunities to fund the portion of the project that falls within the CBRS boundaries. For example, the Town of North Topsail Beach recently proposed a $34 million bond package to pay for the beach renourishment project, but the voters rejected the proposal in November 2006. According to local officials, during the time when much of the development occurred in the Topsail unit, affordable private flood insurance was generally available. In recent years, however, the cost of private flood insurance has increased tremendously. These officials said that many residents are now frustrated with CBRA’s prohibitions on the availability of federal flood insurance and federal funding for beach renourishment projects. According to these officials, residents in the Topsail CBRS unit are upset that they must pay significantly higher insurance premiums than their neighbors who own properties just outside of the unit and who can obtain federal flood insurance. The Currituck Banks CBRS unit is located on the Outer Banks of North Carolina, with its northern boundary at the Virginia state line. The unit has also experienced significant new development, with at least 400 new residential homes built since inclusion in the CBRS. Local officials stated that rapid development has occurred in the area since the late 1980s and that, as of June 2006, there were 550 single-family dwellings within the unit. However, officials noted that this represents only 18 percent of the total capacity of homes that can be built in the unit. County planning staff noted that the area currently has 3,088 actual or planned building lots available. 
Although the Currituck Banks unit does not have any paved roads and is accessible only by four-wheel-drive vehicle or boat, it continues to be developed, partly because people on the Outer Banks are seeking the solitude that living in the CBRS unit can provide. Moreover, the unit has an extensive canal system that allows residents direct boat access to their homes and the mainland. The Lea Island CBRS unit is a tiny barrier island, accessible only by boat, and located south of Figure Eight Island. The island is privately owned, but local officials stated that conservation groups are slowly trying to buy more of the island. The island is approximately 60 acres in size, with most of the land less than 10 feet above sea level. The island is in a constant state of flux due to erosion and shifting sand. According to a local coastal official, 15 homes previously existed on Lea Island, but all of them— except for one small cabin—had been destroyed by natural disasters. At the time CBRA was enacted, FWS determined that the Wrightsville Beach unit had 83 developable acres of land. However, sand continuously shifts within the unit. At one point, the majority of sand in the unit had shifted to such an extent that the entire unit was under water. According to local officials, to keep the unit above water, local entities must continually dredge an inlet adjacent to the unit to replenish the unit with sand.
National Flood Insurance Program
Individuals and Households Program
Public Assistance Program (Disaster)
In addition to the contact named above, Sherry McDonald, Assistant Director; Natalie Herzog; Stuart Ryba; Jay Spaan; Amy Ward-Meier; and Leigh M. White made key contributions to this report. Also contributing to this report were Kevin Bray, John Delicath, Nancy Hess, Gloria Saunders, and Jay Smale.
In 1982, Congress enacted the Coastal Barrier Resources Act. The Coastal Barrier Resources Act, as amended (CBRA), designates 585 units of undeveloped coastal lands and aquatic habitat as the John H. Chafee Coastal Barrier Resources System (CBRS). CBRA prohibits most federal expenditures and assistance within the system that could encourage development, but it allows federal agencies to provide some types of assistance and issue certain regulatory permits. In 1992, GAO reported that development was occurring in the CBRS despite restrictions on federal assistance. GAO updated its 1992 report and reviewed the extent to which (1) development has occurred in CBRS units since their inclusion in the system and (2) federal financial assistance and permits have been provided to entities in CBRS units. GAO electronically mapped address data for structures within 91 randomly selected CBRS units and collected information on federal financial assistance and permits for eight federal agencies. An estimated 84 percent of CBRS units remain undeveloped, while 16 percent have experienced some level of development. About 13 percent of the units experienced minimal levels of development--typically consisting of fewer than 20 additional structures per unit since becoming part of the CBRS, and about 3 percent experienced significant development--consisting of 100 or more structures per unit--since becoming part of the CBRS. According to federal and local officials, CBRA has played little role in the extent of development within the CBRS units that GAO reviewed because they believe that other factors have been more important in inhibiting development. These include (1) the lack of suitably developable land in the unit; (2) the lack of accessibility to the unit; (3) state laws discouraging development within coastal areas; and (4) ownership of land within the unit by groups, such as the National Audubon Society, that are seeking to preserve its natural state. 
In units that GAO reviewed where development had occurred, federal and local officials also identified a number of factors that have contributed to development despite the units' inclusion in the CBRS. These include (1) a combination of commercial interest and public desire to build in the unit, (2) local government support for development, and (3) the availability of affordable private flood insurance. Multiple federal agencies have provided property owners in CBRS units with some financial assistance that is expressly prohibited by CBRA and some assistance that is allowed under CBRA, and have issued hundreds of permits for federally regulated development activities within the units. Specifically, four agencies--the Department of Housing and Urban Development, the Department of Veterans Affairs, the Federal Emergency Management Agency, and the Small Business Administration--provided financial assistance prohibited by CBRA, such as flood insurance and loan guarantees, totaling about $21 million to property owners in CBRS units. Although most of these agencies had processes in place to prevent such assistance from being provided, they cited problems with inaccurate maps as being a key factor leading to these errors. With regard to financial assistance allowed by CBRA, GAO found that three federal agencies have provided such assistance but did not track how much assistance they provided, so the total extent of this assistance is unknown. With regard to permits issued in CBRS units for federally regulated activities, GAO identified hundreds of permits issued by the Army Corps of Engineers and state agencies authorized to issue permits on behalf of the Environmental Protection Agency. These permits covered various activities, such as the construction of piers, the discharge of dredged or fill material into federally regulated waters, and activities associated with water discharges from construction sites or wastewater treatment systems.
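The structure counts above rest on electronically mapping address points against CBRS unit boundaries. As an illustration only (this is not GAO's actual GIS tooling, and the coordinates are hypothetical), a standard ray-casting point-in-polygon test shows the kind of spatial check such mapping relies on:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: a point is inside a polygon if a horizontal
    ray from the point crosses the boundary an odd number of times.
    polygon is a list of (x, y) vertex tuples."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical unit boundary (a simple square) and two address points
unit_boundary = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(point_in_polygon(2.0, 2.0, unit_boundary))  # True (inside)
print(point_in_polygon(5.0, 2.0, unit_boundary))  # False (outside)
```

In practice such checks run inside GIS software against surveyed boundary polygons; the sketch only illustrates the underlying geometric test.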
States provide health care coverage to low-income uninsured children largely through two federal-state programs—Medicaid and SCHIP. Since 1965, Medicaid has financed health care coverage for certain categories of low-income individuals—over half of whom are children. To expand health coverage for children, the Congress created SCHIP in 1997 for children living in families whose incomes exceed the eligibility limits for Medicaid. Although SCHIP is generally targeted at families with incomes at or below 200 percent of the federal poverty level, each state may set its own income eligibility limits within certain guidelines. As of February 2002, 16 states have created Medicaid expansion programs, 16 states have separate child health programs, and 19 states have combination Medicaid expansions and separate child health components. (See figure 1.) SCHIP offers significant flexibility in program design and benefits provided by allowing states to use existing Medicaid structures or create child health programs that are separate from Medicaid. Medicaid expansions must follow Medicaid eligibility and cost-sharing rules, under which cost-sharing is generally not allowed for children. A Medicaid expansion also creates an entitlement by requiring a state to continue providing services to eligible children even when its SCHIP allotment is exhausted. In contrast, a state that chooses a separate child health program approach may introduce limited cost-sharing. Additionally, a state with a separate child health program under SCHIP may limit its own annual contribution, create waiting lists, or stop enrollment once the funds it budgeted for SCHIP are exhausted. States choosing combination programs take both approaches. For example, Connecticut’s combination SCHIP program has a limited Medicaid expansion—increasing eligibility for 17 to 18 year olds up to 185 percent of the federal poverty level. 
Additionally, the state created a separate child health program, which covers all children in families with incomes over 185 percent, up to 300 percent of the federal poverty level. With regard to program benefits, the choices states make in designing SCHIP have important implications. For example, a state opting for a Medicaid expansion under SCHIP must provide the same benefits offered under its Medicaid program. These benefits are quite broad and include Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) services for most children. EPSDT services are designed to target health conditions and problems for which children are at risk, including iron deficiency, obesity, lead poisoning, and dental disease. These services are also intended to detect and correct conditions that can hinder a child’s learning and development, such as vision and hearing problems. In contrast, states opting for separate child health programs may depart from Medicaid requirements and provide benefits based on coverage standards in the SCHIP legislation. SCHIP separate child health programs generally cover basic benefits, such as physician services, inpatient and outpatient hospital services, and laboratory and radiological services. Other benefits, such as prescription drugs and hearing, mental health, dental, and vision services, may be provided at the states’ discretion. States also may place limits on services provided and require cost-sharing, while Medicaid generally does not permit cost-sharing for children. In addition to having flexibility in program design and benefits offered, states participating in SCHIP have a larger proportion of their program expenditures paid by the federal government than for Medicaid. A state’s Medicaid program expenditures are matched by the federal government using a formula that is based on a state’s per capita income in relationship to the national average. 
Federal matching rates for SCHIP are “enhanced”—they are established under a formula that takes 70 percent of a state’s Medicaid matching rate and adds 30 percentage points, with an overall federal share that may not exceed 85 percent. For 2001, federal shares of SCHIP expenditures ranged from 65 to 84 percent, with the national average federal share equaling about 72 percent. In contrast, 2001 federal shares for Medicaid ranged from 50 to 77 percent of expenditures, with the national average at about 57 percent. The SCHIP statute requires states to screen all SCHIP applicants for Medicaid eligibility and, if they are eligible, enroll them in Medicaid. BBRA included a mandate that the OIG conduct a study every 3 years, beginning in fiscal year 2000, to (1) determine the number, if any, of enrollees in SCHIP who are eligible for Medicaid and (2) assess states’ progress in reducing the number of uninsured low-income children, including progress in achieving the strategic objectives and performance goals in their SCHIP plans, which set forth how states intend to use their SCHIP funds to provide child health assistance. BBRA directed the OIG to review states with approved SCHIP programs that do not provide health benefits under Medicaid; consequently, the OIG focused on the 15 states that in 1999 operated separate child health programs under SCHIP. Of these 15 states, the OIG excluded 2 states— Washington and Wyoming—because the delayed start-up of their programs resulted in no enrollees in fiscal year 1999, the year that the OIG reviewed. From the remaining 13 states, the OIG used a two-stage sampling plan to select 5 states for review. The OIG first divided the 13 states into two strata, selecting Pennsylvania separately as stratum I because it had a large number of children—81,758—enrolled in its program in fiscal year 1999. Enrollment across the remaining 12 states ranged from 1,019 in Montana to 57,300 in North Carolina. 
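The enhanced-match formula described above is straightforward arithmetic. As a sketch (the function name and one-decimal rounding are ours, added for illustration), it can be computed from a state's regular Medicaid matching rate as follows:

```python
def schip_enhanced_fmap(medicaid_fmap: float) -> float:
    """SCHIP "enhanced" federal matching rate, in percent: 70 percent
    of the state's regular Medicaid matching rate plus 30 percentage
    points, with the federal share capped at 85 percent."""
    return round(min(0.70 * medicaid_fmap + 30.0, 85.0), 1)

print(schip_enhanced_fmap(50.0))  # 65.0, the low end cited in the report
print(schip_enhanced_fmap(77.0))  # 83.9, roughly the 84 percent high end
print(schip_enhanced_fmap(80.0))  # 85.0, where the statutory cap binds
```

The check against the report's figures is consistent: 2001 Medicaid shares of 50 to 77 percent map to SCHIP shares of 65 to about 84 percent.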
The OIG randomly selected 4 of the 12 states (North Carolina, Oregon, Utah, and Vermont) for inclusion in its study. (See table 1.) For the 5 sample states, the OIG reviewed a variety of documents the states submitted to HCFA, such as their SCHIP plans and SCHIP evaluation reports, which are states’ assessments of the effectiveness of their programs. OIG staff conducted site visits and met with officials responsible for administering SCHIP in all 5 states. The OIG also randomly selected 100 active SCHIP case files from each of the 5 states in order to evaluate whether Medicaid-eligible children were incorrectly enrolled in SCHIP. The OIG did not verify the accuracy and completeness of the state case files; rather, it focused on whether the information in each file supported the conclusion reached by the state. In determining whether Medicaid-eligible children were improperly enrolled in SCHIP, the OIG reported that, based on a sample of 5 states, SCHIP enrollees in the 13 states with separate child health programs were generally appropriately enrolled. However, because of variations in the administration of state programs, generalizing from the 5 states to the 13 states may not be appropriate. In addition, focusing on only those states with separate SCHIP programs does not capture the experience of the majority of states or the majority of SCHIP-enrolled children. Ensuring appropriate enrollment in SCHIP is important regardless of a state’s SCHIP design, because any child eligible for Medicaid who is incorrectly enrolled in SCHIP results in a state receiving a higher federal matching rate. For example, reviewing states that operate separate child health programs as part of a combination program would have increased the proportion of children under consideration from 16.5 percent to 65 percent of all SCHIP children enrolled in 1999, and thus provided more comprehensive information regarding states’ enrollment practices. 
To determine whether states were improperly enrolling Medicaid-eligible children in SCHIP, the OIG separated the 13 states with separate child health programs into two strata. The first stratum was the state of Pennsylvania, which the OIG intentionally selected because it had the most children enrolled in SCHIP among the 13 states. Four states were then randomly selected from the remaining 12 states. Among the 5 states it reviewed, the OIG identified only a few cases in which Medicaid-eligible children were inappropriately enrolled. For example, it reported that 1 state had a single case in which a Medicaid-eligible child was enrolled in SCHIP, while 2 other states had three and five such cases. The report also found that 2 states did not have any Medicaid-eligible children enrolled in SCHIP. The OIG concluded from these findings that most SCHIP enrollees were correctly enrolled in the 13 states administering separate child health programs. Variations in states’ enrollment practices, however, raise questions about the extent to which results from a sample of 5 states can be generalized to 13 states. Had the OIG drawn its random sample of active SCHIP cases across the 13 states in its sampling universe, it would have been better able to generalize its results. An OIG official told us that the office chose to analyze a sample of 5 states rather than all 13 states because of time and resource constraints. Recognizing that analyzing a pure random sample of cases across a large number of states may be too resource intensive, choosing a stratified sample of states may provide more information on the extent to which accurate enrollment may vary with different states’ practices. Even with a stratified sample, however, generalization to all states may be problematic. The OIG did select a stratified sample and chose one characteristic—size of a state’s SCHIP program—to develop two strata. 
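The two-stage design described above (a certainty stratum for the largest program, plus a simple random draw from the remainder) can be sketched as follows. Only the three enrollment figures cited in the report are real; the helper function is ours and does not reproduce the OIG's actual procedure:

```python
import random

# Fiscal year 1999 SCHIP enrollment for separate-child-health-program
# states; the full 13-state universe is truncated to the figures the
# report cites.
enrollment = {
    "Pennsylvania": 81758,   # stratum I, selected with certainty
    "North Carolina": 57300,
    "Montana": 1019,
}

def two_stage_sample(universe, n_random, seed=None):
    """Select the largest program with certainty (stratum I), then draw
    a simple random sample of n_random states from the rest."""
    certainty = max(universe, key=universe.get)
    rest = [s for s in universe if s != certainty]
    rng = random.Random(seed)
    return [certainty] + rng.sample(rest, n_random)

sample = two_stage_sample(enrollment, n_random=2, seed=42)
print(sample[0])  # Pennsylvania is always selected
```

With the full 13-state universe, n_random=4 reproduces the 5-state sample size the OIG used; a stratification on program characteristics, as discussed above, would replace the single random draw with one draw per stratum.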
While dividing states in terms of size is potentially useful, additional distinctions may be important because program characteristics vary considerably from state to state. For example, states with differing administrative structures (New York uses health plans to determine eligibility and enroll eligible individuals, Colorado uses an enrollment contractor, and Oregon uses its Medicaid staff to determine SCHIP eligibility) could be grouped by certain characteristics for review. This could help determine whether such differences in administrative structures have a bearing on appropriate enrollment in SCHIP. To examine whether the OIG’s sampling approach reflected variations in states’ administrative structures, we categorized the 12 states in the second stratum based on whether they had the same program staff determine eligibility for both the SCHIP and Medicaid programs, which can help achieve consistency in eligibility decisions. We found that the random sample of 4 states did not include any states where different employees were responsible for determining SCHIP and Medicaid eligibility, thus raising concerns as to whether conclusions could be generalized. (See table 2.) Because the scope of the study was limited to the 13 states with separate child health programs, the OIG examined 322,534, or 16.5 percent, of the approximately 2 million children enrolled in SCHIP in fiscal year 1999. A review that also included separate SCHIP programs in states that opted for a combination approach under SCHIP would have expanded the available universe to 26 states and to 65 percent of all SCHIP children enrolled in 1999. Moreover, using the OIG’s general audit authority, the scope of future reviews could include states with SCHIP Medicaid expansions, which would provide the Congress with more complete information on the extent to which states are enrolling low-income children in the appropriate programs. 
If this approach had been used in 1999, 23 states and almost one-fourth of all children enrolled in SCHIP would have been added. (See table 3.) The OIG identified important limitations to states’ evaluations that made it unable to conclude whether states were making progress in reducing the number of uninsured children and in meeting the objectives and goals that they established under SCHIP. For example, the OIG found that states made inappropriate assumptions in reporting data about the relationship of SCHIP enrollment to the rates of uninsured, which undermined the credibility of states’ results, and that states often had poor baseline data against which to measure progress. The OIG also found that states set goals without considering how to evaluate progress, and that little emphasis was placed on evaluation by the states. As a result, the OIG made recommendations to both HCFA and HRSA on ways that the federal government could assist and guide states in making improvements in their analyses. While the initial OIG reviews were inconclusive due to weaknesses in states’ evaluations, future efforts may benefit from federal initiatives under way aimed at improving state-level data and analyses of SCHIP. These initiatives, however, may not have been in place long enough to benefit the OIG’s next review, since results are due in 2003. As a result, the OIG may wish to select a different approach—such as identifying states with more rigorous practices in evaluation, or augmenting its review with other sources beyond those provided by the states. The OIG identified limitations to the 5 states’ SCHIP evaluations and thus was unable to draw conclusions about states’ progress in reducing the number of uninsured children or meeting their stated objectives and goals. For example, the OIG cited concerns regarding the reliability of states’ reports of reductions in the number of uninsured, including inadequate data and evaluation practices. 
In cases in which states were unable to measure objectives that were established at the beginning of their SCHIP programs, their evaluations generally provided descriptive information on activities but did not assess the effect that such activities had on achieving specific goals. (See table 4.) For example, the OIG reported that none of the 5 states it reviewed attempted evaluations of their outreach programs or offered explanations of how such programs affected their measurable progress in enrollment or the number of uninsured children. Of particular concern were limitations in measuring how well states are meeting the primary objective of the SCHIP program—reducing the number of uninsured. As noted by the OIG, states—and other researchers—have been hampered by limited reliable state-level data regarding children’s insurance status. When SCHIP was enacted, estimates of the number of low-income uninsured children were derived from the annual health insurance supplement to the Current Population Survey (CPS), the only nationwide source of information on uninsured children by state. CPS is based on a nationally representative sample and is considered adequate to produce national estimates. However, CPS data have well-recognized shortcomings, particularly with regard to state-level estimates, which can be unreliable and exhibit volatility from year to year because of small samples of uninsured low-income children, particularly in states with smaller populations. For example, using the 1994 through 1996 CPS data, estimates of the number of uninsured children in Delaware ranged from 12,000 to 32,000. In part because of these data limitations, some states—including 3 of the states sampled by the OIG—moved to special surveys or studies that were conducted locally in an effort to develop more precise estimates of the number of uninsured children. 
Despite efforts by states to better estimate the number of uninsured children, the OIG cited concerns regarding states’ analyses. For example, the OIG reported that some states estimated reductions in the number of uninsured children by subtracting the number of SCHIP enrollees from their original baseline estimates. However, such an approach does not ensure that increases in SCHIP lead to reductions in the number of uninsured because increases in SCHIP enrollment can result from children moving from private insurance coverage to public insurance under SCHIP, an effect known as “crowd-out.” Additionally, changing economic factors can further complicate assessments of a state’s progress in reducing the number of uninsured children. For example, a state may significantly increase enrollment in SCHIP but—because of declines in the economy and increased unemployment—continue to see an increase in the number of uninsured. Under these circumstances, “progress” in reducing the number of uninsured may be more difficult to identify. Based on its findings, the OIG recommended that HCFA identify a core set of evaluation measures that will enable all SCHIP states to provide useful information. It further recommended that HCFA and HRSA provide guidance and assistance to states in conducting useful evaluations of their programs. The OIG noted that SCHIP staffs would benefit from assistance and training regarding the type of data to collect and how to conduct evaluations. HCFA concurred with these recommendations and cited efforts under way to improve states’ evaluations of their SCHIP programs. Several federal efforts are under way that should help improve states’ data sources and their evaluations of the extent to which their SCHIP programs are reducing the number of uninsured children. If implemented on a timely basis, efforts such as the following should help inform the OIG’s subsequent evaluations. 
The Congress appropriated $10 million each year beginning in fiscal year 2000 to increase the sample size of CPS. Beginning in 2001, larger sample sizes are being phased into CPS, which should help improve the accuracy of state-level CPS estimates of uninsured children. CMS is working with states to develop consistent performance measures for SCHIP, with a focus on ensuring appropriate methodology and consistency of data. As a condition of their state SCHIP plans, some states are required to assess whether the SCHIP program is “crowding out” private health insurance in their states. These studies could help assess the extent to which SCHIP is drawing its enrollment from uninsured children—or from children who were previously insured. BBRA requires HHS to conduct an evaluation of SCHIP to determine the effectiveness of the program and to provide information to guide future federal and state policy. To comply with BBRA, HHS plans a series of reports addressing a variety of major topic areas, ranging from program design to access and utilization; the first report is expected in spring 2002. HHS plans to use multiple research strategies, including case studies, surveys, and focus groups, to address questions of interest. As the OIG continues to analyze states’ progress in SCHIP, its future reviews are likely to benefit from improvements in state-level estimates of the number of uninsured children and evaluations of program implementation. Moreover, improvements in states’ analyses and available data should help the OIG identify and address areas in need of additional review. However, to the extent that these improvements are not in place by the time the OIG undertakes its second analysis due in 2003, it may benefit from expanding its scope of work to identify and assess states with more rigorous analyses. The OIG may also wish to review other sources that have assisted states in making evaluation improvements. 
For example, while some states have received private grant funds to help with SCHIP enrollment, they have also received technical assistance for the purpose of conducting evaluations on the success of their enrollment strategies. Other states have paired with universities or research organizations to improve their information on the uninsured. By also drawing on the experience of states with strong evaluations or data sources, the OIG will be better able to identify approaches that could further strengthen federal and states’ approaches and inform the Congress on progress in implementing SCHIP. Through its periodic evaluations of states’ efforts to ensure appropriate SCHIP enrollment and to reduce the number of uninsured children, the OIG is in a position to provide objective information to the Congress and others about the program’s operation and success. To better capture the experience of all states, regardless of the design of their SCHIP programs, the OIG should expand its scope beyond the 13 states in its first review to also include states that operate separate child health programs within SCHIP combination programs and consider including Medicaid expansion programs as well. This would provide a broader base for understanding how well states are screening for Medicaid eligibility and identifying issues related to reducing the number of uninsured children. Such an expansion of scope may also help identify states with more rigorous evaluations of their SCHIP programs, and thus provide information on effective approaches to SCHIP evaluation as well as more complete information for the Congress. 
In order to better inform the Congress on states’ efforts to implement SCHIP, we recommend that the HHS inspector general expand the scope of the statutorily required periodic reviews to include all states with separate child health programs, including those with combination programs, and consider using its general audit authority to explore whether issues of appropriate SCHIP enrollment also exist among states that have opted for Medicaid expansions under SCHIP, and should therefore be included in future OIG reviews. We provided the inspector general of HHS an opportunity to comment on a draft of this report. In its comments, the OIG concurred with our recommendations, and agreed that expanding the scope of its inspections to include combination programs that include separate child health programs would give a greater breadth of information. It also agreed that including SCHIP Medicaid expansions would broaden the perspective and present more conclusive information regarding the status of states’ SCHIP programs. The OIG also provided general comments regarding its approach and possible approaches to designing future reviews. For example, the OIG stated that it would consider including differing state processes as a factor in its next sample design. The OIG also noted the importance of focusing on states’ measurement of their own program performance. We agree with the OIG that properly conducted state evaluations serve a vital function and we believe that continued review of these efforts by the OIG is an important contribution to better understanding states’ progress under SCHIP. In response to the OIG’s oral and written comments, we revised the report to better clarify the scope of the BBRA mandate. The full text of the OIG’s written comments is reprinted in appendix I. We are sending copies of this report to the inspector general of the Department of Health and Human Services and other interested parties. We will also make copies available to others on request. 
If you or your staffs have questions about this report, please contact me at (202) 512-7118 or Carolyn Yocom at (202) 512-4931. JoAnn Martinez-Shriver and Behn Miller also made contributions to this report. Medicaid and SCHIP: States’ Enrollment and Payment Policies Can Affect Children’s Access to Care. GAO-01-883. Washington, D.C.: Sept. 10, 2001. Children’s Health Insurance: SCHIP Enrollment and Expenditure Information. GAO-01-993R. Washington, D.C.: July 25, 2001. Medicaid and SCHIP: Comparisons of Outreach, Enrollment Practices, and Benefits. GAO/HEHS-00-86. Washington, D.C.: April 14, 2000. Children’s Health Insurance Program: State Implementation Approaches are Evolving. GAO/HEHS-99-65. Washington, D.C.: May 14, 1999.
Congress created the State Children's Health Insurance Program (SCHIP) in 1997 to reduce the number of uninsured children in families with incomes that are too high to qualify for Medicaid. Financed jointly by the states and the federal government, SCHIP encourages state participation by offering a higher federal matching rate than the Medicaid program. Concerns have been raised that states might inappropriately enroll Medicaid-eligible children in SCHIP and thus obtain higher federal matching funds than allowed under Medicaid. The Department of Health and Human Services Office of Inspector General (OIG) concluded that Medicaid-eligible children were not being enrolled in SCHIP by the 13 states that administer separate child health care programs. Furthermore, the issue of appropriate enrollment is not limited to states with completely separate child health programs but also applies to those states with combination programs and Medicaid expansions, which also receive the higher SCHIP matching rate. The OIG could not conclude whether states were reducing the number of uninsured children and meeting the objectives and goals they established in their SCHIP programs. The OIG found that some states had set program goals without considering how they might be measured and that states' staffs often lacked adequate evaluation skills.
In 1975, the Congress created the federal child support enforcement program as title IV-D of the Social Security Act. The program’s original purpose was to strengthen state and local efforts for obtaining child support for families receiving AFDC and for any non-AFDC individuals who apply for child support services. The program provides a broad range of services, including basic services such as locating the noncustodial parent or parents, establishing paternity through genetic tests or other means, establishing support orders obligating noncustodial parents to pay specific amounts, and collecting the payment of support owed. While the ultimate goal under the program is to collect child support for each case, any one case may need a different combination of basic services before collections can begin. To illustrate these different combinations of needed services, figure 1 gives a breakdown of the services Virginia’s child support cases needed when time limits began. For example, at the beginning of the 2-year time limit, 4 percent of the custodial parents already had a support order in place but the noncustodial parents needed to be located and their support orders needed to be enforced through wage-withholding or other means. Appendix II describes the basic child support enforcement services. Child support enforcement is a joint federal and state responsibility. Within the federal government, OCSE is responsible for providing leadership, technical assistance, and standards for effective state programs. States or local offices under state supervision deliver child support services to families. The federal government and the states share administrative costs at the rate of 66 percent and 34 percent, respectively. In fiscal year 1996, administrative costs for the program were $3 billion and collections totaled $12 billion. 
About 13 percent of the 7.4 million AFDC child support cases and 28 percent of 9.3 million non-AFDC child support cases nationwide received at least one support payment in 1996. The new welfare reform law significantly changed our nation’s welfare policy. Since 1935, AFDC had entitled single-parent families to receive monthly cash assistance generally for as long as they met income and other eligibility criteria and had children under age 18. TANF represents a significant departure from this approach, placing a 5-year limit on federal aid designed to ensure that assistance is temporary for most recipients. Before the welfare reform law passed, however, 14 states were granted waivers under section 1115 of the Social Security Act allowing them to experiment with assistance time limits ranging from 18 months to 5 years. Although state policies regarding exemptions and extensions varied, these state waivers were the first efforts to make assistance temporary based solely on a specified period of time. Table 1 summarizes some provisions of the time-limited programs adopted in the three states we reviewed. The Congress also wanted to encourage parental responsibility by requiring OCSE and the states to strengthen the existing child support enforcement program by adopting new enforcement tools. These tools include federal and state registries of child support orders, federal and state directories of new employee hires, and quarterly state wage reporting to help locate noncustodial parents and enforce support orders. As all states move to implement these tools, many states are also working to complete the statewide automated systems required by federal law. From fiscal year 1981 through fiscal year 1997, the states spent about $3.2 billion in federal and state funds to develop these systems to manage their caseloads. However, as of March 30, 1998, only 25 of the 54 child support enforcement systems had been certified by HHS.
In addition, the welfare reform legislation established new custodial parent cooperation requirements and penalties to strengthen existing requirements and to simplify the paternity establishment process. The law also included additional tools such as the matching of data with states’ financial institutions and the revocation of noncustodial parents’ driver’s, professional and occupational, and recreational licenses to enforce the payment of child support. Finally, the Congress required that HHS in consultation with the states develop and report to the Congress on a new incentive payment system to encourage states to operate effective programs. In the first three states to enforce time limits, most families who reached their 21-month to 3-year time limits did not have any child support collected for them during the 12 months before their welfare termination. Moreover, in about one-half to two-thirds of these families’ child support cases, child support was not due at termination because a support obligation had not yet been established. Only about 20 to 30 percent of families reaching their time limits had any child support collected for them in the 12 months before termination. Figure 2 shows the status of welfare families’ child support cases during the 12-month period before termination. The median annual amounts due for all cases with a current support order in Connecticut, Florida, and Virginia were $3,054, $2,134, and $2,067, respectively. However, as shown in table 2, the amount collected for families rarely equaled the full amount due. On average, the amount collected ranged from 43 percent to 52 percent of the amount due. For families with child support collected, median child support collections ranged from $581 to $1,348 and mean collections ranged from $1,065 to $1,388 for the 12-month period. 
Upon reaching termination, many families in the three states still required one or more basic child support services, including locating the noncustodial parent and establishing paternity and support orders, before collections and enforcement could begin. From 56 to 81 percent of all child support cases reviewed still needed at least one of these basic services when welfare benefits expired. One-half to about two-thirds of these cases had been open in the child support system for 5 years or longer. Locating a noncustodial parent is an essential precondition of all child support collections, for without location, paternity and support orders cannot be established, orders cannot be enforced, and collections cannot be made. In addition, locating a noncustodial parent may be necessary even though paternity has previously been established and a support order is in place. As figure 3 shows, from 56 to 81 percent of the noncustodial parents who needed to be located at the start of the time limit were not located by the time welfare benefits were terminated. State officials said locating noncustodial parents for families with older child support cases was particularly problematic. Information initially provided by custodial parents when they applied for welfare is no longer current, and new information is rarely forthcoming. Some noncustodial parents, like the custodial parents, are less educated, have fewer skills, and are less likely to be regularly employed. Therefore, they are harder to locate through employment sources. State officials also said noncustodial parents are often mobile, work “off the books,” and some may quit their jobs as soon as they are located through their employers. Another basic child support service that may still be needed is establishing paternity. 
Of the large proportion of the cases we reviewed needing paternity established at the start of the time limit, the vast majority still did not have paternity established by the time welfare benefits were terminated. From 71 percent to 79 percent of the child support cases that needed to have paternity established did not have paternity established by the time welfare benefits ended, as shown in figure 4. Most of the cases reviewed needed support orders established at the start of the time limit, and the vast majority of them still did not have orders established by the time welfare benefits were terminated. From 75 to 79 percent of the child support cases needing orders remained without orders at termination, as illustrated by figure 5. The outcomes in two states with strong child support performance demonstrate how states may achieve better results establishing paternity, obtaining support orders, and collecting child support. In these states, we reviewed new AFDC child support cases that were first opened in 1992 and remained open for 5 years—the maximum length of time a family may receive federal TANF aid. About two-thirds of the cases that remained open for 5 years received some child support in the last 12 months of the period. Yet, despite the greater success rate of these states, about one-third of their child support clients would have reached the end of a 5-year time limit without any child support, as shown in figure 6. About two-thirds of the cases that remained open for 5 years had some child support collected for them in the last 12 months. However, the majority of cases did not receive the full amount due, as shown in table 3. The median annual amounts due for all cases with a current support order in Minnesota and Washington were $2,351 and $2,358, respectively. The amount collected per case averaged 69 to 83 percent of the full amount due, higher than the cases reviewed in the first states to enforce time limits. 
For cases with collections, the median child support collections ranged from $1,875 to $2,118 and mean collections ranged from $2,211 to $2,316 in the last 12 months. States achieved high rates of paternity and support order establishment during the 5-year period. More than 80 percent of cases needing paternity or support order establishment or both received these basic services during the 5-year period, as figures 7 and 8 show. However, about one-third of all the cases that remained open for 5 years had no child support collected for them during the last 12 months of the period. Time limits are being implemented on the two different sets of families represented in our study: those already receiving aid before time limits began and those whose assistance will begin under time limits. Many of the families already receiving aid who did not have collections before the time limits began may be unlikely to obtain child support before welfare benefits expire unless states can improve their performance in locating noncustodial parents. In addition, families who begin receiving aid under time limits are much more likely to receive some support before their benefits expire if states aggressively pursue their cases. A state’s success in obtaining child support can provide an important supplement to a family’s earnings. If states expect to obtain child support for families before their time-limited welfare benefits expire, the states will need to improve their performance and ensure that they effectively implement the new tools provided by the Congress. Our findings from the three states that implemented time limits under waivers indicate that many welfare families that did not get child support before the time limits began may be unlikely to obtain it before their welfare benefits expire unless states can locate noncustodial parents. 
Failure to locate the noncustodial parent was the primary reason efforts did not succeed in the three states that had implemented time-limited aid. One-half to three-quarters of the cases that needed a support order or paternity established could not get these services because the state could not or did not locate the noncustodial parent. Fifty-nine to 90 percent of the cases not receiving child support in these states were cases that had been open for more than 5 years. Although state officials told us that locating noncustodial parents for families who have been receiving welfare for several years is particularly difficult, opportunities may exist for states to renew or enhance their efforts to pursue child support for these cases. The implementation of time-limited assistance may motivate custodial parents to provide more current noncustodial parent information as their benefits expire. In addition, the new federal law requires states to reduce a family’s grant amount by at least 25 percent for failure to cooperate with child support enforcement and gives states the option to deny assistance to the entire family. OCSE officials suggested that states may need to employ new strategies to work with families who have been receiving welfare for several years. Custodial parents in these cases may have to be reinterviewed to obtain current noncustodial parent information and to be educated on the benefits of obtaining child support in a time-limited welfare environment. Coupled with the new enforcement tools available to states, this new information could lead to better location and collection outcomes. States will also need to aggressively pursue new cases opened under time limits to help ensure successful outcomes. Our analysis of cases that first opened in 1992 and remained open for 5 years in two states showed that more than 70 percent of the paternities and support orders ever established on these cases were obtained within the first 2 years after opening. 
In addition, state officials told us that noncustodial parent information has to be pursued early and aggressively to achieve successful outcomes. Once a family reaches the end of its cash assistance, child support can be an important supplement to its income. Many families who leave welfare are employed in low-wage jobs. For example, in our recent report on TANF implementation in seven states, we found three states tracking wages for welfare recipients, with mean wages for welfare recipients placed in jobs ranging from $5.60 to $6.60 per hour. At these wage levels, many working families’ earnings are near or below the federal poverty level. Although these families’ incomes may be increased through the earned income tax credit and they may benefit from receiving other aid, including food stamps and medical assistance, they also may incur significant work-related expenses. For such families, child support payments could further enhance family incomes or reduce their need for public assistance. A study released in 1993 focusing on divorced women in Wisconsin found that receiving even minor amounts of child support can play a significant role in keeping families self-sufficient. However, for those families without earnings, child support is unlikely to replace families’ lost cash assistance. For example, in the first three states to enforce time limits, the mean monthly child support collected ranged from 22 percent to 60 percent of the mean grant received in the month before termination. If states expect to ensure that families receive child support before their time-limited welfare benefits expire, they will need to get their statewide automated systems operational and certified, ensure that the new tools are effectively implemented, and improve their performance. State officials from all five states reviewed believed that the new tools mandated under welfare reform will help improve performance in their states. 
For example, they cited the national new-hire and support order registries as tools that offer significant potential improvement for their interstate caseloads, which constitute about one-third of their total caseloads. In addition, they expected the new custodial parent cooperation requirements to result in more accurate and more complete noncustodial parent information at welfare intake, which should help them locate noncustodial parents. However, officials in Connecticut, Florida, and Virginia also said they were challenged by rising caseloads and resource limitations. A Virginia official told us, for example, that despite rising caseloads, the child support agency had not been authorized to hire any child support staff in the last 4 years, and thus is currently 17 percent under authorized strength. To improve performance and take full advantage of the potentially powerful new tools provided by welfare reform, states will have to either increase their productivity or commit additional resources to ensure that the tools are effectively implemented. In commenting on a draft of our report, HHS agreed with several of the major implications cited, including the importance of improving the location of noncustodial parents and aggressively pursuing new cases and that child support can be an important supplement to postassistance family income. However, HHS questioned whether the past experience of a limited number of states should be used to predict the future performance of all states, especially with the new tools available to the states under welfare reform. We agree that our results should not be used to project future collection rates nationwide. However, we believe our work highlights the challenges states and families face in the new time-limited welfare environment. We also agree that new tools, such as the national new-hire and support order registries, hold the promise of improving states’ child support performance. 
HHS also noted that the potential for child support collections is often limited by the lack of job skills and low educational attainment of the fathers associated with welfare and former welfare families. We discuss in our report how these factors make some noncustodial parents less likely to be regularly employed and more difficult to locate, and we cite studies suggesting that lack of income may be a barrier to collecting child support. (HHS’ comments are in app. III.) We also provided copies of a draft of this report to the five states covered in our review, which provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and its Subcommittee on Social Security and Family Policy; the Secretary of HHS; and HHS’ Assistant Secretary for Children and Families. We also will make copies available to others on request. If you or your staff have any questions about this report, please contact Gale C. Harris, Assistant Director, at (202) 512-7235 or Kevin M. Kumanga, Senior Evaluator, at (202) 512-4962. Other major contributors to this report are Patricia Elston, Anndrea Ewertsen, and Christopher Morehouse. Time-limited welfare is being imposed on two distinctly different sets of welfare child support cases: existing cases and new cases. Additionally, time limits on TANF aid vary in length, from a maximum of 5 years, as specified in federal law, to shorter periods as determined by the states. To assess the outcomes for these different sets of cases, we used two different units of analysis. For the existing cases in the states that implemented shorter time limits under waivers, we examined the child support outcomes for families whose aid was terminated at the end of their time limits. To do this, we reviewed all the child support cases associated with each family.
To determine the extent to which states with relatively good performance in child support enforcement obtained child support for new welfare child support cases over a 5-year benefit period, we sampled and reviewed individual child support cases as the indicator of state agencies’ likely performance. To track child support outcomes for welfare recipients whose welfare benefits have been terminated, we selected the three states in which families first faced benefit termination under state waivers approved before federal welfare reform. These states were Connecticut, Florida, and Virginia (see table 1 for summary of time limits adopted in these states). We then identified the child support cases associated with the families whose welfare benefits expired. Table I.1 shows the number of welfare cases and their child support cases analyzed for this report. In Connecticut and Florida, we drew a random sample of welfare cases; in Virginia, we reviewed the child support cases of all families whose welfare benefits had been terminated. In each state, we gathered case data for analysis by reviewing automated case files. We identified the child support services each case needed from the date the time limits began and tracked the child support outcomes that had been achieved for these cases when time limits expired. In most cases, these child support cases had been open for many years before the time limits were imposed. Specifically, we determined whether the noncustodial parent needed to be located and whether the case needed paternity or support order establishment at the date the time limits went into effect. For all cases with orders, whether they were established before or after the time limits were imposed, we also tracked the amount of current child support due and collected during the 12 months before time limits expired to determine the likelihood that families would have child support after their welfare benefits end. 
Our unit of analysis was the welfare family: that is, we identified all child support cases associated with the same welfare case and combined the total amounts due and collected on their behalf. Additionally, we annualized the financial outcomes for cases that had less than a full year of current support due during the period reviewed. Because most states have adopted 5-year time limits, we selected two states in which we tracked child support outcomes for 5 years. To select these states, we developed an index of child support performance using preliminary 1996 data. We assigned scores for performance in specific areas, with additional weights for performance on AFDC cases. We focused on states with strong performance in these areas, with the rationale that outcome data from these states could show how high-performing states have been able to meet the child support needs of their clients in a 5-year period. We ranked the states according to these scores and selected Minnesota and Washington from the 10 states with the highest scores. In both states, we gathered case data for analysis by reviewing automated case files. To track child support outcomes for 5 years, we identified new welfare cases that opened in June 1992 and the child support cases associated with each welfare family. Each child support case represents one noncustodial parent. We randomly selected our cases from this universe of child support cases. We collected data for more than 200 cases in each state and focused our analysis on the cases that remained open child support cases for the entire 5-year period—63 cases in Minnesota and 54 in Washington. We tracked outcomes regardless of whether the child support cases had changed from an AFDC to a non-AFDC status. Less than one-quarter of these cases remained AFDC child support cases continuously for the entire 5 years. 
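The state-ranking approach described above (scores on specific performance areas, with additional weight given to performance on AFDC cases) can be sketched as a weighted composite index. This is only an illustration: the metric names, values, and weights below are hypothetical and are not GAO's actual 1996 scoring.

```python
# Illustrative sketch of a weighted performance index for ranking states,
# like the one described above. Metric names, values, and weights are
# hypothetical, not GAO's actual scoring.

def composite_score(metrics, weights):
    """Weighted sum of performance metrics for one state."""
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical performance rates for two states.
states = {
    "State A": {"collection_rate": 0.55, "afdc_collection_rate": 0.20},
    "State B": {"collection_rate": 0.40, "afdc_collection_rate": 0.35},
}
# Extra weight on AFDC-case performance, as the report's index did.
weights = {"collection_rate": 1.0, "afdc_collection_rate": 2.0}

# Rank states from highest to lowest composite score.
ranked = sorted(states, key=lambda s: composite_score(states[s], weights),
                reverse=True)
print(ranked)  # the AFDC weight pulls State B ahead of State A
```

Under such a scheme, the choice of weights drives the ranking: here the doubled AFDC weight lets State B's stronger AFDC performance outweigh State A's higher overall collection rate.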
We identified the child support services each case needed from the date the case opened and tracked the child support outcomes that had been achieved for these cases through May 1997, after 5 years had elapsed. Specifically, we determined whether the case needed paternity or support order establishment. For cases with orders, we also tracked the amount of current child support due and collected during the last 12 months of the 5-year period. We tracked collections by child support case only. Table I.2 shows the number of child support cases identified, sampled, and analyzed for this report. As stated earlier, a large proportion of the child support cases closed before the end of the 5-year period and therefore were not included in our analysis. About half of the cases were closed by the midpoint of the 5-year period. Figures I.1 and I.2 show the numbers and percentages of cases that closed each year during the period we reviewed. (In both figures, the 1992 counts represent case closures from June through December 1992, and the 1997 counts represent case closures from January through May 1997.) The most frequently noted reasons for closing a child support case were the custodial parent moving out of the state or country, the custodial and noncustodial parents’ reuniting or the noncustodial parent being added to the welfare grant, the case being closed because the custodial parent refused to cooperate, or the state’s failing to locate the noncustodial parent. Collectively, these reasons accounted for about 48 and 40 percent, respectively, of the case closures in Minnesota and Washington. To arrive at the estimates presented in this report, in every state but Virginia (where we selected all families whose aid was terminated), we sampled from either a population of those who were terminated from AFDC or a pool of those who joined the AFDC rolls in June 1992.
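Because these figures come from samples rather than the full caseload, each reported percentage carries sampling uncertainty. As a minimal sketch of how a sample proportion translates into a 95 percent confidence interval, the following uses a hypothetical proportion and sample size (not figures from this report):

```python
import math

def ci_95_for_proportion(p_hat, n):
    """Illustrative 95 percent confidence interval for a sample proportion.

    p_hat: sample proportion (e.g., 0.30 for 30 percent)
    n: sample size
    Returns (margin_of_error, lower_bound, upper_bound).
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    margin = 1.96 * se                       # half-width at 95 percent confidence
    return margin, max(0.0, p_hat - margin), min(1.0, p_hat + margin)

# Hypothetical example: 30 percent of a sample of 200 cases received support.
margin, lo, hi = ci_95_for_proportion(0.30, 200)
print(f"estimate 30% +/- {margin:.1%} -> [{lo:.1%}, {hi:.1%}]")
```

This mirrors the report's convention of quoting a half-width "at the 95-percent confidence level": the true population percentage is expected to fall within the estimate plus or minus that margin in about 95 of 100 such samples.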
Because we analyzed samples of cases to estimate characteristics of the entire population of such cases, our estimates of percentages and dollar amounts have standard errors associated with them. A standard error is the variation that occurs by chance because a sample, rather than the entire population, was analyzed. The size of the standard error reflects the precision of the estimate: the smaller the standard error, the more precise the estimate. Following is a description of the sizes of the standard errors for the estimates presented in this report. The standard error for the estimated mean support amount due and the estimated mean support amount collected varied by state, but for no state was it greater than $615 for the mean amount due and $759 for the mean amount collected. The standard error for the estimated percentage of noncustodial parents for whom either location, paternity establishment, or a support order was needed varied by state, but in no state did it exceed 13 percentage points. The standard error for the percentage of AFDC families that received no support money varied by state but was in no state greater than 12.3 percentage points. The standard error for the percentage of the support order amounts due and received for families varied by state, but in no state did it exceed 13.7 percentage points. All of the standard errors were calculated at the 95-percent confidence level. This means that the chances are about 95 out of 100 that the range defined by the estimate, plus or minus the standard error, contains the true percentage or dollar figure we would have found if we had analyzed data from the entire population. Location includes efforts at local, state, and federal levels to identify a noncustodial parent’s address, Social Security number, place of employment, and other characteristics.
It might include efforts to directly contact individuals; contacts with public and private institutions, such as credit bureaus and state and federal income tax agencies; and the use of computer tape matches with state and federal databases. Paternity establishment is the identification of the legal father of a child, usually through the courts or expedited through hearings in a quasi-judicial or administrative body. Paternities are established in either of two ways: (1) through voluntary acknowledgment by the father or, (2) if contested, through a determination made on the basis of scientific and testimonial evidence. Support order establishment involves the development of a support order that legally obliges the noncustodial parent to pay child support and provide medical insurance coverage when it is available at reasonable cost. The child support enforcement agency must help custodial parents initiate an action in court or through an administrative or expedited legal process that will produce such an order. The child support enforcement agency helps determine a child’s financial needs and the extent to which the noncustodial parent can provide financial support and medical insurance coverage. Support orders are subject to periodic review and adjustment at least every 3 years in welfare cases and upon parental request in nonwelfare cases. Collections and enforcement involve enforcing, monitoring, and processing payments. To enforce payment on delinquent cases or to ensure regularity and completeness of current accounts, child support enforcement agencies have a wide array of techniques at their disposal. These techniques include bonds and security deposits, federal and state tax intercepts, garnishments, liens, and wage withholding, among others. Noncustodial parents’ payments must also be monitored, recorded, and distributed.
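The standard-error arithmetic described earlier (an estimate plus or minus its standard error defining a 95-percent range) can be illustrated with a minimal sketch. The dollar amounts below are hypothetical, not data from this report, and the 1.96 scaling factor is the conventional multiplier for a 95-percent interval; the report appears to present standard errors already scaled to the 95-percent level.

```python
import math

def mean_se_interval(values, z=1.96):
    """Sample mean, standard error of the mean, and an approximate
    95-percent interval (mean plus or minus z standard errors)."""
    n = len(values)
    mean = sum(values) / n
    # Sample variance with n - 1 in the denominator (Bessel's correction)
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    se = math.sqrt(variance / n)
    return mean, se, (mean - z * se, mean + z * se)

# Hypothetical annual support collections for a sample of five cases
amounts = [1200, 1800, 950, 2100, 1500]
mean, se, (low, high) = mean_se_interval(amounts)
```

The smaller the standard error relative to the mean, the narrower the interval, which is the sense in which a smaller standard error means a more precise estimate.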
Pursuant to a congressional request, GAO provided information on how successful states are likely to be in obtaining child support for families whose benefits are subject to time limits, focusing on: (1) how successful states that experimented with time-limited benefits before welfare reform have been in obtaining child support for families who reach their limits; (2) how successful states have been in obtaining child support for families within a 5-year period, the maximum time a family may receive Temporary Assistance for Needy Families (TANF) benefits; and (3) the implications time limits have for states and families. GAO noted that: (1) many TANF families may not be able to count on child support as a steady source of income when their time-limited welfare benefits expire; (2) in the first three states to enforce welfare benefit time limits--Connecticut, Florida, and Virginia--only about 20 to 30 percent of families had any child support collected for them in the 12 months before their welfare benefits were terminated; (3) about one-half or more of the child support cases without collections lacked a child support order legally obligating a noncustodial parent to pay child support at the time the families' assistance was terminated, despite having a long history in the child support program before time limits were implemented; (4) for families whose child support was secured, the median collections among the three states ranged from a total of $581 to $1,348 for the 12-month period; (5) in two high-performing child support states, Minnesota and Washington, GAO observed better outcomes for a sample of Aid to Families with Dependent Children child support cases that first opened in 1992 and remained open for 5 years; (6) about two-thirds of the families received some child support in the last 12 months of that period; (7) support order establishment rates were higher for these cases as well: in both states, orders were established within 5 years for more than 80 
percent of the cases that needed them; (8) the median amounts of child support collected for these families ranged from $1,875 to $2,118 for the 12-month period; (9) despite these outcomes, about one-third of the child support clients in these states reached the end of the 5-year period without any child support; (10) to better ensure that child support is available for families in a time-limited welfare system, states will need to improve their child support performance for families already in the welfare system and for those who enter it for the first time; (11) in the three states GAO studied that had imposed time limits on families already receiving aid, from one-half to three-quarters of the families could not get child support because the state did not or could not locate the noncustodial parent; (12) it is also important for states to move quickly to pursue child support for families that have just begun receiving aid; (13) state officials told GAO that information on noncustodial parents is best pursued early and aggressively to achieve successful outcomes; and (14) GAO's analysis showed that successful outcomes are most likely within 2 years after a family begins receiving child support services.
As a result of 150 years of changes in financial regulation in the United States, the regulatory system has become complex and fragmented. Today, responsibilities for overseeing the financial services industry are shared among almost a dozen federal banking, securities, futures, and other regulatory agencies, numerous self-regulatory organizations, and hundreds of state financial regulatory agencies. In particular, five federal agencies—including the Federal Deposit Insurance Corporation, the Federal Reserve, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, and the National Credit Union Administration—and multiple state agencies oversee depository institutions. Securities activities are overseen by the Securities and Exchange Commission and state government entities, as well as by private sector organizations performing self-regulatory functions. Futures trading is overseen by the Commodity Futures Trading Commission and also by industry self-regulatory organizations. Insurance activities are primarily regulated at the state level with little federal involvement. Other federal regulators also play important roles in the financial regulatory system, such as the Public Company Accounting Oversight Board, which oversees the activities of public accounting firms, and the Federal Trade Commission, which acts as the primary federal agency responsible for enforcing compliance with federal consumer protection laws for financial institutions, such as finance companies, which are not overseen by another financial regulator. Much of this structure has developed as the result of statutory and regulatory changes that were often implemented in response to financial crises or significant developments in the financial services sector.
For example, the Federal Reserve System was created in 1913 in response to financial panics and instability around the turn of the century, and much of the remaining structure for bank and securities regulation was created as a result of the turmoil of the Great Depression era of the 1920s and 1930s. Changes in the types of financial activities permitted for depository institutions and their affiliates have also shaped the financial regulatory system over time. For example, under the Glass-Steagall provisions of the Banking Act of 1933, financial institutions were prohibited from simultaneously offering commercial and investment banking services, but with the passage of the Gramm-Leach-Bliley Act of 1999 (GLBA), Congress permitted financial institutions to fully engage in both types of activities. Several key developments in financial markets and products in the past few decades have significantly challenged the existing financial regulatory structure. (See fig. 1.) First, the last 30 years have seen waves of mergers among financial institutions within and across sectors, such that the United States, while still having large numbers of financial institutions, also has several very large globally active financial conglomerates that engage in a wide range of activities that have become increasingly interconnected. Regulators have struggled, and often failed, to mitigate the systemic risks posed by these conglomerates, and to ensure they adequately manage their risks. The proportion of firms conducting activities across the financial sectors of banking, securities, and insurance has increased significantly in recent years, but none of the regulators is tasked with assessing the risks posed across the entire financial system. A second dramatic development in U.S. financial markets in recent decades has been the increasingly critical roles played by less-regulated entities.
In the past, consumers of financial products generally dealt with entities such as banks, broker-dealers, and insurance companies that were regulated by a federal or state regulator. However, in the last few decades, various entities—nonbank lenders, hedge funds, credit rating agencies, and special-purpose investment entities—that are not always subject to full regulation by such authorities have become important participants in our financial services markets. These unregulated or less-regulated entities can sometimes provide substantial benefits by supplying information or allowing financial institutions to better meet the demands of consumers, investors, or shareholders, but they pose challenges to regulators that cannot fully oversee their activities. For example, significant participation in the subprime mortgage market by generally less-regulated nonbank lenders contributed to a dramatic loosening in underwriting standards leading up to the current financial crisis. A third development that has revealed limitations in the current regulatory structure has been the proliferation of more complex financial products. In particular, the increasing prevalence of new and more complex investment products has challenged regulators and investors, and consumers have faced difficulty understanding new and increasingly complex retail mortgage and credit products. Regulators failed to adequately oversee the sale of mortgage products that posed risks to consumers and the stability of the financial system. Fourth, standard setters for accounting and financial regulators have faced growing challenges in ensuring that accounting and audit standards appropriately respond to financial market developments, and in addressing challenges arising from the global convergence of accounting and auditing standards. Finally, with the increasingly global aspects of financial markets, the current fragmented U.S.
regulatory structure has complicated some efforts to coordinate internationally with other regulators. For example, the current system has complicated the ability of financial regulators to convey a single U.S. position in international discussions, such as the Basel Accords process for developing international capital standards, and international officials have also indicated that the lack of a single point of contact on, for example, insurance issues has complicated regulatory decision making. As a result of significant market developments in recent decades that have outpaced a fragmented and outdated regulatory structure, significant reforms to the U.S. regulatory system are critically and urgently needed. The current system has important weaknesses that, if not addressed, will continue to expose the nation’s financial system to serious risks. As early as 1994, we identified the need to examine the federal financial regulatory structure, including the need to address the risks from new unregulated products. Since then, we have described various options for Congress to consider, each of which provides potential improvements, as well as some risks and potential costs. Our report offers a framework for crafting and evaluating regulatory reform proposals; it consists of the following nine characteristics that should be reflected in any new regulatory system. By applying the elements of this framework, the relative strengths and weaknesses of any reform proposal should be better revealed, and policymakers should be able to focus on identifying trade-offs and balancing competing goals. Similarly, the framework could be used to craft proposals, or to identify aspects to be added to existing proposals to make them more effective and appropriate for addressing the limitations of the current system. 1. Clearly defined regulatory goals.
A regulatory system should have goals that are clearly articulated and relevant, so that regulators can effectively conduct activities to implement their missions. A critical first step to modernizing the regulatory system and enhancing its ability to meet the challenges of a dynamic financial services industry is to clearly define regulatory goals and objectives. In the background of our report, we identified four broad goals of financial regulation that regulators have generally sought to achieve. These include ensuring adequate consumer protections, ensuring the integrity and fairness of markets, monitoring the safety and soundness of institutions, and acting to ensure the stability of the overall financial system. However, these goals are not always explicitly set in the federal statutes and regulations that govern these regulators. Having specific goals clearly articulated in legislation could serve to better focus regulators on achieving their missions with greater certainty and purpose, and provide continuity over time. Given some of the key changes in financial markets discussed in our report—particularly the increased interconnectedness of institutions, the increased complexity of products, and the increasingly global nature of financial markets—Congress should consider the benefits that may result from re-examining the goals of financial regulation and making explicit a set of comprehensive and cohesive goals that reflect today’s environment. For example, it may be beneficial to have a clearer focus on ensuring that products are not sold with unsuitable, unfair, deceptive, or abusive features; that systemic risks and the stability of the overall financial system are specifically addressed; or that U.S. firms are competitive in a global environment. This may be especially important given the history of financial regulation and the ad hoc approach through which the existing goals have been established. 
We found varying views about the goals of regulation and how they should be prioritized. For example, representatives of some regulatory agencies and industry groups emphasized the importance of creating a competitive financial system, whereas members of one consumer advocacy group noted that reforms should focus on improving regulatory effectiveness rather than addressing concerns about market competitiveness. In addition, as the Federal Reserve notes, financial regulatory goals often will prove interdependent and at other times may conflict. Revisiting the goals of financial regulation would also help ensure that all involved entities—legislators, regulators, institutions, and consumers—are able to work jointly to meet the intended goals of financial regulation. Such goals and objectives could help establish agency priorities and define responsibility and accountability for identifying risks, including those that cross markets and industries. Policymakers should also carefully define jurisdictional lines and weigh the advantages and disadvantages of having overlapping authorities. While ensuring that the primary goals of financial regulation—including system soundness, market integrity, and consumer protection—are better articulated for regulators, policymakers will also have to ensure that regulation is balanced with other national goals, including facilitating capital raising, innovation, and other benefits that foster long-term growth, stability, and welfare of the United States. Once these goals are agreed upon, policymakers will need to determine the extent to which goals need to be clarified and specified through rules and requirements, or whether to avoid such specificity and provide regulators with greater flexibility in interpreting such goals. Some reform proposals suggest “principles-based regulation” in which regulators apply broad-based regulatory principles on a case-by-case basis. 
Such an approach offers the potential advantage of allowing regulators to better adapt to changing market developments. Proponents also note that such an approach would prevent institutions in a more rules-based system from complying with the exact letter of the law while still engaging in unsound or otherwise undesirable financial activities. However, such an approach has potential limitations. Opponents note that regulators may face challenges in implementing such a subjective set of principles. A lack of clear rules about activities could lead to litigation if financial institutions and consumers alike disagree with how regulators interpret goals. Opponents of principles-based regulation note that industry participants who support such an approach have also in many cases advocated for bright-line standards and increased clarity in regulation, which may be counter to a principles-based system. The most effective approach may involve both a set of broad underlying principles and some clear technical rules prohibiting specific activities that have been identified as problematic. Key issues to be addressed: Clarify and update the goals of financial regulation and provide sufficient information on how potentially conflicting goals might be prioritized. Determine the appropriate balance of broad principles and specific rules that will result in the most effective and flexible implementation of regulatory goals. 2. Appropriately comprehensive. A regulatory system should ensure that financial institutions and activities are regulated in a way that ensures regulatory goals are fully met. As such, activities that pose risks to consumer protection, financial stability, or other goals should be comprehensively regulated, while recognizing that not all activities will require the same level of regulation.
A financial regulatory system should effectively meet the goals of financial regulation, as articulated as part of this process, in a way that is appropriately comprehensive. In doing so, policymakers may want to consider how to ensure that both the breadth and depth of regulation are appropriate and adequate. That is, policymakers and regulators should consider how to make determinations about which activities and products, both new and existing, require some aspect of regulatory involvement to meet regulatory goals, and then make determinations about how extensive such regulation should be. As we noted in our report, gaps in the current level of federal oversight of mortgage lenders, credit rating agencies, and certain complex financial products such as collateralized debt obligations (CDOs) and credit default swaps likely have contributed to the current crisis. Congress and regulators may also want to revisit the extent of regulation for entities such as banks that have traditionally fallen within full federal oversight but for which existing regulatory efforts, such as oversight related to risk management and lending standards, have in some cases been proven inadequate by recent events. However, overly restrictive regulation can stifle the financial sector’s ability to innovate and stimulate capital formation and economic growth. Regulators have struggled to balance these competing objectives, and the current crisis appears to reveal that the proper balance was not in place in the regulatory system to date. Key issues to be addressed: Identify risk-based criteria, such as a product’s or institution’s potential to harm consumers or create systemic problems, for determining the appropriate level of oversight for financial activities and institutions. Identify ways that regulation can provide protection but avoid hampering innovation, capital formation, and economic growth. 3. Systemwide focus.
A regulatory system should include a mechanism for identifying, monitoring, and managing risks to the financial system regardless of the source of the risk or the institutions in which it is created. A regulatory system should focus on risks to the financial system, not just institutions. As noted in our report, with multiple regulators primarily responsible for individual institutions or markets, none of the financial regulators is tasked with assessing the risks posed across the entire financial system by a few institutions or by the collective activities of the industry. The collective activities of a number of entities—including mortgage brokers, real estate professionals, lenders, borrowers, securities underwriters, investors, rating agencies, and others—likely all contributed to the recent market crisis, but no one regulator had the necessary scope of oversight to identify the risks to the broader financial system. Similarly, once firms began to fail and the full extent of the financial crisis became clear, no formal mechanism existed to monitor market trends and potentially stop or help mitigate the fallout from these events. Having a single entity responsible for assessing threats to the overall financial system could prevent some of the crises that we have seen in the past. For example, in its Blueprint for a Modernized Financial Regulatory Structure, Treasury proposed expanding the responsibilities of the Federal Reserve to create a “market stability regulator” that would have broad authority to gather and disclose appropriate information, collaborate with other regulators on rulemaking, and take corrective action as necessary in the interest of overall financial market stability. Such a regulator could assess the systemic risks that arise at financial institutions, within specific financial sectors, across the nation, and globally.
However, policymakers should consider that a potential disadvantage of providing the agency with such broad responsibility for overseeing nonbank entities could be that it may imply official government support or endorsement, such as a government guarantee, of such activities, and thus encourage greater risk taking by these financial institutions and investors. Regardless of whether a new regulator is created, all regulators under a new system should consider how their activities could better identify and address systemic risks posed by their institutions. As the Federal Reserve Chairman has noted, regulation and supervision of financial institutions is a critical tool for limiting systemic risk. This will require broadening the focus from individual safety and soundness of institutions to a systemwide oversight approach that includes potential systemic risks and weaknesses. A systemwide focus should also increase attention on how the incentives and constraints created by regulations affect risk taking throughout the business cycle, and what actions regulators can take to anticipate and mitigate such risks. However, as the Federal Reserve Chairman has noted, the more comprehensive the approach, the more technically demanding and costly it would be for regulators and affected institutions. Key issues to be addressed: Identify approaches to broaden the focus of individual regulators or establish new regulatory mechanisms for identifying and acting on systemic risks. Determine what additional authorities a regulator or regulators should have to monitor and act to reduce systemic risks. 4. Flexible and adaptable. A regulatory system should be adaptable and forward-looking such that regulators can readily adapt to market innovations and changes and include a mechanism for evaluating potential new risks to the system.
A regulatory system should be designed such that regulators can readily adapt to market innovations and changes and include a formal mechanism for evaluating the full potential range of risks of new products and services to the system, market participants, and customers. An effective system could include a mechanism for monitoring market developments— such as broad market changes that introduce systemic risk, or new products and services that may pose more confined risks to particular market segments—to determine the degree, if any, to which regulatory intervention might be required. The rise of a very large market for credit derivatives, while providing benefits to users, also created exposures that warranted actions by regulators to rescue large individual participants in this market. While efforts are under way to create risk-reducing clearing mechanisms for this market, a more adaptable and responsive regulatory system might have recognized this need earlier and addressed it sooner. Some industry representatives have suggested that principles-based regulation would provide such a mechanism. Designing a system to be flexible and proactive also involves determining whether Congress, regulators, or both should make such determinations, and how such an approach should be clarified in laws or regulations. Important questions also exist about the extent to which financial regulators should actively monitor and, where necessary, approve new financial products and services as they are developed to ensure the least harm from inappropriate products. Some individuals commenting on this framework, including industry representatives, noted that limiting government intervention in new financial activities until it has become clear that a particular activity or market poses a significant risk and therefore warrants intervention may be more appropriate. 
As with other key policy questions, this may be answered with a combination of both approaches, recognizing that a product approval approach may be appropriate for some innovations with greater potential risk, while other activities may warrant a more reactive approach. Key issues to be addressed: Determine how to effectively monitor market developments to identify potential risks; the degree, if any, to which regulatory intervention might be required; and who should hold such a responsibility. Consider how to strike the right balance between overseeing new products as they come onto the market to take action as needed to protect consumers and investors, without unnecessarily hindering innovation. 5. Efficient and effective. A regulatory system should provide efficient oversight of financial services by eliminating overlapping federal regulatory missions, where appropriate, and minimizing regulatory burden while effectively achieving the goals of regulation. A regulatory system should provide for the efficient and effective oversight of financial services. Accomplishing this in a regulatory system involves many considerations. First, an efficient regulatory system is designed to accomplish its regulatory goals using the least amount of public resources. In this sense, policymakers must consider the number, organization, and responsibilities of each agency, and eliminate undesirable overlap in agency activities and responsibilities. Determining what is undesirable overlap is a difficult decision in itself. Under the current U.S. system, financial institutions often have several options for how to operate their business and who will be their regulator. For example, a new or existing depository institution can choose among several charter options. 
Having multiple regulators performing similar functions does allow these agencies to develop alternative or innovative approaches to regulation separately, with the best-performing approach becoming known over time. Such proven approaches can then be adopted by the other agencies. On the other hand, this could lead to regulatory arbitrage, in which institutions take advantage of variations in how agencies implement regulatory responsibilities in order to be subject to less scrutiny. Both situations have occurred under our current structure. With that said, recent events clearly have shown that the fragmented U.S. regulatory structure contributed to failures by the existing regulators to adequately protect consumers and ensure financial stability. As we note in our report, efforts by regulators to respond to the increased risks associated with new mortgage products were sometimes slowed in part because of the need for five federal regulators to coordinate their response. The Chairman of the Federal Reserve has similarly noted that the different regulatory and supervisory regimes for lending institutions and mortgage brokers made monitoring such institutions difficult for both regulators and investors. Similarly, we noted in our report that the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. One first step to addressing such problems is to seriously consider the need to consolidate depository institution oversight among fewer agencies. Since 1996, we have been recommending that the number of federal agencies with primary responsibilities for bank oversight be reduced. Such a move would make the system more efficient and improve consistency in regulation, another important characteristic of an effective regulatory system.
In addition, Congress could consider the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity. We have not studied the issue of an optional federal charter for insurers, but have through the years noted difficulties with efforts to harmonize insurance regulation across states through the structure based on the National Association of Insurance Commissioners (NAIC). The establishment of a federal insurance charter and regulator could help alleviate some of these challenges, but such an approach could also have unintended consequences for state regulatory bodies and for insurance firms as well. Also, given the challenges associated with increasingly complex investment and retail products as discussed earlier, policymakers will need to consider how best to align agency responsibilities to better ensure that consumers and investors are provided with clear, concise, and effective disclosures for all products. Organizing agencies around regulatory goals as opposed to the existing sector-based regulation may be one way to improve the effectiveness of the system, especially given some of the market developments discussed earlier. Whatever the approach, policymakers should seek to minimize conflict in regulatory goals across regulators, or provide for efficient mechanisms to coordinate in cases where goals inevitably overlap. For example, in some cases, the safety and soundness of an individual institution may have implications for systemic risk, or addressing an unfair or deceptive act or practice at a financial institution may have implications for the institution’s safety and soundness by increasing reputational risk. If a regulatory system assigns these goals to different regulators, it will be important to establish mechanisms for them to coordinate. Proposals to consolidate regulatory agencies for the purpose of promoting efficiency should also take into account any potential trade-offs related to effectiveness.
For example, to the extent that policymakers see value in the ability of financial institutions to choose their regulator, consolidating certain agencies may reduce such benefits. Similarly, some individuals have commented that the current system of multiple regulators has led to the development of expertise among agency staff in particular areas of financial market activities that might be threatened if the system were to be consolidated. Finally, policymakers may want to ensure that any transition from the current financial system to a new structure minimizes, to the extent possible, disruption to the operation of financial markets or risks to the government, especially given the current challenges faced in today’s markets and broader economy. A financial system should also be efficient by minimizing the burden on regulated entities to the extent possible while still achieving regulatory goals. Under our current system, many financial institutions, and especially large institutions that offer services that cross sectors, are subject to supervision by multiple regulators. While steps toward consolidated supervision and designating primary supervisors have helped alleviate some of the burden, industry representatives note that many institutions face significant costs as a result of the existing financial regulatory system that could be lessened. Such costs, imposed in an effort to meet certain regulatory goals such as safety and soundness and consumer protection, can run counter to other goals of a financial system by stifling innovation and competitiveness. In addressing this concern, it is also important to consider the potential benefits that might result in some cases from having multiple regulators overseeing an institution.
For example, representatives of state banking and other institution regulators, and consumer advocacy organizations, note that concurrent jurisdiction—between two federal regulators or a federal and state regulator—can provide needed checks and balances against individual financial regulators who have not always reacted appropriately and in a timely way to address problems at institutions. They also note that states may move more quickly and more flexibly to respond to activities causing harm to consumers. Some types of concurrent jurisdiction, such as enforcement authority, may be less burdensome to institutions than others, such as ongoing supervision and examination.

Key issues to be addressed:
- Consider the appropriate role of the states in a financial regulatory system and how federal and state roles can be better harmonized.
- Determine and evaluate the advantages and disadvantages of having multiple regulators, including nongovernmental entities such as SROs, share responsibilities for regulatory oversight.
- Identify ways that the U.S. regulatory system can be made more efficient, either through consolidating agencies with similar roles or through minimizing unnecessary regulatory burden.
- Consider carefully how any changes to the financial regulatory system may negatively impact financial market operations and the broader economy, and take steps to minimize such consequences.

6. Consistent consumer and investor protection. A regulatory system should include consumer and investor protection as part of the regulatory mission to ensure that market participants receive consistent, useful information, as well as legal protections for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. A regulatory system should be designed to provide high-quality, effective, and consistent protection for consumers and investors in similar situations.
In doing so, it is important to recognize distinctions between retail consumers and more sophisticated consumers such as institutional investors, where appropriate given the context of the situation. Different disclosures and regulatory protections may be necessary for these different groups. Consumer protection should be viewed from the perspective of the consumer rather than through the various and sometimes divergent perspectives of the multitude of federal regulators that currently have responsibilities in this area. As discussed in our report, many consumers who received loans in the last few years did not understand the risks associated with taking out their loans, especially in the event that housing prices would not continue to increase at the rate they had in recent years. In addition, increasing evidence exists that many Americans lack financial literacy, and the expansion of new and more complex products will continue to create challenges in this area. Furthermore, regulators with existing authority to better protect consumers did not always exercise that authority effectively. In considering a new regulatory system, policymakers should consider the significant lapses in our regulatory system’s focus on consumer protection and ensure that such a focus is prioritized in any reform efforts. For example, policymakers should identify ways to improve upon the existing, largely fragmented, system of regulators that must coordinate to act in these areas. This should include serious consideration of whether to consolidate regulatory responsibilities to streamline and improve the effectiveness of consumer protection efforts.
Another way that some market observers have argued that consumer protections could be enhanced and harmonized across products is to extend suitability requirements—which require securities brokers making recommendations to customers to have reasonable grounds for believing that the recommendation is suitable for the customer—to mortgage and other products. Additional consideration could also be given to determining whether certain products are simply too complex to be well understood and to making judgments about limiting or curtailing their use.

Key issues to be addressed:
- Consider how prominent the regulatory goal of consumer protection should be in the U.S. financial regulatory system.
- Determine what amount, if any, of consolidation of responsibility may be necessary to enhance and harmonize consumer protections, including suitability requirements and disclosures across the financial services industry.
- Consider what distinctions are necessary between retail and wholesale products, and how such distinctions should affect how they are regulated.
- Identify opportunities to protect and empower consumers through improving their financial literacy.

7. Regulators provided with independence, prominence, authority, and accountability. A regulatory system should ensure that regulators have independence from inappropriate influence; have sufficient resources, clout, and authority to carry out and enforce statutory missions; and are clearly accountable for meeting regulatory goals. A regulatory system should ensure that any entity responsible for financial regulation is independent from inappropriate influence; has adequate prominence, authority, and resources to carry out and enforce its statutory mission; and is clearly accountable for meeting regulatory goals.
With respect to independence, policymakers may want to consider advantages and disadvantages of different approaches to funding agencies, especially to the extent that agencies might face difficulty remaining independent if they are funded by the institutions they regulate. Under the current structure, for example, the Federal Reserve primarily is funded by income earned from U.S. government securities that it has acquired through open market operations and does not assess charges to the institutions it oversees. In contrast, OCC and OTS are funded primarily by assessments on the firms they supervise. Decision makers should consider whether some of these various funding mechanisms are more likely to ensure that a regulator will take action against its regulated institutions without regard to the potential impact on its own funding. With respect to prominence, each regulator must receive appropriate attention and support from top government officials. Inadequate prominence in government may make it difficult for a regulator to raise safety and soundness or other concerns to Congress and the administration in a timely manner. Mere knowledge of a deteriorating situation would be insufficient if a regulator were unable to persuade Congress and the administration to take timely corrective action. This problem would be exacerbated if a regulated institution had more political clout and prominence than its regulator because the institution could potentially block action from being taken. In considering authority, agencies must have the necessary enforcement and other tools to effectively implement their missions to achieve regulatory goals. For example, in a 2007 report we expressed concerns over the appropriateness of having OTS oversee diverse global financial firms given the size of the agency relative to the institutions for which it was responsible. 
It is important for a regulatory system to ensure that agencies are provided with adequate resources and expertise to conduct their work effectively. A regulatory system should also include adequate checks and balances to ensure the appropriate use of agency authorities. With respect to accountability, policymakers may also want to consider different governance structures at agencies—the current system includes a combination of agency heads and independent boards or commissions—and how to ensure that agencies are recognized for successes and held accountable for failures to act in accordance with regulatory goals.

Key issues to be addressed:
- Determine how to structure and fund agencies to ensure each has adequate independence, prominence, tools, authority, and accountability.
- Consider how to provide an appropriate level of authority to an agency while ensuring that it appropriately implements its mission without abusing its authority.
- Ensure that the regulatory system includes effective mechanisms for holding regulators accountable.

8. Consistent financial oversight. A regulatory system should ensure that similar institutions, products, risks, and services are subject to consistent regulation, oversight, and transparency, which should help minimize negative competitive outcomes while harmonizing oversight, both within the United States and internationally. A regulatory system should ensure that similar institutions, products, and services posing similar risks are subject to consistent regulation, oversight, and transparency. Identifying which institutions and which of their products and services pose similar risks is not easy and involves a number of important considerations. Two institutions that look very similar may in fact pose very different risks to the financial system, and therefore may call for significantly different regulatory treatment.
However, activities conducted by different types of financial institutions that pose similar risks to those institutions or the financial system should be regulated similarly to prevent competitive disadvantages among institutions. Streamlining the regulation of similar products across sectors could also help prepare the United States for challenges that may result from increased globalization and potential harmonization in regulatory standards. Such efforts are under way in other jurisdictions. For example, at a November 2008 summit in the United States, the Group of 20 countries pledged to strengthen their regulatory regimes and ensure that all financial markets, products, and participants are consistently regulated or subject to oversight, as appropriate to their circumstances. Similarly, a working group in the European Union is slated by the spring of 2009 to propose ways to strengthen European supervisory arrangements, including addressing how their supervisors should cooperate with other major jurisdictions to help safeguard financial stability globally. Promoting consistency in regulation of similar products should be done in a way that does not sacrifice the quality of regulatory oversight. As we noted in a 2004 report, different regulatory treatment of bank and financial holding companies, consolidated supervised entities, and other holding companies may not provide a basis for consistent oversight of their consolidated risk management strategies, guarantee competitive neutrality, or contribute to better oversight of systemic risk. Recent events further underscore the limitations brought about when there is a lack of consistency in oversight of large financial institutions. As such, Congress and regulators will need to seriously consider how best to consolidate responsibilities for oversight of large financial conglomerates as part of any reform effort.
Key issues to be addressed:
- Identify institutions and products and services that pose similar risks.
- Determine the level of consolidation necessary to streamline financial regulation activities across the financial services industry.
- Consider the extent to which activities need to be coordinated internationally.

9. Minimal taxpayer exposure. A regulatory system should have adequate safeguards that allow financial institution failures to occur while limiting taxpayers’ exposure to financial risk. Policymakers should consider identifying the best safeguards and assignment of responsibilities for responding to situations where taxpayers face significant exposures, and should consider providing clear guidelines when regulatory intervention is appropriate. While an ideal system would allow firms to fail without negatively affecting other firms—and therefore avoid any moral hazard that may result—policymakers and regulators must consider the realities of today’s financial system. In some cases, the immediate use of public funds to prevent the failure of a critically important financial institution may be a worthwhile use of such funds if it ultimately serves to prevent a systemic crisis that would result in much greater use of public funds in the long run. However, an effective regulatory system that incorporates the characteristics noted previously, especially by ensuring a systemwide focus, should be better equipped to identify and mitigate problems before it becomes necessary to make decisions about whether to let a financial institution fail. An effective financial regulatory system should also strive to minimize systemic risks resulting from interrelationships between firms and limitations in market infrastructures that prevent the orderly unwinding of firms that fail.
Another important consideration in minimizing taxpayer exposure is to ensure that financial institutions provided with a government guarantee that could result in taxpayer exposure are also subject to an appropriate level of regulatory oversight to fulfill their responsibilities.

Key issues to be addressed:
- Identify safeguards that are most appropriate to prevent systemic crises while minimizing moral hazard.
- Consider how a financial system can most effectively minimize taxpayer exposure to losses related to financial instability.

Finally, although significant changes may be required to modernize the U.S. financial regulatory system, policymakers should consider carefully how best to implement the changes in such a way that the transition to a new structure does not hamper the functioning of the financial markets, individual financial institutions’ ability to conduct their activities, and consumers’ ability to access needed services. For example, if the changes require regulators or institutions to make systems changes, file registrations, or other activities that could require extensive time to complete, the changes could be implemented in phases with specific target dates around which the affected entities could formulate plans. In addition, our past work has identified certain critical factors that should be addressed to ensure that any large-scale transitions among government agencies are implemented successfully. Although all of these factors are likely important for a successful transformation for the financial regulatory system, Congress and existing agencies should pay particular attention to ensuring there are effective communication strategies so that all affected parties, including investors and consumers, clearly understand any changes being implemented.
In addition, attention should be paid to developing a sound human capital strategy to ensure that any new or consolidated agencies are able to retain and attract additional quality staff during the transition period. Finally, policymakers should consider how best to retain and utilize the existing skills and knowledge base within agencies subject to changes as part of a transition. Mr. Chairman and Members of the Committee, I appreciate the opportunity to discuss these critically important issues and would be happy to answer any questions that you may have. Thank you.

For further information on this testimony, please contact Orice M. Williams at (202) 512-8678 or williamso@gao.gov, or Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov.

Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Financial Regulatory System. GAO-09-216. Washington, D.C.: January 8, 2009.
Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.
Hedge Funds: Regulators and Market Participants Are Taking Steps to Strengthen Market Discipline, but Continued Attention Is Needed. GAO-08-200. Washington, D.C.: January 24, 2008.
Information on Recent Default and Foreclosure Trends for Home Mortgages and Associated Economic and Market Developments. GAO-08-78R. Washington, D.C.: October 16, 2007.
Financial Regulation: Industry Trends Continue to Challenge the Federal Regulatory Structure. GAO-08-32. Washington, D.C.: October 12, 2007.
Financial Market Regulation: Agencies Engaged in Consolidated Supervision Can Strengthen Performance Measurement and Collaboration. GAO-07-154. Washington, D.C.: March 15, 2007.
Alternative Mortgage Products: Impact on Defaults Remains Unclear, but Disclosure of Risks to Borrowers Could Be Improved. GAO-06-1021. Washington, D.C.: September 19, 2006.
Credit Cards: Increased Complexity in Rates and Fees Heightens Need for More Effective Disclosures to Consumers. GAO-06-929. Washington, D.C.: September 12, 2006.
Financial Regulation: Industry Changes Prompt Need to Reconsider U.S. Regulatory Structure. GAO-05-61. Washington, D.C.: October 6, 2004.
Consumer Protection: Federal and State Agencies Face Challenges in Combating Predatory Lending. GAO-04-280. Washington, D.C.: January 30, 2004.
Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk. GAO/GGD-00-3. Washington, D.C.: October 29, 1999.
Bank Oversight: Fundamental Principles for Modernizing the U.S. Structure. GAO/T-GGD-96-117. Washington, D.C.: May 2, 1996.
Financial Derivatives: Actions Needed to Protect the Financial System. GAO/GGD-94-133. Washington, D.C.: May 18, 1994.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses GAO's January 8, 2009, report that provides a framework for modernizing the outdated U.S. financial regulatory system. GAO prepared this work under the authority of the Comptroller General to help policymakers weigh various regulatory reform proposals and consider ways in which the current regulatory system could be made more effective and efficient. This testimony (1) describes how regulation has evolved in banking, securities, thrifts, credit unions, futures, insurance, secondary mortgage markets and other important areas; (2) describes several key changes in financial markets and products in recent decades that have highlighted significant limitations and gaps in the existing regulatory system; and (3) presents an evaluation framework that can be used by Congress and others to shape potential regulatory reform efforts. The current U.S. financial regulatory system has relied on a fragmented and complex arrangement of federal and state regulators--put into place over the past 150 years--that has not kept pace with major developments in financial markets and products in recent decades. Today, almost a dozen federal regulatory agencies, numerous self-regulatory organizations, and hundreds of state financial regulatory agencies share responsibility for overseeing the financial services industry. As the nation finds itself in the midst of one of the worst financial crises ever, it has become apparent that the regulatory system is ill-suited to meet the nation's needs in the 21st century. Several key changes in financial markets and products in recent decades have highlighted significant limitations and gaps in the existing regulatory system. First, regulators have struggled, and often failed, to mitigate the systemic risks posed by large and interconnected financial conglomerates and to ensure they adequately manage their risks. 
Second, regulators have had to address problems in financial markets resulting from the activities of large and sometimes less-regulated market participants--such as nonbank mortgage lenders, hedge funds, and credit rating agencies--some of which play significant roles in today's financial markets. Third, the increasing prevalence of new and more complex investment products has challenged regulators and investors, and consumers have faced difficulty understanding new and increasingly complex retail mortgage and credit products. Fourth, standard setters for accounting and financial regulators have faced growing challenges in ensuring that accounting and audit standards appropriately respond to financial market developments, and in addressing challenges arising from the global convergence of accounting and auditing standards. Finally, as financial markets have become increasingly global, the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. These significant developments have outpaced a fragmented and outdated regulatory structure, and, as a result, significant reforms to the U.S. regulatory system are critically and urgently needed. The current system has significant weaknesses that, if not addressed, will continue to expose the nation's financial system to serious risks. Our report offers a framework for crafting and evaluating regulatory reform proposals consisting of nine characteristics that should be reflected in any new regulatory system. By applying the elements of the framework, the relative strengths and weaknesses of any reform proposal should be better revealed, and policymakers should be able to focus on identifying trade-offs and balancing competing goals. Similarly, the framework could be used to craft proposals, or to identify aspects to be added to existing proposals to make them more effective and appropriate for addressing the limitations of the current system.
Physician groups with at least 200 physicians were eligible to apply for the PGP Demonstration and 10 were selected by CMS. (See table 1.) CMS’s technical review panel evaluated each applicant based on its organizational structure, operational feasibility, geographic location, and demonstration implementation strategy. Collectively, the 10 participating physician groups are all multispecialty practices comprising more than 6,000 physicians who provide care for more than 220,000 Medicare FFS beneficiaries. While all the participants have at least 200 physicians, group practice size varies widely, ranging from 232 to 1,291 physicians. Except for the Marshfield Clinic, all participants identified themselves as integrated delivery systems that include, in addition to their group practice, other health care entities such as hospitals, surgical centers, or laboratories. All of the participants have nonprofit tax status except the Everett Clinic and the Integrated Resources for the Middlesex Area (IRMA), which are for profit. Overall, a majority of the 10 participants are located in small cities and serve either predominantly rural or suburban areas. These participants provide care over wide geographic areas through satellite offices, with the number of physician group office locations per participant ranging from 10 to 65. Under the PGP Demonstration’s design, participating physician groups are eligible to earn annual cost-savings bonuses for generating Medicare program savings. Participants that received cost-savings bonuses were also eligible to receive additional bonuses for meeting certain quality targets. Both the cost-savings and quality bonuses are in addition to payments physicians receive under Medicare FFS.
There are three main steps in CMS’s bonus payment methodology to determine which participants are awarded bonus payments and the amount of these bonuses: (1) determination of eligibility for performance bonus payments, (2) determination of the size of the bonus pool, and (3) determination of actual bonus payments earned. (See fig. 1.) For the first step of the bonus payment methodology, to determine eligibility for receiving bonus payments, participating physician groups had to generate savings greater than 2 percent of their target expenditure amounts, relative to a comparison group of beneficiaries intended to have similar characteristics. CMS stated that the purpose of the 2 percent savings threshold was to further account for the possibility of random fluctuations in expenditures rather than actual savings. CMS also stated that it used separate comparison groups for each of the participants to distinguish the effect of the demonstration’s incentive payments from trends among Medicare beneficiaries unrelated to the demonstration. Operationally, Medicare beneficiaries were assigned to the comparison groups or to the participating physician groups retrospectively at the conclusion of each performance year, using Medicare claims data sent to CMS by providers following the delivery of care. As a part of the process of selecting beneficiaries for each comparison group that were similar to those served by the participating physician group they were being compared with, CMS ensured that beneficiaries (1) resided in the same geographic service areas as the beneficiaries assigned to the corresponding physician group; (2) had received at least one office or outpatient service, referred to as an evaluation and management (E&M) service, in that performance year; and (3) had not received any E&M services from the corresponding physician group that year or had been assigned to the participant’s group of beneficiaries in any previous performance year. 
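The comparison-group criteria above amount to a simple filter applied to claims data at the end of each performance year. The sketch below illustrates that filter; it is only an illustration of the stated rules, and the field names and flat-dictionary layout (`service_area`, `em_services_this_year`, and so on) are hypothetical, since CMS applies these criteria to actual Medicare claims files.

```python
# Illustrative sketch of the comparison-group selection rules described
# above. Field names and the data layout are hypothetical; CMS applies
# these criteria to Medicare claims data retrospectively at the end of
# each performance year.

def eligible_for_comparison_group(beneficiary, group_service_areas,
                                  previously_assigned_ids):
    """Apply the three stated criteria for placing a beneficiary in a
    participant's comparison group."""
    # (1) Resides in the same geographic service areas as beneficiaries
    #     assigned to the corresponding physician group.
    in_area = beneficiary["service_area"] in group_service_areas
    # (2) Received at least one evaluation and management (E&M) service
    #     during the performance year.
    had_em_service = beneficiary["em_services_this_year"] > 0
    # (3) Received no E&M services from the corresponding physician group
    #     that year, and was never assigned to the participant's group of
    #     beneficiaries in a previous performance year.
    saw_group = beneficiary["em_services_from_group"] > 0
    previously_assigned = beneficiary["id"] in previously_assigned_ids
    return (in_area and had_em_service
            and not saw_group and not previously_assigned)
```

A beneficiary failing any one criterion is excluded, which is how the methodology keeps each comparison group independent of the patients the participating physician group actually served.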
For step two, determining the size of the bonus pools, participating physician groups that generated savings beyond the 2 percent threshold were eligible to receive up to 80 percent of those savings as potential bonuses. The remaining 20 percent, and all other savings not awarded to the participants, were retained by the Medicare program. In the third step, the determination of actual bonus amounts earned, eligible participating physician groups could receive up to the full amount available in their bonus pools as cost-savings bonuses and quality-of-care bonuses. Specifically, for PY1, participants who met the 2 percent cost-savings threshold received 70 percent of the bonus pool as a cost-savings bonus and could earn up to the remaining 30 percent as a quality-of-care bonus. The quality-of-care bonus was awarded to participants that met or exceeded various quality-of-care targets within an area of clinical focus selected by CMS, in collaboration with other organizations and the participating physician groups. In PY1, CMS focused on diabetes management, and required participants to meet targets on a set of 10 diabetes measures, including whether a beneficiary received an eye exam or foot exam. To meet the quality-of-care target for each of the diabetes measures, a participant had to either improve its performance by a certain amount relative to its baseline performance or meet a national set of performance measures, referred to as HEDIS® measures, established by the National Committee for Quality Assurance (NCQA). Participants could also receive a prorated share of the quality-of-care bonus, based on success meeting some, but not all, of the quality-of-care targets. While the bonus payment methodology will remain the same throughout the demonstration, CMS added other quality-of-care measures and increased the relative significance of the quality-of-care measures in PY2 and PY3.
In PY2, quality-of-care measures pertaining to CHF and coronary artery disease (CAD) were added to the existing diabetes measures. In PY3, quality-of-care measures pertaining to the management of hypertension and screening for breast and colorectal cancer were added to the existing diabetes, CHF, and CAD measures. The proportion of the bonus pool dedicated to meeting the quality-of-care targets—30 percent in PY1—also increased in each performance year. For PY2, the potential quality-of-care bonus increased to 40 percent of the potential bonus pool, and the proportion of the bonus pool that will be paid as a cost-savings bonus decreased to 60 percent. For PY3, the cost-savings and quality-of-care bonuses each will constitute 50 percent of the total bonus paid. In July 2007, CMS reported that in PY1, 2 of the 10 participating physician groups earned bonuses for achieving cost-saving and quality-of-care targets, while all 10 participants achieved 7 or more of the 10 quality-of-care targets. The Marshfield Clinic and the University of Michigan Faculty Group Practice received performance bonus payments of approximately $4.6 million and $2.8 million, respectively, in PY1. The Marshfield Clinic generated approximately $6 million in Medicare savings in PY1, above the 2 percent threshold established by CMS. Of this $6 million in savings, the Medicare program retained approximately $1.2 million, and Marshfield Clinic earned $3.4 million for the cost-savings component of the bonus and $1.2 million for meeting 9 of the 10 quality-of-care targets. The University of Michigan Faculty Group Practice generated approximately $3.5 million in savings in PY1 above the 2 percent threshold. Medicare retained approximately $700,000 and the University of Michigan Faculty Group Practice earned nearly $2 million for the cost-savings component of the bonus, and just over $800,000 for meeting 9 of the 10 quality-of-care targets.
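Worked through as arithmetic, the three-step methodology can be checked against the reported PY1 figures. The sketch below is a simplified illustration, not CMS's official algorithm: it assumes the full 80 percent of savings enters the bonus pool and that the quality-of-care bonus is prorated linearly in the number of targets met; the demonstration's actual proration may have weighted individual measures differently, so the quality component is only approximate.

```python
# Simplified sketch of the PY1 bonus arithmetic described above.
# Assumptions (not from CMS documentation): the full 80 percent of
# savings beyond the threshold forms the bonus pool, and the
# quality-of-care bonus is prorated linearly in targets met.

def py1_bonus(savings_above_threshold, targets_met, total_targets=10):
    """Return (cost_savings_bonus, quality_bonus, medicare_retained)."""
    bonus_pool = 0.80 * savings_above_threshold        # step 2: 80% of savings
    medicare_retained = savings_above_threshold - bonus_pool   # 20% retained
    cost_savings_bonus = 0.70 * bonus_pool             # step 3: 70% of pool in PY1
    quality_bonus = 0.30 * bonus_pool * (targets_met / total_targets)
    return cost_savings_bonus, quality_bonus, medicare_retained

# Marshfield Clinic: ~$6 million in savings above the 2 percent
# threshold, 9 of 10 quality-of-care targets met.
cost, quality, retained = py1_bonus(6_000_000, 9)
# The cost-savings component (~$3.4 million) and Medicare's retained
# share ($1.2 million) match the rounded figures reported above; the
# quality component depends on how individual measures were weighted,
# which this sketch simplifies.
```

Applied to the University of Michigan figures (about $3.5 million in savings above the threshold), the same arithmetic yields roughly $700,000 retained by Medicare and a cost-savings bonus near $2 million, again consistent with the reported amounts.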
Of the remaining eight participating physician groups that did not earn cost-savings bonuses in PY1, all performed well in meeting the quality-of-care targets. Specifically, all eight of these participants achieved 7 or more of the 10 quality-of-care targets, with two participants meeting all 10 quality-of-care targets and two others achieving 9 of the targets. In addition, six of the participants came close to achieving the 2 percent threshold for the cost-savings component of the performance bonus payment in PY1. These six groups reduced their Medicare spending growth rates compared to their comparison group, but not beyond the 2 percent threshold. (See fig. 2.) While the number of pay-for-performance programs—programs in which a portion of a provider’s payment is based on their performance against defined measures—has increased in recent years, this growth has occurred largely in the private sector by commercial health plans. MedVantage reported that 107 pay-for-performance programs were in place as of November 2005, up from 84 the year before. Of these 107 pay-for-performance programs, 21 were public sector programs, of which 10 were Medicare programs. Currently, CMS has 5 demonstrations, including the PGP Demonstration, that test alternative physician payment methods. (See table 2.) Among these 5 physician pay-for-performance demonstrations, 4, including the PGP Demonstration, test physician pay-for-performance methods by offering incentives to physicians for meeting clinical performance standards, while 1 focuses on aligning financial incentives between hospitals and physicians. The PGP Demonstration was the first of CMS’s Medicare demonstrations to test physician pay-for-performance. Participants in CMS’s Medicare Health Care Quality Demonstration, projected to begin in 2008, may elect to use the overall design and bonus payment methodology of the PGP Demonstration.
Among CMS’s other pay-for-performance demonstrations that are not physician-related is the Premier Hospital Quality Incentive Demonstration, a hospital-specific pay-for-performance demonstration for more than 260 hospitals in the Premier Inc. system. Under this demonstration, CMS provides bonus payments for hospitals with the highest levels of performance in five clinical conditions, including acute myocardial infarction. A recent study examining this demonstration concluded that among hospitals receiving performance bonuses, patients did not have a significant improvement in quality of care or outcomes for acute myocardial infarction.

The participating physician groups implemented care coordination programs to achieve cost savings and improved their management processes to meet quality improvement targets CMS set for particular diabetes measures in PY1. More specifically, management process improvements included enhancing information technology (IT) systems, incorporating more team-based approaches, and improving administrative processes. Despite early positive indicators in cost savings, the full impact of programs implemented for the PGP Demonstration, particularly in care coordination, is largely unknown because many programs were not in place for all 12 months of the first performance year.

The participating physician groups implemented 47 programs, which were either new or expansions of existing programs, to achieve cost savings and meet the CMS-set diabetes quality-of-care targets, with each participant implementing from 2 to 9 programs. (See app. II for a complete list of new and expanded programs implemented for the PGP Demonstration.) More specifically, participants focused nearly three-quarters of their new and expanded programs on care coordination—programs that manage the care of a small number of chronically ill and frail elderly patients who account for a disproportionately large share of overall costs. (See fig. 3.)
The remaining one-quarter of programs focused on patient education, medication-related issues, improving administrative processes, and other initiatives. Among the 47 programs, participating physician groups devoted the largest portion of their program resources to care coordination programs designed to reduce hospitalizations by improving post-acute care. Our analysis showed that for 9 of the 10 participants, at least half of demonstration-specific full-time equivalents (FTEs) were devoted to care coordination programs. (See table 3.) Participants told us they selected care coordination programs that provided post-acute care because they believed these programs would reduce future hospitalizations and yield the most cost savings in the shortest amount of time. For example, both Billings Clinic and Park Nicollet Health Services used a telephonic interactive voice response (IVR) system to monitor patients’ health status at home following a hospitalization or another significant health event.

Approximately half of the care coordination programs were case-management programs that targeted high-cost, high-risk patients with multiple medical conditions, while the other half were disease-management programs that treated patients with a specific disease, such as CHF. Seven participants focused on case-management programs by using care managers for patients with multiple medical conditions to reduce hospitalizations. For example, an official from the Dartmouth-Hitchcock Clinic stated that the clinic’s primary strategy for the PGP Demonstration was to reduce hospitalizations and readmissions through more effective discharge planning, such as calling patients at home following their hospital discharge and encouraging them to schedule follow-up appointments with their physicians.
Three participants committed the majority of their resources to disease-management programs; two of the three participants told us they focused on CHF because it is a costly disease to treat and would therefore generate savings within the first performance year. Other diseases, such as diabetes, could take several years to generate cost savings. CHF and diabetes are two of the most common chronic diseases among Medicare beneficiaries, according to recent health policy research.

All 10 participating physician groups reported that their care coordination programs were making progress both in achieving cost savings and in providing broader benefits to their programs and communities. In particular, four participants reported declines in hospitalizations for patients enrolled in their CHF care coordination programs. For example, Park Nicollet Health Services reported a 61 percent reduction in hospitalizations for patients enrolled in its CHF care-management program, which utilized an IVR system to interact with patients on a daily basis. Park Nicollet representatives estimated this program saved $4,680 yearly, on average, for each patient enrolled in the program. Because the program enrolled other Medicare and non-Medicare patients, its benefits extended beyond the patients assigned to Park Nicollet for the demonstration. Further, several participants stated that collaboration and information sharing among the 10 participants on designing and implementing programs and analyzing data resulted in improvements to their demonstration programs, which broadly benefited their organizations. Representatives from St. John’s Health System stated that creating a care-coordination program had additional benefits, including the adoption of such programs by other health systems and physician groups throughout the community.
Despite early positive indicators of cost savings, the full impact of programs implemented for the PGP Demonstration, particularly in care coordination, is largely unknown for a variety of reasons, including that many programs were not in place for all 12 months of the first performance year. Only 1 of the 10 participants had all of its programs in place for all 12 months of PY1. For example, the Marshfield Clinic had a case-management program operational for all 12 months of PY1 but a disease-management program operational for only 4 months. By the beginning of PY2, only 6 of the 10 participants had all of their care coordination programs operational. Officials from participating physician groups stated that program implementation delays were caused by program complexity, the process of gaining management approval for significant program start-up costs, and the need to educate physicians about the programs. In addition, two participants stated that because their care coordination programs were phased in throughout the first two performance years, PY3 may be the first year that the full impact of these programs is realized.

To meet the quality-of-care targets set by CMS, participating physician groups improved their management processes by investing in IT, creating team-based approaches, and improving administrative processes, particularly for diabetes management, the quality-of-care focus for PY1. To earn the maximum bonus, participants that met the 2 percent cost-savings target had to further meet a quality-of-care improvement target in a particular clinical area. The measures selected by CMS for each performance year of the demonstration focused on chronic conditions prevalent in the Medicare population that are treated in primary care. In PY1, CMS selected diabetes management as the focus for quality improvement for the demonstration participants.
See table 4 for a categorization of how the participants worked to improve quality, specifically for diabetes, by using physician feedback, patient registries, team-based approaches, and improved documentation. All participating physician groups made new investments in IT by adding features to existing electronic health record (EHR) systems or using technology to track physicians’ performance on the quality-of-care measures set by CMS. For example, the Marshfield Clinic implemented electronic alerts in its EHR system to remind clinical staff to provide care, such as immunizations. Participants primarily used electronic methods for physician feedback as a tool for physicians to track their performance and that of their peers to improve their internal operations and patient care. For example, Geisinger Health System’s physician feedback system provided physicians with access to monthly reports, which compared each physician’s performance in meeting the quality-of-care measures. According to administrators from Geisinger, this transparent approach fostered positive competition among its physicians to improve quality of care. Participants also invested in IT by creating electronic patient or disease-specific databases or lists, referred to as patient registries, to better identify patients eligible for enrollment in diabetes programs. St. John’s Health System, which did not have an EHR system, created an electronic patient registry to track patients with diabetes and to alert physicians to provide certain tests.

Six of the participating physician groups relied to a greater extent on a team-based approach to improve care processes. Using a team-based approach, participants expanded the roles and responsibilities of nonphysician staff such as nurses, medical assistants, and care managers so that they worked more effectively with physicians to deliver quality care.
Although the demonstration required additional quality reporting, officials from two of the participating physician groups stated that they were able to treat the same number of patients in a day. For example, Dartmouth-Hitchcock Clinic used care managers who were nurses to maximize the effectiveness of patients’ office visits. These staff scheduled lab tests in advance of patients’ office visits when appropriate, developed patient action plans, and communicated with physicians before and after patients’ arrivals. Physicians from Dartmouth-Hitchcock told us that the time they spent with patients had become more effective because of this new approach.

Four participating physician groups improved their administrative processes by creating better documentation methods for diabetes-related tests and exams. They created worksheets, derived from patients’ medical records, to ensure that patients received diabetes tests, such as foot and eye exams. In addition to improving documentation, these initiatives served as reminders to physicians to complete diabetes-related tests and exams and also reduced the burden of data collection for reporting purposes. For example, IRMA created paper forms that were added to patient records to collect data on tests as they were conducted. These forms were also intended to relieve some of the burden of collecting data for smaller practices within the organization. IRMA physicians also received paper worksheets at the point of care to help monitor and track care provided to their diabetes patients.

CMS’s design for the PGP Demonstration was generally a reasonable approach for rewarding participating physician groups for cost-savings and quality performance. However, the demonstration design created a particular challenge for CMS in providing timely performance feedback and bonus payments to the participants, which, if received more quickly, may have enabled them to improve their programs.
CMS’s design for the PGP Demonstration was generally a reasonable methodological approach for determining whether the actions taken by the participants resulted in cost savings and improvements in quality, and for rewarding participants as appropriate. In particular, three aspects of the PGP Demonstration design were consistent with established methodological practices considered effective: a rigorous research study design to isolate the effects of the demonstration’s incentives, a risk-adjustment approach to adjust for changes in patient health status, and a quality component to help ensure that participating physician groups did not achieve cost savings at the expense of quality.

CMS used a rigorous research design to enable it to isolate the effectiveness of the actions taken by each of the participants in the demonstration. Specifically, CMS used a modified “pre-test/post-test” control group design that is generally viewed by experts as an effective way to control for some of the most common threats to internal validity, in this case the ability of the study design to measure the true effects of CMS’s incentive payments. The features of CMS’s study design included a separate comparison group for each participant to distinguish the effects of the demonstration’s incentives from unrelated spending trends in the participants’ service areas. Comparison groups’ beneficiaries are drawn from the participants’ geographic service areas and, as such, are affected by the same local market trends as the participants. In addition, the study design included a baseline period, before the demonstration began, that helped to control for trends that may have occurred without demonstration-related interventions. A standard pre-test/post-test control group design would have randomly assigned beneficiaries to either a comparison group or a participant group.
To avoid having to restrict or control beneficiaries’ choice of providers and health care services, and to continue to operate within the Medicare FFS system while the demonstration was in place, CMS modified this standard approach. Rather than assigning beneficiaries randomly at the start of the demonstration to participant or comparison groups, the agency retrospectively assigned beneficiaries at the end of each year based on the beneficiaries’ natural use of outpatient evaluation and management (E&M) services.

CMS also used a rigorous risk-adjustment approach to adjust for changes in patients’ health status. Without these adjustments, CMS could not have been reasonably assured that changes in spending growth were not attributable to changes in patients’ health status and in the severity and complexity of their diagnoses. For the PGP Demonstration, CMS tailored the CMS-Hierarchical Condition Category (HCC) model, the risk-adjustment model that it currently uses to make capitation payments to Medicare managed care plans. This model accounts for changes in the health status of beneficiaries.

Furthermore, CMS incorporated a quality component into the research design, which helped ensure that participants would not achieve cost savings at the expense of quality. The quality-of-care measures CMS selected were based on a consensus of experts and were developed in collaboration with the American Medical Association and quality assurance organizations and with input from the participants. In addition, CMS has placed an increased emphasis on quality in its bonus payment methodology for future years. By PY3, half of the available bonus pool will be awarded based on each participant’s success in meeting quality-of-care metrics in six clinical areas: diabetes, CHF, CAD, hypertension management, breast cancer screening, and colorectal cancer screening.
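The core of the design described above (a per-beneficiary spending target that grows at the comparison group’s rate, with a 2 percent threshold before any bonus pool accrues) can be sketched as follows. The figures are hypothetical, and this is a simplified sketch: CMS’s actual calculation also applies CMS-HCC risk adjustment and assigns beneficiaries retrospectively.

```python
# A minimal sketch, with hypothetical figures, of the modified pre-test/
# post-test calculation: the per-beneficiary spending target grows at the
# comparison group's rate, and only savings beyond the 2 percent threshold
# fund the bonus pool. CMS's actual calculation also applies CMS-HCC risk
# adjustment and assigns beneficiaries retrospectively.

def cost_savings_pool(base_spend, participant_growth, comparison_growth,
                      beneficiaries, threshold=0.02):
    """Return the bonus pool: savings beyond `threshold` of the target,
    summed over assigned beneficiaries (0 if the threshold is not met)."""
    target = base_spend * (1 + comparison_growth)  # comparison-based target
    actual = base_spend * (1 + participant_growth)
    savings_rate = (target - actual) / target
    if savings_rate <= threshold:
        return 0.0
    return (target - actual - threshold * target) * beneficiaries

# Hypothetical group: $8,000 base per-beneficiary spending, 20,000 assigned
# beneficiaries, 3 percent spending growth versus 6 percent growth in its
# comparison group.
pool = cost_savings_pool(8_000, 0.03, 0.06, 20_000)
print(f"Bonus pool: ${pool:,.0f}")
```

Note that only savings beyond the 2 percent threshold fund the pool; a group that undercuts its comparison group by less than 2 percent earns nothing, which is the design feature participants later questioned.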
While CMS’s research design for the PGP Demonstration was generally a reasonable approach, it also created some challenges for the participating physician groups. Challenges resulting from the demonstration design included delays in providing performance feedback and bonus payments and the use of a uniform 2 percent savings threshold that may have disadvantaged certain participants. Participants also raised other concerns about the demonstration design that stemmed from their local markets.

Overall, participants did not receive performance feedback or bonus payments for their PY1 efforts until after the beginning of the third performance year. Specifically, CMS provided participants with performance feedback and bonus payments regarding their efforts in PY1 in three phases beginning 12 months after the end of PY1. In April 2007, CMS provided each participant with a cost-savings summary report displaying its success in controlling Medicare expenditures for PY1 and the size of its cost-savings bonus pool. (See fig. 4.) A little over 2 months later, CMS provided each participant with a detailed settlement sheet displaying its individual cost-savings and quality-of-care bonuses for PY1. It was not until July 2007, 15 months after the end of PY1, that the two participants that earned a demonstration bonus for PY1—the Marshfield Clinic and the University of Michigan Faculty Group Practice—received their bonus payments of approximately $4.6 million and $2.8 million, respectively. CMS officials explained that generating feedback for the participating physician groups required 15 months because the demonstration design depended on the time-consuming process of retrospectively analyzing Medicare beneficiaries’ claims and chart-based data.
Specifically, CMS officials stated that the process of calculating participants’ cost-savings bonuses required at least 12 months after the conclusion of the first performance year—6 months to accrue claims data that were sufficiently complete and a second 6 months to analyze the data and calculate the bonus amounts. In addition, they stated that the calculation of the quality-of-care bonus required an additional 3 months to audit and reconcile chart-based data with claims-based data pertaining to the 10 diabetes quality-of-care measures. CMS officials stated that to calculate the cost-savings bonus they chose to use a claims file that was 98 percent complete because they wanted to ensure that the feedback they provided to participants was accurate. CMS officials also stated that the time frames for providing performance feedback and bonus payments to participants in PY1 will be the same for PY2 and PY3.

Officials from all 10 participating physician groups expressed concern about the length of time CMS took to provide them with performance feedback and bonus payments. Several participants stated that they had difficulty making adjustments to their programs and improving their overall performance because of delayed feedback and payments. One official from the Novant Medical Group stated that the 15-month time lag in receiving bonus payments would prevent the organization from reinvesting these resources into demonstration-related programs and improving them for subsequent performance years. In addition, two of the participants told us that other pay-for-performance programs they have participated in used payment methodologies that yielded more timely performance feedback or bonus payments. For example, officials from the University of Michigan Faculty Group Practice indicated that a pay-for-performance program sponsored by Blue Cross Blue Shield of Michigan provided them with feedback twice a year on meeting certain quality-of-care targets.
In response to these concerns, CMS has been working to provide each participating physician group with a quarterly Medicare patient claims data set related to beneficiaries they served. Initially, data sets were provided quarterly and focused on identifying patients with chronic conditions who had a hospital admission or emergency room visit. In July 2007, CMS provided each participating group with a data set on hospital inpatient, outpatient, and physician information consisting of the Medicare claims of beneficiaries likely to be included in the PY2 cost-savings calculations. In September 2007, CMS responded to participants’ requests for quarterly claims data that would allow them to assess their cost-savings performance during the performance year by providing them with a revised data set. CMS’s most recent data set included Medicare inpatient, outpatient, and physician claims data for beneficiaries likely to be included in the year-end cost-savings calculation, covering the first quarter of PY3. CMS noted that it will not provide equivalent information pertaining to comparison group beneficiaries because these data are too time-consuming to assemble.

While CMS’s provision of ongoing quarterly data sets to participants is timelier than the information provided before, most participants told us they do not have the necessary resources to analyze these data sets in a timely manner. This lack of timely, actionable data could hinder participants’ ability to adjust their programs on a more “real-time” basis. Officials from only 2 of the 10 participants told us they would be able to analyze and use the quarterly data sets CMS provided. Consequently, the data sets are not as useful as CMS-generated quarterly reports would be, similar to the final reports CMS provided on participants’ progress in achieving cost-savings and quality-of-care targets.
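Part of what analyzing such a data set involves can be sketched. For a claims-based quality measure, a quarterly estimate reduces to scanning the claims for qualifying procedure codes among assigned beneficiaries. The field names and procedure codes below are illustrative assumptions, not the actual layout of CMS’s data sets.

```python
# Hypothetical sketch of a claims-based quality estimate computed from a
# quarterly claims data set. Field names and the procedure codes below are
# illustrative only, not CMS's actual data layout.

EYE_EXAM_CODES = {"92002", "92014"}  # illustrative procedure codes

def eye_exam_rate(claims, diabetic_ids):
    """Share of assigned diabetic beneficiaries with at least one
    eye-exam claim in the quarterly data set."""
    seen = {claim["beneficiary_id"] for claim in claims
            if claim["procedure_code"] in EYE_EXAM_CODES}
    return len(seen & diabetic_ids) / len(diabetic_ids)

claims = [
    {"beneficiary_id": "B1", "procedure_code": "92014"},
    {"beneficiary_id": "B2", "procedure_code": "99213"},  # not an eye exam
    {"beneficiary_id": "B3", "procedure_code": "92002"},
]
print(eye_exam_rate(claims, {"B1", "B2", "B3", "B4"}))  # 2 of 4 -> 0.5
```

Even a calculation this simple presupposes staff who can join claims to an assigned-beneficiary roster each quarter, which is the capacity most participants said they lacked.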
CMS may not be able to provide quarterly reports that include comparison group trends or provide quality-of-care data that rely on chart-based data because of complexity and cost. However, CMS could provide participants with estimates of their growth in per-beneficiary expenditures each quarter, as well as changes in the profile of the beneficiaries who are likely to be assigned to the participants. CMS could also use more readily available claims data to provide quarterly estimates of participants’ progress in meeting the quality-of-care targets. In PY1, for example, that would have included reporting progress on 4 of the 10 claims-based quality targets on diabetes, such as whether a beneficiary received an eye exam.

CMS officials said they adopted the use of a uniform 2 percent threshold to ensure that any savings generated were in fact attributable to demonstration-related programs. Just as CMS used individual comparison groups for each participant, CMS could have used separate savings thresholds that more closely reflected the market dynamics of each participant’s overall area instead of a uniform savings threshold chosen based on historical data averaged across the 10 participants. However, use of different thresholds for each participant, according to CMS officials, would have been complex and would have generated additional administrative burden in processing bonus payments. Nevertheless, the use of a uniform savings threshold—2 percent—that all participating physician groups had to achieve before becoming eligible for a bonus payment may have made earning bonus payments more challenging for particular providers, specifically those with already low Medicare spending growth rates or those whose comparison group providers had low spending growth rates.
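How a uniform threshold interacts with comparison-group spending growth can be illustrated with hypothetical rates: the same participant spending growth clears the 2 percent bar against a high-growth comparison group but not against a low-growth one.

```python
# Numeric illustration of the uniform-threshold concern. The same 3 percent
# participant spending growth clears the 2 percent bar only when the
# comparison group's growth is high. All growth rates are hypothetical.

def clears_threshold(participant_growth, comparison_growth, threshold=0.02):
    """True if spending growth is more than `threshold` below the target
    implied by the comparison group's growth."""
    target = 1 + comparison_growth   # target spending per baseline dollar
    actual = 1 + participant_growth
    return (target - actual) / target > threshold

print(clears_threshold(0.03, 0.06))   # high-growth comparison group: True
print(clears_threshold(0.03, 0.045))  # low-growth comparison group: False
```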
Some participants argued that groups with low spending growth rates before the demonstration began may have had more difficulty generating annual Medicare savings of greater than 2 percent than those with high spending growth rates. Supporting this concern is the wide variation in the amount participants spent per beneficiary in the year prior to the demonstration, which ranged from $6,426 to $11,520, after adjusting for health status. In addition, participants with comparison groups that had relatively low spending growth may have faced more of a challenge in reducing their spending growth to more than 2 percent below that of their comparison groups than participants with comparison groups that had higher relative spending growth. In fact, both participants that received a bonus, the Marshfield Clinic and the University of Michigan Faculty Group Practice, were measured against comparison groups with high relative spending growth rates—the 2 highest among the 10 participants. While their success cannot necessarily be attributed to the high relative spending growth of their comparison groups, it may have had some effect.

In addition, several participating physician groups noted selected concerns particular to their local markets. For example, officials from one participating physician group expressed concern that CMS did not adequately adjust for the conversion of several hospitals in their market to critical access hospitals (CAH), which generally receive higher Medicare payments. Participant officials noted that their physician group treated more patients from these hospitals, which resulted in a higher spending trend and a lower likelihood of obtaining a cost-savings bonus. CMS stated that the agency will examine this issue as part of its evaluation at the conclusion of the PGP Demonstration.
Several participating physician groups were also concerned that their groups had more beneficiaries with specialist visits relative to their comparison groups. As a result, participants providing more specialty care may have had less control over the health outcomes of these beneficiaries. However, analyses conducted by CMS showed that these participants provided 80 percent or more—a predominant share—of the E&M services for most of the beneficiaries assigned to them in PY1, regardless of specialty, and had meaningful opportunities to influence beneficiary health care expenditures. CMS officials stated that they will continue examining this issue and other related issues brought to their attention by the participants as part of their evaluation of the demonstration.

The large size of the 10 participating physician groups compared with the majority of physician practices operating in the U.S. gave the participants certain size-related advantages, which might make broadening the payment approach used in the demonstration to more participants challenging. The 10 participating physician groups had significantly higher numbers of physicians, higher annual medical revenues, and higher numbers of supporting staff, and were more likely to be multispecialty practices than most practices in the U.S. Specifically, the participating physician groups generally had three unique size-derived advantages: institutional affiliations that allowed greater access to financial capital, access to and experience using EHR systems, and experience prior to the PGP Demonstration with pay-for-performance programs. While all the participating physician groups in the demonstration had 200 or more physicians in their practices, significantly less than 1 percent of the approximately 234,000 physician practices in the U.S. in 2005 had 151 or more physicians in their practice. (See fig. 5.) By contrast, practices with only 1 or 2 physicians comprised 83 percent of all practices.
Furthermore, while all 10 participants were multispecialty practices, 68 percent of all practices in the U.S. were single-specialty practices, which are generally smaller organizations. The 10 participating physician groups were also large compared with other physician practices in terms of annual medical revenues and nonphysician staff. Participants generated an average of $413 million in annual medical revenues in 2005 from patients treated by their group practice, far greater than the revenues generated by single-specialty practices in the U.S. Only about 1 percent of single-specialty practices had revenues greater than $50 million. In addition, these physician groups and their affiliated entities, such as hospitals, employed approximately 3,500 nonphysician FTEs, over 100 times more than the average single-specialty practice.

Their larger relative size gave the 10 physician groups participating in the PGP Demonstration three size-related advantages over smaller physician practices, which may have better prepared them to participate in the demonstration’s payment model and implement programs encouraged by the demonstration. First, participants typically had institutional affiliations with an integrated delivery system, a general hospital, or a health insurance entity. Specifically, 9 of the 10 participating physician groups were part of an integrated delivery system, 8 were affiliated with a general hospital, and 5 were affiliated with an entity that marketed a health insurance product. In contrast, a representative of the Medical Group Management Association estimated that approximately 15 percent of all physician practices in the U.S. have an affiliation with a general hospital. As a result of these affiliations, participating physician groups generally have greater access to the relatively large amounts of financial capital needed to initiate or expand programs.
On average, each participating physician group invested $489,354 to initiate and expand its demonstration-related programs and $1,265,897 in operating expenses for these programs in PY1. (See app. III.) For individual programs specifically, participants reported spending an average of $190,974 to initiate and $409,332 to operate case-management programs in PY1, almost twice the spending associated with any other type of program. (See table 5.) Several participants reported that the majority of their individual program expenditures were labor costs for care managers. Officials from several participating physician groups said that smaller practices might have difficulty implementing similar programs because they may not have the financial resources to do so.

The second advantage the 10 large participating physician groups had over smaller physician practices is a greater likelihood of having or acquiring EHR systems, which was essential to participants’ ability to gather data and track progress in meeting quality-of-care targets. Eight of the 10 participating physician groups had an EHR in place before the demonstration began, and the 2 other participants, out of necessity, developed alternative methods for gathering patient data electronically specifically for the demonstration, such as creating patient registries. In contrast, only an estimated 24 percent of all physician practices in the United States had either a full or partial EHR in 2005, and large practices were more likely to have EHRs than small practices. These systems enable physician group practices with multiple locations, such as the 10 participating physician groups, to share patient information and other administrative resources across a wide geographic area. Health care information technology experts believe that the primary reason smaller physician practices have not implemented EHRs is their cost, estimated at between $15,000 and $50,000 per physician.
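The cost barrier described above is partly a fixed-cost effect, which a small sketch can illustrate. The $40,000 fixed system cost below is a purely hypothetical figure chosen for illustration; $15,000 is the low end of the per-physician range cited above.

```python
# Hypothetical illustration of spreading a fixed EHR system cost across a
# group. The $40,000 fixed cost is an assumed figure; $15,000 is the low
# end of the per-physician cost range cited in the text.

def cost_per_physician(n_physicians, fixed_cost=40_000, per_physician=15_000):
    """Per-physician cost falls with group size when part of the system
    cost is fixed rather than scaling with each physician."""
    return fixed_cost / n_physicians + per_physician

print(f"2-physician practice: ${cost_per_physician(2):,.0f}")    # $35,000
print(f"200-physician group:  ${cost_per_physician(200):,.0f}")  # $15,200
```

Under these assumed figures, a 2-physician practice bears more than twice the per-physician cost of a 200-physician group, consistent with the experts’ observation that larger groups can spread fixed costs across more physicians.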
In addition, experts estimated that annual maintenance costs add between 15 and 25 percent of the initial per-physician investment. Furthermore, experts noted that small practices tend to pay more per physician for EHR systems than larger physician practices because larger physician practices are better able to spread the fixed costs of these systems across more physicians.

Finally, the third size-related advantage that most of the 10 participating physician groups had over smaller physician practices was the larger groups’ experience with other pay-for-performance systems prior to participating in the PGP Demonstration. Overall, 8 of the 10 participants had previous experience with pay-for-performance programs initiated by private or public sector organizations. This experience may have eased their adjustment to the PGP Demonstration and afforded them greater initial and overall success. For example, the University of Michigan Faculty Group Practice’s participation in a pay-for-performance system sponsored by Blue Cross Blue Shield of Michigan offered the physician group incentives to upgrade its chronic care infrastructure. In addition to experience with pay-for-performance programs generally, the majority of the 10 participating physician groups had experience with specific elements of pay-for-performance, such as physician bonus compensation methods and physician feedback processes. Representatives from some of the participating physician groups stated that their exposure to these various pay-for-performance elements prior to the PGP Demonstration may have enabled their organizations to adjust to the demonstration more rapidly.

The care coordination programs used by the participating physician groups show promise in achieving cost savings and improving patient outcomes for Medicare beneficiaries.
As a result of the demonstration, participating physician groups generated several different approaches for coordinating patient care across inpatient and physician settings for high-risk and high-cost patients, such as those with CHF, and better managing patients with diabetes, the quality-of-care target for the demonstration set by CMS. Additional years of the demonstration may be needed, however, for CMS to collect and analyze the information necessary to fully evaluate the effectiveness of these care coordination programs and their potential for cost savings in this demonstration. Only one participant had all of its care coordination programs operational for all 12 months of PY1, and participants did not receive timely feedback from CMS on their progress until PY3 had already begun. While CMS’s demonstration design was generally reasonable, the lengthy time CMS took to provide participating physician groups with performance feedback and bonus payments may limit more widespread use in other demonstrations, or use as an alternative method for paying physician groups in Medicare FFS. The lack of timely and actionable performance feedback also hinders participants’ ability to improve their programs in response to data. Providing performance feedback and bonus payments to participants more than 12 months after the end of the measurement period precludes physician groups from adjusting their program strategies on a more “real-time” basis. CMS has recently taken action to provide participants with quarterly claims data sets on their beneficiaries for PY3, but most participants indicated they would have difficulty analyzing such data to determine their progress in achieving cost-savings and quality-of-care targets.
Measuring participants’ performance using a comparison group where beneficiaries were retrospectively assigned after the end of a performance year, as used in this demonstration, may be impractical for more widespread use beyond this demonstration because physician groups cannot accurately predict on an ongoing basis whether they would be able to generate cost savings and receive bonus payments. In addition, the use of a uniform savings threshold amount, such as the 2 percent used in this demonstration, raises questions about whether this approach provides a disincentive for physician groups that have lower spending. Physician groups with fewer than 200 physicians—the vast majority of practices in the United States—may also have more difficulty than larger practices, such as the participants in this demonstration, absorbing the start-up and annual operating costs of the care coordination programs and implementing them without EHR systems that many groups believed were necessary to achieve cost savings while maintaining and improving the quality of care. As the PGP Demonstration continues, data will become available to CMS to determine how much influence the delay in the start-up of participating physician groups’ care coordination programs, its decision to use a uniform 2 percent threshold, and other factors may have had on participants’ ability to earn bonus payments. Consequently, it is understandably too early to determine the success of the PGP Demonstration, but evidence so far indicates that the care coordination programs initiated by the participants show promise, though the wider applicability of the payment methodology used in the demonstration may be more limited. We recommend that the Administrator of CMS provide participating physician groups with interim summary reports that estimate participants’ progress in achieving cost-savings and quality-of-care targets.
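The uniform savings threshold discussed above can be illustrated with a minimal sketch. Only the 2 percent threshold comes from this report; the 80 percent shared-savings fraction and the dollar figures are illustrative assumptions, and the sketch omits the demonstration's comparison groups, risk adjustment, and quality scoring:

```python
# Illustrative sketch of a uniform savings-threshold bonus rule like the
# 2 percent threshold discussed above. The threshold is from the report;
# the shared-savings fraction and spending figures are assumptions, and
# the demonstration's actual methodology (comparison groups, risk
# adjustment, quality scoring) is not modeled here.

THRESHOLD = 0.02        # savings must exceed 2% of target expenditures
SHARED_FRACTION = 0.80  # assumed share of excess savings paid as bonus

def bonus_pool(target_spending, actual_spending):
    savings = target_spending - actual_spending
    if savings <= THRESHOLD * target_spending:
        return 0.0      # below threshold: no bonus is paid
    return (savings - THRESHOLD * target_spending) * SHARED_FRACTION

# A group saving 1.5% earns nothing; one saving 4% shares the excess.
print(bonus_pool(100_000_000, 98_500_000))  # 1.5% savings: no bonus
print(bonus_pool(100_000_000, 96_000_000))  # 4% savings: share of the excess
```

The all-or-nothing cliff at the threshold is what underlies the disincentive concern: a group whose baseline spending is already low has less room to clear the 2 percent bar, even if its care is efficient.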
CMS reviewed a draft of this report and provided comments, which appear in appendix IV. CMS stated that it appreciated our thoughtful analysis and that our report would provide additional insight into performance year one results and complement its ongoing evaluation efforts. CMS agreed with the intent of our recommendation. CMS stated that it was developing a new quarterly report and refined data set to aid the physician groups in monitoring their performance, coordinating care, and improving quality. CMS stated that these reports would address a key limitation of existing quarterly data sets—that most physician groups do not have the necessary resources to analyze the data sets in a timely manner. We agree that this information would be helpful in improving performance feedback to physician groups, which would allow them to adjust their program strategies on a more “real-time” basis. As the demonstration continues, we encourage CMS to continue its efforts to improve performance feedback to the physician groups participating in the PGP demonstration. We are sending copies of this report to the Administrator of CMS. We will provide copies to others on request. In addition, this report is available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. 
For the first performance year, we examined three objectives: (1) what actions the participating physician groups took to achieve cost savings and meet the diabetes quality-of-care targets selected by the Centers for Medicare & Medicaid Services (CMS), (2) the extent to which the demonstration design was a reasonable approach to rewarding participating physician groups for cost savings and quality performance, and (3) potential challenges involved in broadening the payment approach used in the demonstration from the 10 large participating physician groups to other physician groups and nongroup practices. For each of our reporting objectives, we analyzed data we collected by written questionnaire and supplemented this information with interviews, in person and by telephone, and site visits to 5 of the 10 locations. We sent questionnaires to individuals CMS identified as points of contact at each of the participating physician groups. These individuals were often physicians or administrative staff tasked with overseeing their physician group’s demonstration efforts. All 10 participants completed and returned our questionnaire. The questionnaire contained three sections. The first section gathered standardized information about the practice’s general characteristics, including organizational structure, size, institutional affiliations, and the extent to which it used electronic health records systems. The second section gathered information about the programs participants used as a part of the Physician Group Practice (PGP) Demonstration. This section confirmed summary statements the individual practices described in their original applications to CMS or in other documents we obtained from CMS’s contractor, Research Triangle Institute (RTI), and provided an opportunity for the group to add new programs, if needed. Summary statements detailed the purpose, type, and characteristics of each program.
We also asked participants whether each of their programs was created specifically for the demonstration, was a preexisting program, or was an expansion of a preexisting program. In addition, we asked officials from these physician groups to identify the start-up costs and the annual operating costs of these programs. We also asked about the extent to which the physician groups believe smaller physician practices could implement similar programs. The third section of the questionnaire gathered information about how the participating physician groups compensated their physicians and how any demonstration bonus dollars they may earn would be distributed to individual physicians within the group. We also conducted site visits or telephone interviews with staff of all 10 participating physician groups. Five of these interviews were site visits, which we chose to reflect geographic diversity (region of country and urban/rural), size, and ownership status, among other factors. We conducted site visits to Geisinger Health System in Pennsylvania, Park Nicollet Health Services in Minnesota, Marshfield Clinic in Wisconsin, Billings Clinic in Montana, and the Everett Clinic in Washington. We collected the same information by telephone from the other participants in the demonstration. For these in-person or telephone interviews, we interviewed the demonstration project managers, physicians, care managers, finance officials, and information technology staff. To identify programs used by the participating physician groups to achieve cost savings and meet the CMS-set diabetes quality-of-care targets, we analyzed data we collected by written questionnaires and interviews, supplemented with information we obtained at site visits to 5 of the 10 participants. We included in our analysis new programs or expansions of existing programs created in response to the demonstration. 
To determine the extent to which the demonstration’s design was reasonable, we analyzed documents on the overall research design and bonus payment methodology obtained from CMS, analyzed data collected through the questionnaire, and drew on interviews we conducted with the participating physician groups, CMS, and its contractor RTI. We also reviewed and analyzed CMS-contracted documents on the design of the PGP Demonstration. To determine the potential challenges involved in broadening the payment approach used in the demonstration to other physician groups, we compared selected characteristics of the 10 participating physician groups to physician practices in the United States, using data primarily from the Medical Group Management Association’s (MGMA) annual survey. We also used data we collected from the questionnaire and from our interviews with officials from the physician group practices. We also interviewed experts on health information technology systems. We assessed the reliability of the information we obtained about participating physician group practices and the data we used to compare them to other physician groups in the U.S. in several ways. First, we checked the internal consistency of the information we obtained from the physician groups with information from RTI’s PGP Demonstration site visit reports and CMS’s 2006 Report to Congress on the PGP Demonstration. We verified the information we collected from the questionnaire with detailed follow-up interviews with officials from all 10 participants. Second, we spoke to the survey director for the 2005 MGMA survey to ensure that we used the information from that survey appropriately and that we understood any data limitations. In addition, we compared the data we used on U.S. group practices from the 2005 MGMA survey with data from the 2005 National Ambulatory Medical Care Survey and determined that the results were largely consistent and adequate for our purposes.
Third, on the basis of this comparison and discussions with experts knowledgeable about the data, we used broad categories to describe the data. We determined that the data used in our analysis were sufficiently reliable for the purposes of this report. We conducted our work from May 2006 through December 2007 in accordance with generally accepted government auditing standards.

Billings Clinic
Cancer treatment center—care coordination (disease management): Patients received coordinated cancer care including screening, prevention education, and infusion treatments.
Chronic obstructive pulmonary disease (COPD) management—care coordination (disease management): Care managers worked with COPD patients to avoid functional decline, offer preventive services such as immunizations, and treat complications early.
Community crisis center—care coordination (case management): High-risk patients with chronic psychiatric conditions were redirected from the emergency room to the psychiatric center for treatment.
A patient registry was used to identify diabetes patients and create patient report cards, which displayed treatment confirmation and treatment gaps.
Heart failure clinic—care coordination (disease management): Care managers monitored an automated system, which recorded patients’ answers to health status questions, and intervened if necessary.
Hospitalist program—care coordination (case management): Hospitalists worked with internists and family practitioners to improve the communication and care provided to patients at hospital discharge.
Electronic prescription system better reconciled patients’ medications between inpatient and outpatient settings to reduce adverse events.
Nursing home staff educated on consulting a Billings geriatrician before admitting patient to the hospital. To coordinate nursing home and hospital care, physician assistants were assigned to patients entering the hospital’s emergency room from local nursing homes.
Cancer care/palliative care—care coordination (case management): Care managers assisted cancer patients and families in coordinating, planning end-of-life care.
Health coaching—care coordination (case management): Care managers helped patients follow hospital post-discharge instructions, make physician appointments, and take the correct medications and dosages.
Physicians received feedback through intranet, met with management on identified issues.
Forms were placed on the front of patients’ medical charts to remind physicians of CAD quality-of-care measures.
A director position was created and charged with coordinating program interventions and with overseeing all care for Medicare patients.
Hypertension management program—care coordination (disease management): A patient registry was used to identify patients with hypertension and remind physicians to measure and document patients’ blood pressure.
Program name and program type
Palliative care program—care coordination (case management): Care managers provided patients and families end-of-life planning information on quality-of-life issues, alternative living options, and in-home support.
Patient care coordination—care coordination (case management): Care managers coordinated inpatient and outpatient care, helping to ensure proper discharge planning and schedule follow-up appointments with physicians.
COPD management—care coordination (disease management): Care managers worked with patients to monitor their health status and to encourage patients to visit their physicians when necessary.
Congestive heart failure management—care coordination (disease management): Care managers monitored patients’ health status through a voice recognition system and contacted patients when their health status became problematic.
Diabetes disease management—care coordination (disease management): Care managers worked with patients to educate them on managing diabetes.
Moderate risk case management—care coordination (case management): Care managers worked with patients to reduce risk factors associated with potential future hospitalizations.
Postacute case management—care coordination (case management): Care managers contacted patients after a hospital discharge to ensure that home health services were received, correct medications taken, etc.
Anticoagulation—care coordination (case management): Pharmacists and physicians worked with patients during a hospitalization to ensure prescriptions were correct and to assist in the transition from inpatient to outpatient care.
Cancer care management—care coordination (disease management): Care managers worked with colon, breast, and lung cancer patients and their physicians to ensure that evidence-based treatment guidelines are followed for psychological, nutritional, and palliative care.
Chronic care management—care coordination (case management): Care managers educated patients admitted to the hospital on disease self-management and proper medication use.
Congestive heart failure (CHF)—care coordination (disease management): Care managers helped patients understand and follow their post-discharge instructions.
Diabetes disease management—care coordination (disease management): Care managers worked with diabetes patients to coordinate care across providers, provide in-person patient education, and remind patients of appointments. Certified diabetes educators assisted patients in understanding diabetes self-management tools.
Heart smart program—care coordination (disease management): Care managers provided case management services to cardiac patients enrolled in home care services.
HomeMed program—care coordination (case management): Frail, elderly patients with multiple conditions receive a telemedicine device in their home that monitors vital signs.
Anticoagulation—care coordination (case management): Care managers worked with patients to ensure that dosages of the anticlotting drug Warfarin were adjusted properly. Also educated patients on recognizing factors that can influence anticoagulation such as diet, activity, other medications, and other illnesses.
CHF management—care coordination (disease management): Care managers called patients to check on health status, schedule physician visits, and answer questions.
Disease management, compass—care coordination (disease management): Care managers assisted patients with medication management, appointments, and physician referral.
Outpatient case management—care coordination (case management): Care managers helped high-risk, high-cost patients recently discharged from the hospital schedule follow-up physician visits and learn of available resources.
Physicians and staff were educated on how to talk to patients about palliative care.
Physicians and clinical staff were educated on evidence-based guidelines for chronic disease management.
Transition of care program—care coordination (case management): Nurses contacted patients after hospital discharges to ensure patients made follow-up physician appointments.
Diabetes care management—care coordination (disease management): Care managers educated newly diagnosed diabetes patients on diabetes self-management. During a 30-minute office visit, patients received an evaluation of needs, health education, diagnoses, prevention measures, and fitness counseling.
Heart failure care coordination—care coordination (disease management): Care managers monitored patients’ medication usage and dietary regimes through an interactive voice response system.
Patients telephoned a call center where nurses directed them to care based on their symptoms.
Case management systems—care coordination (case management): Care managers provided care to high-risk patients including coordination of inpatient and outpatient care services and guidance on following treatment plans.
Disease management—care coordination (disease management): Care managers managed, educated, and coached patients with chronic conditions.
CHF program (disease management): Nurses assessed the health status of heart failure patients. Low-income patients were assisted in obtaining free medications from pharmaceutical companies.
Complex care coordination—care coordination (case management): Care managers monitor patients with multiple chronic diseases and educate them on self-management.
Post-discharge transitional care—care coordination (case management): Care managers provided education, medication counseling, guidance on post-acute care treatment, and assistance with making and getting to post-discharge appointments.

Start-up investment expenditures for Integrated Resources for the Middlesex Area were not available.

In addition to the contact named above, Thomas Walke, Assistant Director; Jennie Apter; Kelly Barar; Zachary Gaumer; and Jennifer Rellick made key contributions to this report.
Congress mandated in 2000 that the Centers for Medicare & Medicaid Services (CMS) conduct the Physician Group Practice (PGP) Demonstration to test a hybrid payment methodology for physician groups that combines Medicare fee-for-service payments with new incentive payments. The 10 participants, with 200 or more physicians each, may earn annual bonus incentive payments by achieving cost savings and meeting quality targets set by CMS in the demonstration that began in April 2005. In July 2007, CMS reported that in the first performance year (PY1), 2 participants earned combined bonuses of approximately $7.4 million, and all 10 achieved most of the quality targets. Congress mandated that GAO evaluate the demonstration. GAO examined, for PY1, the programs used, whether the design was reasonable, and the potential challenges in broadening the payment approach used in the demonstration to other physician groups. To do so, GAO reviewed CMS documents, surveyed all 10 groups, and conducted interviews and site visits. All 10 participating physician groups implemented care coordination programs to generate cost savings for patients with certain conditions, such as congestive heart failure, and initiated processes to better identify and manage diabetes patients in PY1. However, only 2 of the 10 participants earned a bonus payment in PY1 for achieving cost savings and meeting diabetes quality-of-care targets. The remaining 8 participants met most of the quality targets, but did not achieve the required level of cost savings to earn a bonus. Many of the participants' care coordination programs were not in place for all of PY1. CMS's design for the PGP Demonstration was generally a reasonable approach for rewarding participating physician groups for achieving cost-savings and quality-of-care targets, but created challenges. 
CMS's decision to use comparison groups, adjust for Medicare beneficiaries' health status, and include a quality component in the design helped ensure that bonus payments were attributable to demonstration-specific programs and that cost-savings were not achieved at the expense of quality. However, the design created challenges. For example, neither bonuses nor performance feedback for PY1 were given to participants until after the third performance year had begun. CMS provides participants with quarterly claims data sets, but most participants report they do not have the resources to analyze these data sets and generate summary reports on their progress and areas for improvement. The large relative size of the 10 participating physician groups (all had 200 or more physicians) compared with most U.S. physician practices (less than 1 percent had more than 150 physicians) gave the participants certain size-related advantages that may make broadening the payment approach used in the demonstration to other physician groups and non-group practices challenging. Their larger size provided the participants with three unique size-related advantages: institutional affiliations that allowed greater access to financial capital, access to and experience with using electronic health records systems, and prior experience with pay-for-performance programs.
Grants constitute a form of federal assistance consisting of payments in cash or in kind for a specified purpose, allocated to a state or local government or to a nongovernmental recipient. By providing funding to state and local governments, grants are an important tool used by the federal government to achieve national objectives. When taken as a whole, federal grant programs are extremely diverse and complex. They vary widely in numerous ways, including size, the nature of the recipients, and the types of programs they fund. For example, grants range from relatively small dollar amounts, such as a research grant from the National Science Foundation for less than a couple of thousand dollars, to much larger dollar amounts, such as Medicaid grants to individual states, with outlays of about $265 billion in fiscal year 2013. Grant programs also vary in two important dimensions: (1) the amount of discretion given the recipient in determining how the funds will be used, and (2) the way they are allocated (or awarded). Grants generally are described as either block grants or categorical grants. Block grants are less restrictive and permit the use of funds for broader categories of activities, such as community development or public health. Block grants generally give greater discretion to recipients in identifying problems and designing programs to address those problems using grant funds. In contrast, categorical grants are the most restrictive, permitting funds to be used only for specific activities related to their purpose, such as for nutrition for the elderly. While the distinction between “block” and “categorical” grants is useful, it is important to recognize that in practice, the labels represent the ends of a continuum: in the middle range, the two types overlap considerably.
The degree of discretion represented by the identified types of grants enables a different balance to be struck between the interests of the federal government—that funds be used efficiently and effectively to meet specified national objectives—and the ability of grant recipients to use funds for those (approved) activities that best fit local priorities while also minimizing administrative burdens associated with accepting the grant. Over time, grant program funding has increased steadily, as Congress and federal grant-making agencies have created greater diversity and complexity in federal grants management. According to OMB, federal outlays for grants to state and local governments increased from $91.4 billion in fiscal year 1980 (about $224 billion in 2013 constant dollars) to about $546 billion in fiscal year 2013 (see figure 1). Consolidation rationale: Prior research by the former U.S. Advisory Commission on Intergovernmental Relations (ACIR) indicated that there are two instances where it may be suitable to consolidate categorical grant programs: when categorical programs are too small to have much impact or to be worth the cost of administration; and when multiple programs exist in functional areas (including health, education, and social services) that have a large number of programs, or are in functional areas (including justice, natural resources, and occupational health and safety) where there is fragmentation. The proliferation of grant programs can increase problems related to fragmentation, overlap, and duplication (see figure 2 for our definition of these terms, based on our related framework). As we have previously reported, program consolidations may help address these problems. Consolidations also have the potential to improve the effectiveness and performance of federal assistance programs by simplifying grant administration and facilitating coordination among grant recipients.
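The constant-dollar comparison above can be made explicit with a short arithmetic sketch. The dollar figures are from this report; the implied price deflator is simply derived from those figures and is not an official index:

```python
# Constant-dollar arithmetic behind the comparison above. The three dollar
# figures come from the report; the deflator is merely the ratio the report's
# own numbers imply (about 2.45), not an official price index.

outlays_1980_nominal = 91.4        # billions, fiscal year 1980
outlays_1980_constant2013 = 224.0  # billions, 2013 constant dollars
outlays_2013 = 546.0               # billions, fiscal year 2013

deflator = outlays_1980_constant2013 / outlays_1980_nominal
real_growth = outlays_2013 / outlays_1980_constant2013

print(f"implied 1980-to-2013 deflator: {deflator:.2f}")
print(f"real growth in grant outlays, 1980-2013: {real_growth:.2f}x")
```

In other words, after stripping out inflation, grant outlays still grew roughly two-and-a-half-fold in real terms over the period.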
Approaches for consolidation: For purposes of this report we classified consolidations as employing either a block grant approach or a hybrid approach. A block grant approach is generally broad in scope. It is intended to increase state and local flexibility and generally give recipients greater discretion to identify problems or to design programs addressing those problems using funding from the grant. Block grant funds are provided through less restrictive, broader categories of activities, such as community development or public health. Hybrid approaches can consolidate a number of narrower categorical programs while retaining strong standards and accountability for discrete federal performance goals. Hybrid approaches may also include Performance Partnerships, under which grantees have additional flexibility in using funds across multiple programs but are held accountable for meeting certain performance measures. They do so by giving grantees the flexibility to pool discretionary funds across multiple federal programs (or agencies) serving similar populations and communities, in exchange for greater accountability for results. The pooling of discretionary funds is also referred to as “blended” funding. Legislative authority: These grant program consolidations require legislative authorization. Federal agencies do not have inherent authority to consolidate grant programs or to enter into grant agreements without affirmative legislative authorization. In authorizing grant programs, federal laws identify the types of activities that can be funded and the purposes to be accomplished through the funding. Frequently, legislation establishing a grant program will define the program objectives and leave the administering agency to fill in the details by regulation.
Grant programs are typically subject to a wide range of accountability requirements under their authorizing legislation or appropriation and implementing regulations; these requirements help ensure that funding is spent for its intended purpose. In addition, grant programs are subject to cross-cutting requirements applicable to most assistance programs (see table 1 for more information). The Omnibus Budget Reconciliation Act of 1981 (OBRA) consolidated several dozen categorical grant programs (and three existing block grants) into nine block grants covering health and human services, education, community services and development, and energy assistance. These block grants from the 1980s were designed to be more detailed in their reporting and auditing provisions but had fewer kinds of planning and spending restrictions than earlier block grants. For example, OBRA provisions of general applicability impose reporting and auditing requirements, and require states to conduct public hearings as a prerequisite to receiving funds in any fiscal year. In addition, several of the OBRA programs include such items as limitations on allowable administrative expenses, prohibitions on the use of funds to purchase land or construct buildings, “maintenance of effort” provisions, and anti-discrimination provisions. Applicable restrictions are not limited to those contained in the program statute itself—other federal statutes applicable to the use of grant funds must also be followed. In turn, these additional restrictions may impose legal responsibilities on grantees. Thus, the block grant mechanism does not totally remove federal involvement, nor does it permit the circumvention of federal laws applicable to the use of grant funds. In this latter respect, a block grant is legally no different from a categorical grant. As a result of the 1981 block grants, the states’ role in grants administration changed in a number of ways.
As we previously reported, four themes capture these changes (see text box).

Lessons Learned from States’ Experience with the 1981 Block Grants

Fiscal strategies. States used block grants to adopt fiscal strategies in response to federal funding changes. These strategies included the ability to continue using prior categorical grant funds, to transfer funds among certain block grants, and to use their state funds to help offset federal cuts.

Programmatic discretion. Block grants reduced the federal role in several domestic assistance areas and gave states discretion to determine needs, set priorities, and fund activities within broadly defined areas. Prior involvement in the categorical grant programs provided an administrative framework for absorbing the new responsibilities.

Managerial improvements. An objective of block grants was to promote management improvements by reducing federal requirements. Many management improvements were reported, including reduced time and effort preparing applications and reports, changed or standardized administrative procedures, improved planning and budgeting practices, and better use of staff.

Accountability considerations. Monitoring the expenditure of block grant funds to achieve stated national objectives—a theme throughout the block grant reports—has been (and is) a central federal accountability function under past and present block grant legislation. Tracking federally supported activities, recipients, and dollars is a major evaluation function. Whether federal funds support activities that advance national objectives is historically of central interest to Congress.

The Moving Ahead for Progress in the 21st Century Act (MAP-21) restructured existing highway programs by eliminating or consolidating numerous programs and establishing a revised, core formula program structure.
As part of this major restructuring, a new program, the Transportation Alternatives (TA) program, was authorized under MAP-21 in 2012; in fiscal years 2013 and 2014 the TA program had authorizations of $809 million and $820 million, respectively. The TA program provides a single source of funding generally replacing separate funding for individual programs, including the former Transportation Enhancement (TE) activities (renamed the Transportation Alternatives activities), the Safe Routes to School (SRTS) Program, and the Recreational Trails Program (RTP). However, RTP continues to be separately funded through a set-aside requirement under the TA program. Not all formerly eligible activities may be funded through the TA program, though, as MAP-21 also eliminated some eligible activities formerly included under TE activities. Funds for the TA program—like funds for other federal-aid highway programs—are annually apportioned to the states through a formula. States have flexibilities regarding how they administer the programs: for example, each state develops its own process to solicit and select projects for funding. The TA program funds are awarded at the state or metropolitan planning organization (MPO) level through a competitive process, but the authorization does not establish specific standards or procedures for how this should be done. The TA program added new requirements that did not previously exist: for example, 50 percent of a state's apportionment must be suballocated based on population; states and MPOs must solicit and select projects through competitive processes; and only eligible entities may sponsor projects (MPOs and nonprofit entities are not eligible entities). ESG funds are allocated by formula to metropolitan cities, urban counties, territories, and states for select outreach, emergency shelter, homelessness prevention, rapid re-housing assistance, and homeless management information systems. 42 U.S.C. §§ 11371-11378.
HUD addresses homelessness in part by providing funding opportunities through the CoC program to nonprofit organizations and state and local governments that use those funds to quickly rehouse homeless individuals and families. The McKinney-Vento Homeless Assistance Act, originally passed in 1987, was the first major federal legislative response to homelessness. In 2009, Congress passed the HEARTH Act, which significantly amended Title IV of the McKinney-Vento Act. The amendments are intended to increase the efficiency and effectiveness of coordinated, community-based systems that provide housing and services to the homeless. Under the CoC program, most of the program components and eligible costs continue to be the same as those funded under the predecessor programs. However, they are consolidated so that applicants only need to apply for CoC program funds, rather than for one of three programs based on the type of assistance provided. Applications for CoC program funds are made by a collaborative applicant, which is an organization that has been designated by the Continuum of Care to submit a joint grant application to apply for CoC program funds on behalf of all applicants for funding in a community. CoC program grants are awarded competitively, and HUD awarded nearly $1.7 billion to projects in fiscal year 2012. Grant recipients are nonprofit organizations, states, local governments, and state and local government instrumentalities (such as public housing agencies) that are designated by the local Continuum of Care to apply for HUD's competitive CoC program grant funding. Continuums of Care are local groups of providers and key stakeholders in a geographic area that join together to design the housing and service system that will prevent and end homelessness within their geographic area.
The National Environmental Performance Partnership System (NEPPS) is a performance-based system of environmental protection designed to improve the efficiency and effectiveness of the partnership between states and the U.S. Environmental Protection Agency (EPA), as both share responsibility for protecting human health and the environment. According to EPA documents and state officials, NEPPS is designed to direct scarce public resources toward improving environmental results, allow states greater flexibility to achieve those results, and enhance accountability to the public and taxpayers. We have previously reported that EPA has had long-standing difficulties in establishing effective partnerships with the states, which generally have the lead responsibility in implementing environmental grant programs. To address these problems and to improve the effectiveness of program implementation, a state may receive funds in individual environmental program categorical grants; alternatively, a state (or interstate agency) may choose to combine funds from two or more environmental program grants into a single grant—a Performance Partnership Grant (PPG). PPG funds can be used for any activity that is eligible under at least one of 19 environmental programs. PPGs streamline administrative requirements, give states greater flexibility to direct resources to their most pressing environmental problems, and make it easier to fund efforts that cut across program boundaries. Performance Partnership Agreements (PPA), which are closely affiliated with PPGs, are designed to complement them; states are free to negotiate agreements (or grants) or to decline participation in NEPPS altogether. Our research identified few grant program consolidations over the last two decades. We identified a total of 15 consolidations from fiscal year 1990 through 2012 (see figure 3 and appendix I).
Most of these consolidations either combined a number of grant programs used for specific activities (such as Shelter Plus Care), known as categorical grants, into a broader categorical grant, such as the CoC program, or established a Performance Partnership, which offers additional flexibility in using funds across multiple programs but is held accountable for meeting certain performance measures. The grants we identified in figure 3 are likely not an exhaustive list, and determining a definitive number of grant program consolidations is difficult for two reasons. First, our research did not identify an authoritative, government-wide compendium or source that provides an accurate tally of enacted grant program consolidations. The inability to identify an authoritative, comprehensive source is consistent with prior work reporting on difficulties associated with determining a definitive number of federal grant programs. Efforts to accurately identify grant program consolidations are further complicated by the fact that different entities have counted grant programs differently for decades, rendering it difficult to get a count of the number of grant programs, let alone consolidations. Second, there is no commonly accepted definition of what constitutes a grant program consolidation. While we were not able to identify a definitive number of grant consolidations, we were able to generally identify two different approaches by which these grants were consolidated. The consolidations from 1990 to 2012 shifted away from the earlier block grant approach toward hybrid approaches.

Block grant approaches: Previously, Congress showed a strong interest in consolidating narrowly defined categorical grant programs intended for specific purposes into broader purpose block grants.
Consolidating closely related categorical programs into these broader purpose grants was intended to improve grant administration, which involves the federal government awarding a grant to a state or local government. While block grants generally delegate primary responsibility for monitoring and overseeing the planning, management, and implementation of activities financed with federal funds to state and local governments, they also can create—and have been designed to facilitate—some accountability for national goals and objectives. One such program—Temporary Assistance for Needy Families—consolidated a number of social service programs which provide families with assistance and related support services.

Hybrid approaches: In more recent years, we have noted a rise in hybrid approaches—possibly the result of concluding that the traditional devolution of responsibility found in a block grant may not be the most appropriate approach. Hybrid approaches can provide state and local governments with greater flexibility in using federal funds, in exchange for more rigorous accountability for results. Hybrid approaches vary in the degree to which programmatic flexibility is enabled—in order to balance between or among programs—and in the degree to which grant administration, reporting, and accountability requirements are changed. One grant program consolidation enacted in 2009 (HUD's CoC program) was created by consolidating multiple categorical grant programs. EPA's Performance Partnership System takes a slightly different approach. It provides states the opportunity to voluntarily enter into agreements with EPA to use funds from two or more environmental categorical grant programs in a more flexible and streamlined manner, while enabling states to delineate which environmental priorities (such as air, water, or waste) are most important to their needs.
Each of these hybrid approaches can strike a different balance between the interests of the federal grant-making agency—that funds be used efficiently and effectively to meet national objectives—and the interests of the recipient—that funds meet local priorities and that the administrative burdens associated with accepting the grant are minimized. Grants consolidated through hybrid approaches can provide opportunities to achieve improved program outcomes for both categorical and block grants. While block grant approaches to consolidation combine programs for broad purposes, hybrid approaches allow for consolidation by combining programs that have a narrower scope and may provide flexibilities. Hybrid approaches can improve the efficiency of grant administration and may reduce fragmentation, overlap, and duplication. For example, in 2012 we reported on Department of Justice (DOJ) grants that were consolidated using several different hybrid methods. These grants provided a range of program areas, such as crime prevention, law enforcement, and crime victim services. At the time, DOJ officials told us that the most comprehensive way to reduce overlap is by consolidating two programs with similar purposes into one and by creating unified management. The use of hybrid approaches to consolidate grants continues to evolve.
The Departments of Labor, Health and Human Services, and Education, and Related Agencies Appropriations Act for fiscal year 2014 provided authority for those entities receiving funds under the act to establish up to 10 Performance Partnership pilots designed to improve outcomes for disconnected youth. Under the pilot authority, a state, local, or tribal government may enter into a Performance Partnership agreement with a lead federal agency, which will allow the pooling of grant funds received under multiple federal programs as well as the additional waiver of requirements associated with the federal programs contributing funds. This pilot is a model designed to promote better education, employment, and other key outcomes for disconnected youth and to ease administrative burden. The legislation directs OMB to designate the lead federal agency that will enter into and administer the Performance Partnership agreement on behalf of that agency and the other participating federal agencies. OMB is coordinating across multiple federal agencies to facilitate the design and planning of the Disconnected Youth Performance Partnership pilot. OMB Memorandum M-11-21, Implementing the Presidential Memorandum "Administrative Flexibility, Lower Costs, and Better Results for State, Local, and Tribal Governments" (Washington, D.C.: Apr. 29, 2011), and OMB Memorandum M-13-17, Next Steps in the Evidence and Innovation Agenda (Washington, D.C.: July 26, 2013), encourage innovation and participation. Organizational culture and an organization's ability to perform a joint activity (one intended to produce greater public value than could be produced acting alone) can significantly affect interagency collaboration efforts. In addition, resources and the structure of an organization's decision-making process affect the design and implementation of hybrid consolidation initiatives.
OMB also noted the significant amount of effort needed from multiple stakeholders across different levels of government. In addition, our prior grant work concluded that administering similar programs in different agencies can create an environment in which programs may not serve the grant recipients as efficiently and effectively as possible.

Measuring and tracking outcomes: The use of Performance Partnerships, such as the pilot previously discussed, involves multiple funding streams across federal programs and agencies. Agencies, recipients, and subrecipients lose the ability to track program performance for individual categorical grants when multiple funding streams are combined in a Performance Partnership. Performance Partnership initiatives may put additional requirements on agencies to measure and track outcomes. For example, before states can enter into a Performance Partnership grant, they must first negotiate a work plan with EPA that includes expected outputs and outcomes. When designing hybrid consolidations, agencies can mitigate certain challenges because such consolidations provide an opportunity to consider new performance measures aligned with the intended consolidation outcome. We previously concluded that establishing measurements across agencies and federal programs may be difficult to accomplish due to challenges associated with coordination and with agencies reaching agreement on a common outcome; as a result, this may be a challenge in implementing the Disconnected Youth Performance Partnership pilot.

Managing administrative challenges: When DOJ used several hybrid approaches to consolidate grants in 2012, officials told us that the statutory creation of grant programs with similar purposes can create administrative challenges. They said that in many cases, DOJ must seek statutory authorization to discontinue or consolidate enacted programs that it believes may be overlapping.
In addition, EPA officials told us that for the NEPPS, categorical grant programs within a partnership have their own statutory and regulatory requirements, which may create an administrative challenge when states try to focus on achieving program results. State and local officials we interviewed are taking some actions to implement the selected grant program consolidations. In general, the actions taken are similar to those of the 1980s block grant era: in both instances, state and local governments in the three case study consolidations relied on existing grant management structures and established relationships to facilitate implementation of the selected grant program consolidations. These actions include relying on the existing grant management structure, identifying the existence of carry-over funds from predecessor grant programs, and integrating program requirement changes. In addition to these actions, state officials reported to us that the TA program and NEPPS case study grant program consolidations provided them with flexibility in administering the programs. For example, Delaware officials stated the TA program provides certain flexibilities, such as choosing which recreational trails projects to fund.

Grant management structure and established relationships: The HEARTH Act changed neither the eligible recipients nor the delivery structure for administering the CoC grant program. Funds for the CoC grant program, like those from the three predecessor homeless grant programs before it, are distributed to the same eligible recipients, as illustrated in table 2. Under the CoC program, an eligible applicant (known as the recipient to whom HUD awards the project and with whom HUD enters into a grant agreement for the project) must be designated by the Continuum of Care to apply for a grant from HUD on behalf of the Continuum that the collaborative applicant represents.
The Continuum of Care is responsible for developing a grant application through a collaborative process and approving the submission of grant applications to HUD, among other things. For example, the Homeless Planning Council of Delaware, a private nonprofit organization in one of our selected states, serves multiple roles in the state, including serving as the collaborative applicant for the statewide Continuum of Care and coordinating the submission, ranking, and application for federal CoC grant funding through HUD. Delaware state officials reported that because there was no change in the structure of the grant funding stream—meaning the state is not the primary recipient of the three predecessor homeless grant programs—the CoC program consolidation has had little or no impact on the state.

Existence of carry-over funds from predecessor grant programs: States reported to us that the budget impact of the TA program consolidation was delayed because they relied on carry-over funds from predecessor grant programs while these funds were still available. For example, both Massachusetts and Delaware are spending down federal SRTS funds authorized in the 2005 Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Until these previously apportioned SRTS funds are obligated or rescinded, they will continue to be available for their specified period of availability, under the same terms and conditions in effect prior to the effective date of MAP-21. Delaware state transportation officials told us at the time of our interview that funds apportioned under the predecessor program provide a 12- to 18-month period before the consolidation will affect them. In our prior work we found that most federal funds for highway projects require a 20 percent match from state and local governments.
In addition, grants with federal matching requirements may promote relatively more state and local spending than non-matching grants, thus reducing the likelihood that states will use the federal funds to replace, rather than supplement, their own spending. GAO, Federal Grants: Design Improvements Could Help Federal Resources Go Further, GAO/AIMD-97-7 (Washington, D.C.: Dec. 18, 1996) and Safe Routes to Schools: Progress in Implementing the Program, but a Comprehensive Plan to Evaluate Program Outcomes is Needed, GAO-08-789 (Washington, D.C.: July 31, 2008). Project sponsors are generally responsible for providing matching funds. State and local transportation officials in Delaware and Florida and local transportation officials in Colorado also stated that at the time the consolidation was enacted, decisions regarding federal transportation funding were already made (through the statewide transportation planning process) and for this reason, the TA program consolidation had little immediate impact on them. In our past work, we concluded that there are various ways to design grants to encourage performance accountability and that effective performance accountability provisions are of fundamental importance in determining if grant program goals are being met. Two factors that we have previously concluded are important for effectively reporting on grant performance are high-quality performance measures and performance data. Adding to the complexity of grants management, grant programs are typically subject to a wide range of accountability requirements (under their authorizing legislation or appropriation) and implementing regulations so that funding is spent for its intended purpose. Congress may also impose increased reporting and oversight requirements on grant-making agencies and recipients. In addition, grant programs are subject to crosscutting requirements applicable to most assistance programs.
Performance accountability challenges persist in the selected case studies, in part because statutory, regulatory, or administrative program requirements that were present in the predecessor programs continued unchanged in the consolidation implementation. These performance accountability challenges identified by federal, state, and local officials include lack of central oversight in the states, lack of or inaccurate performance data, and conflicting reporting requirements, as illustrated in table 3. OMB staff told us that they are identifying opportunities to design grant program consolidation authorizations with greater flexibility. Through its Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance), OMB has consolidated its grants management circulars in an effort to promote consistency among grantees and to reduce administrative burden on nonfederal entities, such as by eliminating unnecessary and duplicative requirements. OMB officials said this may provide greater flexibilities to address some of these accountability challenges. State officials reported that these identified accountability challenges existed prior to the consolidation and that it is unclear how (if at all) consolidations affect them. The experiences of state and local officials responsible for grant accountability and consolidation implementation suggest opportunities for Congress and the executive branch to improve the development and the implementation of accountability mechanisms when designing grant program consolidations. For example, building accountability into newly proposed grant program consolidations is an important but difficult task—one requiring trade-offs between federal and state control over program finances, activities, and administration.
Designing accountability provisions provides an opportunity to consider the potentially conflicting objectives of increasing state and local flexibility, attaining certain national objectives, and improving reporting, which together lead to better outcome and impact evaluations. Depending on their focus, evaluations may examine aspects of program consolidation (such as performance measurements or program reporting) or factors in that program's environment that may impede or contribute to the consolidation's success. Alternatively, evaluations may assess a consolidation's effects beyond its intended objectives, or may estimate what would have occurred in the absence of the consolidation, in order to assess the net impact. Striking a balance will inevitably involve philosophical questions about the proper roles and relationships among the levels of government in our federal system. We have previously cited examples of how fragmentation and overlap can lead to inefficient use of resources. We have previously concluded that consolidation may also provide an opportunity to reduce fragmentation, overlap, and duplication. For the three selected program consolidations we reviewed as case studies, federal, state, or local officials identified opportunities to reduce fragmentation, overlap, or duplication. For example, state or local officials in three states (Colorado, Florida, and Massachusetts) identified duplicative reporting requirements for homeless assistance grants. In addition, multiple homelessness grants are available from multiple federal agencies: each offers similar services to similar beneficiaries, and each has its own grant life cycle (i.e., separate grant awards, applications, and reporting requirements). This fragmentation of services and overlap is partly a result of a program's statute and partly a result of programs evolving to offer services that meet the varying needs of recipients.
More specifically, we have found the following: We have previously concluded that fragmentation, overlap, and duplication in the homeless grant programs may be reduced by grant program consolidation. By authorizing the consolidated CoC grant program, the HEARTH Act helped mitigate this duplication in HUD's homeless assistance grant programs but does not fully address it, because the underlying structure and operations of federal homeless services and grant programs for low-income people remain fragmented. This is because federal programs may not always include service providers with expertise and experience in addressing the needs of homeless people and because these programs may lack incentives that encourage mainstream service providers to serve this population. Also, the fragmented nature of federal mainstream programs can create barriers to providing a coordinated set of services that addresses the multiple needs of homeless people. HUD officials told us that they are working to decrease fragmentation in homelessness grant programs through regulations and by working with other federal stakeholders. GAO, Homelessness: Barriers to Using Mainstream Programs, GAO/RCED-00-184 (Washington, D.C.: July 6, 2000). The ad hoc nature and fragmentation of federal grant program authorization contributes to fragmentation, overlap, and duplication. In the case of the NEPPS program, EPA officials reported three areas where fragmentation, overlap, and duplication exist and interfere with the ability to achieve the NEPPS goals to promote efficiency and effectiveness. The three areas are: (1) multiple competing reporting guidance, regulations, and individual grant reporting requirements; (2) duplication in performance measurement requirements; and (3) grants with similar purposes administered by multiple federal and state agencies.
In some instances—such as when a state receives multiple categorical water grants—a PPG may provide opportunities for the state to reduce overlap and duplication by managing the funding streams available from multiple programs with the flexibilities afforded by the PPG. However, fragmentation may be exacerbated by the silo effect of program implementation across multiple federal agencies. According to findings from the case studies and our prior work, the key to any consolidation initiative is identifying and agreeing on the goals of the consolidation (regarding both grant administration and any changed programmatic outcomes) and designing and planning for successful implementation. Grant consolidations offer the opportunity to improve administration by enlarging the limits of narrowly targeted grants and by reducing fragmentation, overlap, and duplication. In addition, consolidations may be undertaken to improve the programmatic outcomes associated with national goals by designing the consolidation with consideration of the effects on other closely related grant programs excluded from the consolidation. This awareness of consolidation purposes can provide the federal, state, and local recipients of the consolidation the opportunity to develop an implementation plan against a realistic expectation of how the consolidation goals can be achieved. While grant program consolidation goals can be compromised by the complexity and number of grant programs affecting a national goal, and by the fragmented structure of authorizing new program initiatives across multiple congressional committees and subcommittees, OMB officials told us that building a shared understanding of the consolidation goals and outcomes among the affected federal, state, and local program officials can build a strategy for achieving the identified goals.
As federal policy makers consider future grant program design—including consolidating categorical grant programs or authorizing performance partnerships—it is important that leaders consider what the consolidation is trying to achieve and what its impact might be on simplifying grant administration and improving the effectiveness and performance of federal assistance programs. Without first identifying goals, a consolidation may not achieve the desired outcome, such as reducing the number of programs while still funding the same original program or activity. Implementing consolidations is not a simple endeavor and may require concentrated efforts of both leadership and employees to accomplish new organization goals. Whether consolidations originate from within an agency in response to changing conditions or from outside pressures, or from the most senior levels of government, it is essential that top government and agency leaders are committed to the consolidation and play a lead role in executing it. Lessons learned from prior work on mergers and transformations have shown that leadership must set the direction, pace, and tone, as well as provide a clear, consistent rationale to agency staff, in order to increase the likelihood of a successful consolidation. For example, states and EPA are jointly responsible for implementing NEPPS program requirements, conducting strategic planning, and setting priorities that identify optimal ways to leverage available federal resources alongside state resources. Federal and state officials involved in implementing EPA's PPAs or PPGs told us that strong senior leadership plays an important role in ensuring these responsibilities are met. Furthermore, they stated that broad adoption of PPGs requires effective coordination across programs and within EPA program offices, as well as ongoing senior leadership support. Our prior work has shown that communication plays a role in grant management reform.
We have concluded that communication is not just "pushing the message out," but should facilitate a two-way, honest exchange and allow for feedback from relevant stakeholders. For example, HUD officials responsible for managing the CoC program consolidation told us that establishing a help desk to answer questions from grant recipients and other community stakeholders enabled HUD to understand consolidation implementation challenges and to update guidance accordingly in real time. In our prior work, we concluded that given the potential benefits and costs of consolidation, it is imperative that Congress and the executive branch have information to help them effectively evaluate grant program consolidation proposals. Reporting requirements and congressional oversight can contribute to successful achievement of national goals in grants and to improving the efficiency and effectiveness of grant programs.

Legislative action: Our body of grant consolidation work has identified areas where Congress should consider taking legislative action to consolidate certain programs in the education, housing, welfare, and justice areas. When Congress is considering grant program consolidation proposals, it is important that the proposals be supported by analysis: agencies' responses to key questions could help inform such proposals (see text box). Such questions would not necessarily be exhaustive, nor would it be necessary to consider all questions in every proposal.

Key Questions to Consider When Evaluating Grant Consolidation Proposals

What are the goals of the consolidation? What opportunities will be addressed through the consolidation and what problems will be solved? What problems, if any, will be created?

Is there a way to track and monitor progress toward the short-term and long-term goals? Does the consolidation proposal include a feedback loop?
Does the feedback enable officials to identify and analyze the causes of the program outcomes and how this learning can be leveraged for continuous improvement?

What will be the likely costs and benefits of the consolidation? Are sufficiently reliable data available to support a business-case analysis or cost-benefit analysis? How can the up-front costs associated with the consolidation, if any, be funded?

Who are the consolidation stakeholders, and how will they be affected? How have the stakeholders been involved in the decision, and how have their views been considered? On balance, do stakeholders understand the rationale for consolidation?

If the proposed consolidation approach does not include all programs with similar activities or that address similar goals, how will the new structure interact with those programs not included in the consolidation?

To what extent do plans show change management practices will be used to implement the consolidation?

Evidence of thinking through some of these considerations may indicate that agency officials have developed a strong grant program consolidation proposal. Conversely, the absence of consideration of these questions could indicate that agency officials have not adequately planned their consolidation proposal.

Program consolidation evaluations: Executive branch agencies could conduct and report program evaluations that would assess how well federal programs are working and identify steps that are needed to improve them. Program evaluations typically examine processes, outcomes, impacts, or the cost effectiveness of federal programs. Evaluation can play a key role in program planning, management, and oversight by providing feedback on both program design and execution to program managers, legislative and executive branch policy officials, and the public.
However, as our prior work found, few executive branch agencies regularly conduct in-depth program evaluations to assess their programs’ impacts or to learn how to improve results. Program evaluations that address the key questions identified above are also important when programs are being considered for consolidation. Such analysis is likely to result in more effective consolidations and improved outcomes. Annually, through the President’s budget process and congressional budget justification, agencies have the opportunity to present Congress with the rationale for a program consolidation proposal, such as a business case analysis. The congressional budget justification can be used to support a grant program consolidation proposal. For example, the fiscal year 2012 DOJ congressional budget justification recognized the potential for consolidation by stating that “whenever possible, the President’s Budget proposes to consolidate existing programs into larger, more flexible programs that offer state, local, and tribal grantees greater flexibility in using grant funding and developing innovative approaches to their criminal justice needs.” In carrying out its mission, OMB provides general guidance to federal agencies, assesses the effectiveness of programs, and ensures that budget requests are consistent with regulations and presidential priorities. OMB, as the focal point for overall management in the executive branch, plays a key role in improving the performance of federal grant programs and has developed or contributed many tools to encourage improvements to federal grants and program performance. Therefore, in its capacity to provide agency guidance, OMB could help agencies identify consolidation opportunities and conduct program consolidation evaluations. OMB staff stated there is a need for improved guidance regarding grant program consolidation opportunities. 
Agencies and the Congress—as well as grantees—can benefit from guidance, which currently does not exist, to assist with identifying consolidation opportunities, particularly those requiring statutory changes, and with developing consolidation proposals. In conducting reviews of prior agency budget justifications, we have found opportunities for federal agencies to improve information that could aid congressional stakeholders in resource decision making and program oversight. For example, when proposing a grant program consolidation, all agencies could include a program consolidation evaluation (or business case) in their budget justification that, among other things, clearly identifies the consolidation goals, benefits, and stakeholders that will be affected by the consolidation (GAO-12-542). Some officials we interviewed were not in their positions before the predecessor programs had been consolidated and were therefore unable to identify a benefit from consolidating the predecessor categorical grant programs. EPA’s PPG has elements which illustrate the benefits that can occur when agencies must present facts and supporting details for a consolidation. For instance, states first elect whether to participate in a PPG: if they decide to do so, they identify environmental priorities and determine which eligible grant programs to potentially include in the PPA—which illustrates a degree of intentional and rational decision making that is consistent with elements of a program consolidation evaluation (or business case analysis). Finally, to design a PPG, states and EPA develop and negotiate a grant work plan consistent with applicable federal statutes; regulations; circulars; executive orders; and EPA delegations, approvals, or authorizations. The work plan documents how grantees intend to use federal funds and what they will accomplish. 
Details included in a work plan include the commitments for each component and a time frame for their accomplishment, a performance evaluation process and reporting schedule, and the roles and responsibilities of the state and EPA in carrying out the work plan commitments. The federal government plays an important role in delivering federal grants-in-aid to state and local governments. Numerous agencies administer fragmented programs, and recent assessments have shown that some programs overlap (that is, provide similar products or serve similar populations). Consolidating programs carries certain implications for recipients (e.g., changes in eligibility, processes, or procedures for eligible applicants), existing programs, personnel, and associated information systems. Consolidations can reduce, have no impact on, or even increase fragmentation, overlap, and duplication of related grant programs that are not included in the consolidation. In seeking to avoid increasing unnecessary fragmentation, overlap, and duplication, it is critical that federal policymakers consider what other programs or funding streams exist in related areas and what impact the consolidation is likely to have on them. Even if no changes in these other programs are undertaken, the design of the consolidation can affect the interaction with other programs and funding streams. As we have previously reported, consolidation initiatives can be complex, costly, and difficult to achieve. For this reason, a case-by-case analysis—one that evaluates the goals of the consolidation against the realistic possibility of the extent to which those goals would be achieved—is important to ensure effective stewardship of government resources in a constrained budget environment. Considering grant program consolidation design features and their implications can help policymakers ensure that accountability and information are adequately provided for, whatever type of consolidation approach is selected. 
Our findings suggest that the design of a grant program consolidation involves choosing among policy options that, in combination, establish the degree of flexibility afforded to states or localities; prioritize the relevance of performance objectives for grantee accountability; designate whether accountability for performance rests at the federal, state, or local level; and identify prospects for measuring performance through grantee reporting. The design may also allow for an evaluation of program consolidation performance, an overall assessment of whether the program works, and identification of adjustments that may improve the results. The availability of guidance on evaluating grant program consolidation opportunities could assist agencies’ efforts to identify such opportunities. Seeking out the interests and concerns of Congress and key program stakeholders in advance can help ensure that agency evaluations provide the information necessary for effective management and congressional oversight of program consolidations. The experiences of federal, state, and local officials suggest opportunities for Congress and the executive branch to improve grant program consolidation design. These opportunities include evaluating the delivery of services with a clear national objective across multiple agencies and leveraging lessons learned through feedback from implemented consolidations. Using those lessons can support continuous improvement in future grant program consolidations. To assist federal agencies seeking to streamline and improve the efficiency of grant programs and improve their outcomes, we recommend that the Director of the Office of Management and Budget (OMB) develop guidance that presents a range of potential consolidation methods, such as performance partnerships and other hybrid approaches. 
This guidance should assist agencies in identifying consolidation opportunities, including those that require statutory changes, and in developing sound consolidation proposals. The guidance should include questions agencies are expected to include in any consolidation proposals, such as the following: What are the goals of the consolidation? What opportunities will be addressed through the consolidation and what problems will be solved? What challenges, if any, will be created? Is there a way to track and monitor progress toward the short-term and long-term goals? Does the consolidation proposal include a feedback loop? Does the feedback enable officials to identify and analyze the causes of the program outcomes and how this learning can be leveraged for continuous improvement? Who are the consolidation stakeholders and how will they be affected? What will state, local, or nonprofit entities have to do differently? What statutory or regulatory changes are needed to support the consolidation? If the proposed consolidation approach does not include all programs with similar activities or that address similar goals, how will the new structure interact with those programs not included in the consolidation? We provided a draft of this report for review and comment to the Administrator of the Environmental Protection Agency, the Secretaries of the Departments of Housing and Urban Development and Transportation, and the Director of the Office of Management and Budget (OMB). OMB did not comment on the recommendation but provided technical comments, as did each of the other agencies. We incorporated these technical comments as appropriate. Additionally, we provided excerpts of the draft report to state and local officials in the four states we interviewed for this study and incorporated their technical comments as appropriate. We are sending this report to relevant agencies and congressional committees. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions or wish to discuss the material in this report further, please contact me at (202) 512-6806 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix III. Appendix I: Summary of Grant Program Consolidations, Fiscal Year 1990 through 2012 (Text for Interactive Fig. 3) Table 4 lists the 15 consolidated grants we identified during the time period of fiscal years 1990 through 2012, along with selected characteristics for each. There is no single resource that maintains a list of grant consolidations; therefore, this list may not be exhaustive. As part of our ongoing body of work associated with improving grant design and management across the federal government, we were asked to identify what federal grant programs have been consolidated in the past, to examine whether overlap and duplication may be reduced by consolidating several federal grant programs, and to identify outcomes that have occurred from prior consolidations in terms of savings or improved performance. To accomplish this, we addressed the following objectives: 1. Describe approaches taken to grant programs that have been consolidated from fiscal year 1990 through 2012. 2. Examine federal, state, and local actions taken to administer the selected case study consolidated grant programs. 3. Analyze the lessons learned for future consideration of grant program consolidations. For our first reporting objective, we conducted a literature review which consisted of reviewing the federal government websites USAspending.gov and Grants.gov, the Catalog of Federal Domestic Assistance (CFDA), and prior presidential budget submissions and conference reports. 
We also reviewed our prior grant management reports, along with reports from the Congressional Research Service, the Congressional Budget Office, and the Federal Funds Information for States (FFIS) database. We reviewed public laws to clarify when we had identified a potential consolidation through one of these sources. The list we developed represents our best effort to comprehensively identify all grants consolidated from fiscal year 1990 through fiscal year 2012. However, it is possible other grants were consolidated during this time that our methodology did not identify. We selected 1990 because federal, state, and local officials working in these program areas may be more aware of the consolidations that happened at or after that time than before it. During our review, we identified that some of the grants on our list were consolidated using hybrid approaches, and we interviewed officials at OMB to learn more about these approaches. In conducting this research, we intentionally excluded grant waivers: while we have previously reported on them, for this engagement we concluded they were generally for narrow administrative purposes or involved specific cases unrelated to our scope. Further, for our second and third objectives, we conducted three case study reviews in four states (and selected localities) to examine how selected consolidated grant programs were administered. The selected locations and grant program consolidations are not generalizable, but they provided important insights about grant consolidations. 
Selected grant consolidated programs: We selected two grant programs—the Department of Housing and Urban Development (HUD) Continuum of Care (CoC) program and the Department of Transportation (DOT) Transportation Alternatives (TA) program—in part because they were both consolidated in the past 6 years, thus increasing the likelihood of receiving sufficient program information (programs consolidated more than 6 years ago are less likely to have sufficient information, in part because there are fewer agency officials in the appropriate positions who oversaw the program). We selected the Environmental Protection Agency (EPA) National Environmental Performance Partnership System (NEPPS) because the approach used to consolidate it was different from the other approaches; in addition, at the time we selected our programs, it was the only established performance partnership we identified. For purposes of this report, we considered these three to be consolidated grant programs because they were identified as such through our literature review. We reviewed federal data for each program—such as state population and state grant award amounts—and considered other factors, such as geographic dispersion, agency documents, our prior reports, and likely travel costs. For each of the selected programs, we conducted either in-person or telephone interviews with state and local grant officials. To interview the federal officials involved with these programs, we developed a semi-structured data collection instrument to ensure uniform data collection. Furthermore, the Departments of Labor, Health and Human Services, and Education, and Related Agencies Appropriations Act for fiscal year 2014 provided authority for those entities receiving funds under the act to establish up to 10 Performance Partnership pilots designed to improve outcomes for disconnected youth. 
During our case study review, we learned that OMB is leading this initiative, and we met with OMB officials to better understand this pilot. Selected states and entities: After selecting the programs, we chose four states—and local entities in each—to conduct a case study review of how the programs were being implemented. The four states we selected were Colorado, Delaware, Florida, and Massachusetts. To select the states, we conducted interviews with subject matter specialists seeking location recommendations for each of the selected programs, including interviews with federal officials who oversee the grant programs and relevant national associations. We also reviewed federal agency documents and data for each program that contained either state-specific participation or funding information; reviewed prior reports we have issued on the selected programs; and considered other factors, such as states where we had previously conducted grant work. To select localities in each state, we used a similar method. Table 5 provides details about the states and selected entities included in our review. Lessons learned: For objective 3, to analyze lessons learned for future consideration of grant program consolidations, we reviewed existing literature pertaining to grants management and identified key questions to consider when evaluating grant program consolidations, as well as attributes for conducting a program consolidation evaluation. In addition, we interviewed agency officials and reviewed legislation and best practices developed in our prior reports and OMB memos. Susan J. Irving, (202) 512-6806 or irvings@gao.gov. In addition to the contact named above, Stanley J. Czerwinski, William J. Doherty (Assistant Director), and Sandra L. Beattie (Analyst-in-Charge) supervised this review and the development of the resulting report. Alicia P. Cackley, Steven L. Cohen, James Cook, Deirdre Duffy, Karin K. Fangman, Susan E. Iott, Donna Miller, Cynthia Saunders, and Paul J. 
Schmidt made key contributions to this report. 2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-14-343SP. Washington, D.C.: April 8, 2014. Grants Management: Improved Planning, Coordination, and Communication Needed to Strengthen Reform Effort. GAO-13-383. Washington, D.C.: May 23, 2013. Homelessness: Fragmentation and Overlap in Programs Highlight the Need to Identify, Assess, and Reduce Inefficiencies. GAO-12-491. Washington, D.C.: May 10, 2012. Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. GAO-12-1022. Washington, D.C.: September 27, 2012. Grants to State and Local Governments: An Overview of Federal Funding Levels and Selected Challenges. GAO-12-1016. Washington, D.C.: September 25, 2012. Justice Grant Programs: DOJ Should Do More to Reduce the Risk of Unnecessary Duplication and Enhance Program Assessment. GAO-12-517. Washington, D.C.: July 12, 2012. Streamlining Government: Questions to Consider When Evaluating Proposals to Consolidate Physical Infrastructure and Management Functions. GAO-12-542. Washington, D.C.: May 23, 2012. State and Local Governments: Fiscal Pressures Could Have Implications for Future Delivery of Intergovernmental Programs. GAO-10-899. Washington, D.C.: July 30, 2010. Grants Management: Enhancing Accountability Provisions Could Lead to Better Results. GAO-06-1046. Washington, D.C.: September 29, 2006. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003. Federal Assistance: Grant System Continues to Be Highly Fragmented. GAO-03-718T. Washington, D.C.: April 29, 2003. Homelessness: Consolidating HUD’s McKinney Programs. GAO/T-RCED-00-187. Washington, D.C.: May 23, 2000. Welfare Programs: Opportunities to Consolidate and Increase Program Efficiencies. GAO/HEHS-95-139. Washington, D.C.: May 31, 1995. Program Consolidation: Budgetary Implications and Other Issues. GAO/T-AIMD-95-145. Washington, D.C.: May 23, 1995. Block Grants: Characteristics, Experience, and Lessons Learned. GAO/HEHS-95-74. Washington, D.C.: February 9, 1995. State Rather Than Federal Policies Provided the Framework for Managing Block Grants. GAO/HRD-85-36. Washington, D.C.: March 15, 1985. Lessons Learned from Past Block Grants: Implications for Congressional Oversight. GAO/IEP-82-8. Washington, D.C.: September 23, 1982.
GAO has previously reported that consolidations may help increase the effectiveness and efficiency of government programs. GAO was asked to review grant program consolidations in regard to reducing overlap and duplication. This report (1) describes approaches taken to grant programs that have been consolidated from fiscal year 1990 through 2012, (2) examines federal, state, and local actions taken to administer the programs, and (3) analyzes lessons learned for future consideration of grant program consolidations. GAO reviewed literature on grant program consolidations. For this review, GAO selected three case study grant program consolidations: the Transportation Alternatives (TA) and Continuum of Care (CoC) programs and the National Environmental Performance Partnership System. GAO conducted interviews with state and local officials in Colorado, Delaware, Florida, and Massachusetts. GAO selected these states and localities based on several selection criteria, such as state participation and funding. The selected locations and grant program consolidations are not generalizable, but they provided important insights about grant consolidations. Consolidations from fiscal years 1990 through 2012. There is no authoritative, accurate tally of enacted grant program consolidations. In addition, there is no commonly accepted definition of what constitutes a grant program consolidation. From a variety of sources, GAO identified 15 grant program consolidations during this period. Most of these consolidations either combined a number of grant programs used for specific activities (such as Shelter Plus Care), known as categorical grants, into a broader categorical grant, such as the CoC program, or established a Performance Partnership, which offers additional flexibility in using funds across multiple programs but maintains accountability for meeting certain performance measures. Block grant approaches to consolidation prior to 1990 combined programs for broad purposes, such as work assistance. 
The more recent approaches, referred to as hybrid, often combine categorical grant programs and emphasize strong performance standards and accountability. Hybrid approaches can improve the efficiency of grant administration and may reduce fragmentation, overlap, and duplication. State and local government actions. State and local officials in the three case study consolidations GAO selected for review relied on existing grant management structures and established relationships to facilitate implementation of the grant program consolidations. In the Transportation Alternatives (TA) program, the impact of the consolidation was delayed by state and local officials' reliance on carryover funds from predecessor grant programs while these funds were still available. Officials reported both benefits, such as administrative flexibility, and challenges, such as lack of central oversight by states, lack of or inaccurate performance data, and conflicting reporting requirements. Lessons to consider. The key to any grant program consolidation initiative is identifying and agreeing on goals—such as improved grant administration and changed programmatic outcomes—and designing and planning for successful implementation, according to findings from the case studies and prior GAO reports. Grant consolidations offer the opportunity to improve grant administration by expanding the opportunities of narrowly targeted grants and by reducing fragmentation, overlap, and duplication. Consolidation initiatives that answer key questions can provide a data-driven consolidation rationale and show stakeholders that a range of alternatives has been considered. These evaluations should include responses to key questions such as the following: What are the goals of the consolidation? What opportunities will be addressed through the consolidation and what problems will be solved? 
GAO's prior work found that few executive branch agencies regularly conduct in-depth program evaluations to assess their programs' impact. The Office of Management and Budget (OMB), as the focal point for overall management in the executive branch, plays a key role in improving the performance of federal grant programs and has developed or contributed many tools to encourage improvements to federal grants and program performance. Agencies and the Congress—as well as grantees—can benefit from guidance, which currently does not exist, to assist with identifying consolidation opportunities, particularly those requiring statutory changes, and with developing consolidation proposals. GAO recommends that OMB develop guidance on identifying grant program consolidation opportunities and on the analysis needed to improve their outcomes. GAO incorporated technical comments from the Environmental Protection Agency, the Departments of Housing and Urban Development and Transportation, and OMB.
Selected provisions of federal law explicitly prohibit specific categories of drug offenders from receiving certain federal benefits for specified periods. Table 1 identifies key provisions of federal law that provide for denial of benefits specifically to drug offenders and the corresponding benefits that may or must be denied to drug offenders. Except for federal licenses, procurement contracts, and grants under the Denial of Federal Benefits Program, the benefits that may or must be denied are benefits that are generally provided to low-income individuals and families. TANF, food stamps, federally assisted housing, and Pell Grants are low-income programs. The Denial of Federal Benefits Program, established under Section 5301 of the Anti-Drug Abuse Act of 1988, as amended, provides that federal and state court judges may deny all or some of certain specified federal benefits to individuals convicted of drug trafficking or drug possession offenses involving controlled substances. Additional details on each of the programs may be found in appendices II, III, IV, and V. The provisions differ on key elements. For example, they establish different classes of drug offenders that may or must be denied benefits, they provide for different periods during which drug offenders are ineligible to receive a benefit, and they differ on whether benefits can be restored. Some of the provisions provide that drug offenders may become eligible for benefits upon completing a recognized drug treatment program. Provisions established by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), as amended, which govern the TANF and food stamp programs, provide that benefits must be denied to persons convicted of a state or federal felony drug offense that involves the possession, use, or distribution of a controlled substance and that occurred after August 22, 1996, the effective date of these provisions. 
Students become ineligible to receive federal postsecondary education benefits upon a conviction of either a misdemeanor or a felony controlled substance offense. Loss of federally assisted housing benefits can occur if individuals, relatives in their household, or guests under a tenant’s control engage in drug-related criminal activity, regardless of whether the activity resulted in a conviction. Local public housing authorities (PHAs), which administer federally assisted housing benefits, have discretion in determining the behaviors that could lead to loss of certain federal housing benefits. Under the Denial of Federal Benefits Program, judges in federal and state courts may deny a range of federal licenses, contracts, and grants to persons convicted of drug trafficking and drug possession offenses involving controlled substances. The period of ineligibility to receive benefits varies. Under PRWORA, as amended, unless states enact laws that exempt convicted drug offenders in their state from the federal ban, TANF and food stamp benefits are forfeited for life for those convicted of disqualifying drug offenses. State laws may also result in a shorter period of denial of these benefits. Students are disqualified from receiving federal postsecondary education benefits for varying periods, depending on the number and type of disqualifying drug offense convictions. A first conviction for possession of a controlled substance, for example, results in a 1-year period of ineligibility, while a first conviction for sale of a controlled substance results in a 2-year period of ineligibility. Upon subsequent convictions, the period of ineligibility can extend indefinitely. Federally assisted housing benefits may also be denied for varying periods of time, depending upon the number and types of drug-related criminal activities. The minimum loss of benefit is 3 years in certain circumstances, and the maximum is a lifetime ban. 
For example, for persons convicted of certain methamphetamine offenses, the ban is mandatory and for life. Under the Denial of Federal Benefits Program, the denial of certain other types of benefits by judges, such as federal grants and contracts, can range from 1 year to life, depending on the type of offense and number of convictions. In some cases, the period of benefit ineligibility may be shortened if offenders complete drug treatment. For example, students may have their postsecondary education benefits restored if they satisfactorily complete a drug treatment program that satisfies certain criteria and includes two unannounced drug tests. Under the Denial of Federal Benefits Program, denial of benefits penalties may, for example, be waived if a person successfully completes a drug treatment program. Other than offenders who were convicted of methamphetamine offenses, drug offenders who successfully complete drug treatment may receive federally assisted housing benefits prior to the end of their period of ineligibility. In states that have passed laws so specifying, drug offenders may shorten the period of ineligibility for TANF and food stamp benefits by completing drug treatment. (See table 2.) The legislative history of these provisions is silent as to whether they were intended to do more than provide for denying federal benefits to drug offenders, such as deterring drug offenders from committing future criminal acts. For example, our 1992 report indicated that in the floor debates over the Denial of Federal Benefits Program, some members of Congress expressed the opinion that even casual drug use should result in serious consequences, such as the loss of federal benefits. 
With respect to prohibiting drug offenders from public housing, congressional findings made in 1990 and amended in 1998 address the extent of drug-related criminal activities in public housing and the federal government’s duty to provide public and other federally assisted low-income housing that is decent, safe, and free from illegal drugs. TANF, food stamps, federally assisted housing, and Pell Grants are means-tested benefits. To receive the benefits, individuals must meet certain eligibility criteria. These criteria vary with the benefit. For instance, states determine maximum earned income limits for TANF, but to receive food stamps, the federal poverty guidelines are generally used in determining eligibility. To receive federally assisted housing, local area median income is used. Additionally, most adults eligible for TANF and some adults eligible for food stamps must meet specified work requirements to participate in the programs. Table 3 summarizes the general eligibility requirements for the federal benefits discussed in this report and identifies the federal, state, and local agencies responsible for administering the programs. Not all persons who meet the general eligibility requirements to receive federal benefits participate in the respective programs. Our recent study on programs that aim to support needy families and individuals shows that the portion of those eligible to receive benefits who actually enrolled varied among programs. Among families eligible to participate in TANF in 2001, between 46 percent and 50 percent were estimated to be participating in the program. For food stamps in 2001, between 46 percent and 48 percent of eligible households were estimated to participate in the program. 
For federally assisted housing, between 13 percent and 15 percent of eligible households in 1999 were estimated to be covered by the Housing Choice Voucher (HCV) Program, and between 7 percent and 9 percent of eligible households in 1999 were estimated to be covered by the Public Housing Program. Further, the Department of Education reports that among all applicants for federal postsecondary education assistance in academic year 2001-2002, about 77 percent of the applicants who were eligible to receive Pell Grants applied for and received them. Drug offenders would be directly affected by the federal provisions that allow for denial of low-income federal benefits when, apart from their disqualifying drug offense, they would have qualified to receive the benefits. For example, if a drug offender is not in a financially needy family and living with her dependent child, the drug offender would not be eligible for TANF benefits aside from the drug offense conviction. To be directly affected by the ban on food stamps, a drug offender would have had to meet income tests and work requirements, unless the work requirements are, under certain specified circumstances, identified as not applicable by federal food stamp laws; otherwise, the offender’s ineligibility to receive the benefit, as opposed to his drug offense, would disqualify him. Because the ban on the receipt of TANF and food stamps is for life, an offender who is not otherwise eligible to receive the benefits at one point in time might become otherwise eligible to receive the benefits at a later point in time and at that time be affected by the provisions of PRWORA. To be otherwise eligible to receive federal postsecondary education assistance, a person convicted of a disqualifying drug offense would, at a minimum, have to be enrolled in or accepted at an institution of higher education, as well as meet certain income tests. 
To be otherwise eligible for federally assisted housing benefits, a person would have to meet income tests.

We estimated that drug offenders constituted less than 0.5 percent, on average, of all applicants for federal postsecondary education assistance during recent years. In general, the educational attainment level of drug offenders is lower than that of the general population, and this lower level limits drug offenders’ eligibility for federal postsecondary assistance. Among selected large PHAs that reported denying applicants admission into public housing during 2003, less than 5 percent of applicants were denied admission because of drug-related criminal activities. PHAs have discretion in developing policies to deny offenders for drug-related criminal activities. Federal and state court sentencing judges were reported to have denied federal benefits to fewer than 600 convicted drug offenders in 2002 and 2003, or less than 0.2 percent of felony drug convictions on average.

According to Department of Education data on applicants for federal postsecondary education assistance for the academic years from 2001-2002 through 2003-2004, less than 0.5 percent, on average, of the roughly 11 million to 13 million applicants for assistance reported on their applications that they had a drug offense conviction that made them ineligible to receive education assistance in the year in which they applied. These numbers do not take into account persons who did not apply for federal postsecondary education assistance because they thought that their prior drug convictions would preclude them from receiving assistance, or applicants who falsified information about drug convictions.
Using these data and Department of Education data on applicants that received assistance for the academic years 2001-2002 through 2003-2004, we estimated that between 17,000 and 20,000 applicants per year would have been denied Pell Grants, and between 29,000 and 41,000 would have been denied student loans, if the applicants who self-certified to a disqualifying drug offense were eligible to receive the benefits in the same proportion as the other applicants. (See app. III for details on our methods of estimating these figures.)

In general, the educational attainment levels of persons convicted of drug offenses are lower than those of persons in the general population, which leaves proportionately fewer drug offenders eligible for these education benefits. Our analysis of data from the only national survey of adults on probation that also reports on their educational attainment indicates that among drug offenders on probation during 1995, less than half had completed high school or obtained a general equivalency degree (GED)—prerequisites for enrolling in a postsecondary institution. By comparison, according to a Bureau of Justice Statistics report, about 18 percent of adults in the general population had less than a high school degree. More recent data from the U.S. Sentencing Commission on roughly 26,000 drug offenders sentenced federally during 2003 indicate that half of them had less than a high school degree, about one-third had graduated from high school, and about 18 percent had at least some college. In addition, our analysis of BJS data on drug offenders released from prisons in 23 states during 2001 indicates that about 57 percent of these drug offenders had not completed high school by the time they were admitted into prison; about 36 percent had completed high school or obtained a GED as their highest level of education completed; and the remainder had completed some postsecondary education.
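The proportional-eligibility assumption behind these estimates can be illustrated with a short calculation. This is only a sketch: the function name and all input figures below are hypothetical placeholders, not the Department of Education data underlying the report’s estimates.

```python
# Sketch of the proportional-eligibility estimate: assume applicants who
# self-certified a disqualifying drug offense would have received the
# benefit at the same rate as all other applicants.
# All figures are illustrative placeholders, not actual ED data.

def estimate_denied(total_applicants, self_certified, benefits_awarded):
    """Estimate benefits denied to self-certifying applicants, assuming
    they would qualify at the same rate as the other applicants."""
    other_applicants = total_applicants - self_certified
    award_rate = benefits_awarded / other_applicants
    return round(self_certified * award_rate)

# Hypothetical academic year: 12 million applicants, 45,000 of whom
# self-certified a disqualifying offense; 4.5 million awards made to
# the remaining applicants.
denied = estimate_denied(12_000_000, 45_000, 4_500_000)
print(denied)
```

Applied to each academic year’s actual figures, this kind of calculation yields the per-year ranges reported above.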
We obtained data from 17 of the largest PHAs in the nation on the decisions that they made to deny federally assisted housing benefits to residents or applicants during 2003. Thirteen of the 17 PHAs reported data on both (1) the number of leases in the Public Housing Program units that they manage that ended during 2003 and (2) the number of leases that were terminated for reasons of drug-related criminal activities. These 13 PHAs reported terminating the leases of 520 tenants in the Public Housing Program because of drug-related criminal activities. The termination of a lease is the first step in evicting tenants from public housing. These terminations constituted less than 6 percent of the 9,249 leases that were terminated in these 13 PHAs during 2003. Among these PHAs, the percentage of leases terminated for reasons of drug-related criminal activities ranged from 0 percent to less than 40 percent. These PHAs also reported that the total number of lease terminations in 2003 and the volume of denials for drug-related activities were generally comparable with similar numbers for the 3 prior years. (See app. IV for data for each PHA that responded to our request for information.)

Fifteen PHAs acted on 29,459 applications for admission into the Public Housing Program during 2003. Among these applicants seeking residency, we estimated that less than 5 percent were denied admission because of their drug-related criminal activities. The PHAs also reported that they acted on similar numbers of applicants and made similar numbers of denial decisions in the prior 3 years. Table 4 shows the data on lease terminations and denials of admission in two federally assisted housing programs.
We also obtained and analyzed data from HUD on the number of evictions from and denials of admission into public housing during fiscal years 2002 and 2003 that occurred for reasons of criminal activity, of which drug-related criminal activity is a subset. More than 3,000 PHAs reported to HUD that in each of these years there were about 9,000 evictions and about 49,000 denials of admission for reasons related to criminal activity. As a percentage of units managed by these PHAs, evictions for reasons of criminal activity in each of these years amounted to less than 1 percent of units managed, and denials of admission amounted to about 4 percent of units managed. Evictions and denials for reasons of drug-related criminal activities would have to be equal to or, much more likely, less than these percentages.

On the basis of data that 9 PHAs were able to report about terminating participation in the HCV Program during 2003, we estimated that less than 2 percent of the decisions to terminate assistance in the HCV Program (of the roughly 12,700 such decisions) were for reasons of drug-related criminal activities. In addition, 9 PHAs reported that they acted on 21,996 applications for admission into the HCV Program and that less than 1.5 percent of applicants were denied admission for reasons of drug-related criminal activities.

Local PHAs that administer federally assisted housing benefits have discretion in determining whether current tenants or applicants for assistance have engaged in drug-related criminal activities that disqualify them from receiving housing benefits. HUD requires PHAs to develop guidelines for evicting individuals who engage in drug-related criminal activity from federally assisted housing and for denying such individuals admission. A November 2000 HUD study on the administration of the HCV Program described the variation in PHA policies on denying housing to persons who engaged in drug-related criminal activities.
HUD concluded that because of the policy differences, some PHAs could deny applicants who could be admitted by others. For example, some PHAs consider only convictions in determining whether applicants qualify for housing benefits, while others look at both arrests and convictions. Some look for a pattern of drug-related criminal behavior, while others look for evidence that any drug-related criminal activities occurred. In addition, among PHAs, the period of ineligibility for assistance arising from a prior eviction from federally assisted housing because of drug-related criminal activities ranged from 3 to 5 years. (See app. IV for a summary of selected PHA policies.)

Any imbalance between the supply of and demand for federally assisted housing may also affect whether drug offenders are denied access to this benefit. The stock of available units in the Public Housing Program is generally insufficient to meet demand, and PHAs may have long waiting lists, up to 10 years in some cases. Because PHAs generally place new applicants at the end of waiting lists, a drug offender who might be disqualified from federally assisted housing but who applies for housing assistance could go to the end of a PHA’s waiting list. Until that applicant moved to the top of the waiting list, the limited supply of federally assisted housing, and not necessarily a drug offense conviction, would effectively deny the applicant access to the benefit.

Between 1990 and the second quarter of 2004, BJA received reports from state and federal courts that 8,298 offenders were sanctioned under the Denial of Federal Benefits Program, an average of fewer than 600 offenders per year. The Denial of Federal Benefits Program provides judges with a sentencing option to deny federal benefits such as grants, contracts, and licenses.
About 62 percent of the cases reported to be sanctioned under the Denial of Federal Benefits Program occurred in federal courts, and the remaining 38 percent occurred in state courts. For recent years (2002 and 2003), BJA reported that fewer than 600 persons were denied federal benefits under the program. In 2002, there were more than 360,000 felony drug convictions nationwide; on average, less than 0.2 percent of these convicted drug felons were sanctioned under this program.

According to the BJA data, state court judges in 7 states have imposed the sanction, with judges in Texas accounting for 39 percent of all cases in which state court judges reportedly denied benefits under this program. Federal judges in judicial districts in 26 states had reportedly imposed denial of benefits sanctions, with federal judges in Texas accounting for 21 percent of those cases. This pattern of use, with substantially more use in some jurisdictions than in others, may indicate that there are drug offenders in some locations who could have received the sanction but did not. (See app. V for more information about this program.)

We previously reported on the relatively limited use of this sanction. We reported then that many offenders who could be denied access to federal benefits would also be sentenced to prison terms that exceed the benefit ineligibility period; therefore, upon release from prison, the offenders would not necessarily have benefits to lose. BJA officials reported that as of 2004, about 2,000 convicted drug offenders were still under sanction under the Denial of Federal Benefits Program, as the period of denial had expired for the other sanctioned offenders.
Most states have acted on the discretionary authority provided them under federal law to enact legislation that exempts some or all convicted drug felons in their states from the federal bans on the receipt of TANF and food stamps. That is, these state laws provide that convicted drug felons are not banned for life from receiving TANF and food stamps, provided they meet certain conditions. For states that had not modified the federal ban on TANF, we estimated that about 15 percent of all offenders and 27 percent of female offenders released from prison during 2001 would have met selected eligibility requirements and would therefore potentially be affected by the ban. We also estimated that among drug offenders released from prison during 2001 in states that had not modified the federal ban on food stamps, about a quarter were custodial parents whose reported income was below the federal poverty thresholds for food stamps. While food stamps are not limited to custodial parents, and the ban could affect other drug offenders, we limited our analysis to this group.

A total of 32 states have enacted laws that exempt all or some convicted drug felons from the federal ban on TANF benefits. Of these states, 9 have enacted laws that exempt all convicted drug felons from the federal ban, and these persons may receive TANF benefits provided that they meet their state’s general eligibility criteria. Another 23 states have passed laws that exempt some drug felons from the TANF ban.
These modifications allow some convicted drug felons to receive benefits and generally fall into three categories: (1) some states permit felons convicted of drug use or simple possession offenses to continue to receive TANF benefits but deny them to felons convicted of drug sales, distribution, or trafficking offenses; (2) some states allow convicted felons to receive TANF benefits only after a period of time has passed; and (3) some states allow convicted drug felons to receive TANF benefits conditioned upon their compliance with drug treatment, drug testing, or other requirements. (See app. II for the status of states’ exemptions to the TANF ban.)

Using state-level data on drug arrests as a proxy for state-level data on drug convictions, we estimated that the 9 states that completely opted out of the TANF ban and exempted all convicted drug felons accounted for about 10 percent of drug arrests nationally in 2002. The 23 states whose exemptions modified the TANF ban accounted for about 45 percent of drug arrests nationally. For these states with various exemptions, it is difficult to determine to which drug felons the ban might apply, as participation in the program is contingent upon a felon’s behavior (such as abiding by conditions of probation or parole supervision, or participating in drug treatment). Finally, the 18 states that fully implemented the TANF ban accounted for about 45 percent of all drug arrests nationwide.

Using Bureau of Justice Statistics survey data on the family and economic characteristics of drug offenders in prison and state-level data on the number of drug offenders released from prison during 2001 in 14 of the 18 states that fully implement the ban on TANF, we estimated that about 15 percent of those released from prison were parents of minor children, lived with their children, and had earned income below the maximum levels permitted by their states of residence.
That is, but for the ban, they may have been eligible to receive TANF benefits. We estimated that the majority of drug felons—who are single males and not custodial parents—did not meet these TANF eligibility requirements and would therefore not have qualified to receive the benefit even in the absence of the provisions of PRWORA. (See app. II for additional information about the methods used to estimate these quantities.)

Female drug offenders released from prison in the 14 states constituted about 13 percent of drug offenders released from prison in 2001. We estimated that between 25 percent and 28 percent of these female offenders were parents of minor children who lived with their children and whose incomes were below state thresholds, and therefore stood to lose TANF benefits. This percentage is about twice that for males: from the available data, we estimated that less than 15 percent of male prisoners were parents who lived with their children and had earned incomes that would qualify them to receive TANF benefits.

Other factors, which we could not take into account in estimating the percentages of drug offenders that could be eligible to receive TANF benefits, include citizenship status and total family income. Noncitizens with fewer than 5 years of residence in the United States are generally ineligible to receive TANF, and several of the states for which we obtained data on drug offenders released from prison have relatively large noncitizen populations. Therefore, some of the drug offenders that we estimated could have been eligible to receive TANF benefits might be ineligible noncitizens. In addition, the data that we used to estimate whether drug offenders met state income eligibility requirements included individual income rather than total family income.
It is possible that some prisoners would join family units with incomes above state TANF earned income limits and would thus be disqualified from benefits.

Among the drug offenders released from prison during 2001, the percentage that may be affected by the TANF ban at any time during their lifetimes would be greater than our estimate of those initially affected, because at a later date some of these offenders may come to meet the general eligibility criteria for receiving benefits. Thus, the percentage ever affected by the ban would grow over time.

Because of data availability, our estimates focus on convicted drug felons who were in prison. We do not have data to assess the effect of the TANF ban on drug felons who received probation or who were sentenced to time in local jails. According to BJS data, nationwide, about one-third of convicted drug felons are sentenced to probation. Moreover, our estimates apply to the states that fully implemented the ban on TANF. Because of complexities associated with state exemptions to the federal ban and the lack of sufficiently detailed data, we cannot provide an estimate of the percentage of convicted drug offenders who could be affected by the ban in the 23 states that modified the TANF ban. We note, however, that state modifications to the ban may allow convicted drug felons to participate in TANF if they abide by the conditions set in the state exemptions (such as abiding by conditions of parole or probation supervision or participating in drug treatment). In these states, unlike in the states that fully implement the federal ban, the post-conviction behavior of offenders helps determine whether they can receive the benefit. Other state modifications allow drug offenders to receive TANF benefits at some point in the future (such as after completing drug treatment or receiving a sufficient number of negative drug test results).
In states that require drug felons to wait before becoming eligible to participate in TANF, the federal ban is in effect until the waiting period ends. We would therefore expect estimates of the percentage affected during the waiting period to be similar to the estimates of the percentage affected in the states that fully implemented the federal ban.

At the time of our review, 15 states fully implemented the federal ban on food stamp benefits to convicted drug felons, and 35 states had passed laws to exempt all or some convicted drug felons in their own states from the ban. Of the 35 states with exemptions, 14 exempt all convicted drug felons from the food stamp ban, and 21 have laws that exempt some convicted drug felons provided that they meet certain conditions. In the 21 states that modified the food stamp ban, the modifications are similar to those for TANF and generally include (1) exempting persons convicted of drug possession from the ban, while retaining it for persons convicted of drug sales, distribution, or trafficking; (2) requiring a waiting period to pass before eligibility is restored; and (3) conditioning food stamp eligibility upon compliance with drug treatment, drug testing, or other conditions. (See app. II for the status of states’ exemptions to the food stamp ban.)

States’ decisions to exempt all or some convicted drug felons from the ban on food stamps affect the proportion of drug felons that can be affected by it. Using the state-level drug arrest data as a proxy for felony drug convictions (as we did for TANF), we find that the 15 states that fully implemented the ban on food stamps accounted for about 22 percent of all drug arrests nationally.
Using data from the BJS inmate survey on the family and economic characteristics of drug offenders in prison and state-level data on the number of drug offenders released from prison during 2001 in 12 of the 15 states that fully implemented the ban on food stamps, we estimated that about 23 percent of those released from prison were parents of minor children whose incomes were below the federal poverty guidelines. Among male drug offenders, we estimated that about 22 percent met these conditions; among female drug offenders, about 36 percent did. We are unable to provide an estimate of the percentage of drug offenders that could be eligible to receive food stamps as able-bodied adults without dependent children; according to USDA, in 2003, this class of recipients constituted about 2.5 percent of food stamp recipients nationwide. Food stamps are not limited to custodial parents, but we limited our assessment to custodial parents because of data limitations. Because the denial of food stamps is a lifetime ban, the number of drug offenders affected by the ban will increase over time as additional convicted drug felons are released from prison. Also, as with the TANF estimates, data limitations precluded our providing estimates for the felony drug offenders who were sentenced to probation in 2001 or for the states that modified the federal ban.

A complex array of federal law provisions allows or requires that federal benefits be denied to different classes of drug offenders. These laws also allow a good deal of discretion in their implementation, which can exempt certain drug offenders from their application. Our estimates indicate that denial-of-benefit laws potentially affect relatively small percentages of drug offenders, although the numbers potentially affected in given years may be large. There are a number of reasons why the percentages affected may be relatively small.
First, large numbers of drug offenders would not be eligible for these benefits regardless of their drug offender status. For example, those who lack a high school diploma are ineligible for postsecondary education loans or grants, and many do not meet eligibility requirements for TANF and food stamps. Also, in the case of TANF and food stamps, the majority of states have used their discretion to either partially or fully lift the ban on these benefits for certain drug offenders. It is important to note that although the numbers of drug offenders that could be affected by the TANF and food stamp bans are relatively small compared with the total number of drug offenders, our estimates suggest that the effects of the bans fall disproportionately on female offenders, because they are more likely to be custodial parents with low incomes and thus otherwise eligible for the benefits.

We provided a draft of this report to the Attorney General; the Secretaries of the Departments of Education, Agriculture, and Housing and Urban Development; the Assistant Secretary of the Administration for Children and Families; the Director of the Office of National Drug Control Policy; the Research Director of the United States Sentencing Commission; and the Director of the Administrative Office of the United States Courts for their review and comment. We received technical comments from the Departments of Justice, Agriculture, and Education, and from the United States Sentencing Commission and the Administrative Office of the United States Courts, which we incorporated into the report where appropriate.
We are sending copies of this report to the Attorney General; the Secretaries of the Departments of Education, Health and Human Services, Agriculture, and Housing and Urban Development; the Director of the Office of National Drug Control Policy; the Research Director of the United States Sentencing Commission; and the Director of the Administrative Office of the United States Courts. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or by e-mail at Ekstrandl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Federal law provides that certain drug offenders may or must be denied selected federal benefits, such as Temporary Assistance for Needy Families (TANF), food stamps, federally assisted housing, postsecondary education grants and loans, and certain federal contracts and licenses. Our objectives were to analyze and report on two interrelated questions about the number or percentage of drug offenders that could be affected by these provisions: (1) In specific years, how many drug offenders were estimated to be denied federal postsecondary education benefits, federally assisted housing, and selected grants, contracts, and licenses? (2) What factors affect whether drug offenders would have been eligible to receive TANF and food stamp benefits but for their drug offense convictions, and for a recent year, what percentage would have been eligible to receive these benefits? In addition, we were asked to address the impact of federal benefit denial laws on minorities and the long-term consequences of denying federal benefits on the drug offender population and their families.
Because of severe data limitations, we were unable to provide a detailed response to this matter. The final sections of appendixes II, III, IV, and V of this report discuss the data limitations that precluded us from estimating the impacts on minorities. Where information was available, we also identify in the appendixes some of the possible long-run consequences of denial of benefits.

We limited our analysis of federal laws to those that explicitly included provisions allowing for or requiring the denial of federal benefits to drug offenders. We excluded provisions that provide for denial of benefits to all offenders, of which drug offenders are a subset. We also excluded from our analysis provisions that applied to offenders only while they are incarcerated and provisions that applied to fugitive felons. Other federal laws relating to drug offenders but not within the scope of our review include provisions making a person ineligible for certain types of employment, denying the use of certain tax credits, and restricting the ability to conduct certain firearms transactions. Further, because of the limited data available on persons actually denied federal benefits, we provide rough estimates of either the number or the percentage of drug offenders affected by the relevant provisions. We provide an overview of these methodologies below, and we discuss the specifics of our methodologies for analyzing and estimating the impacts of denying specific federal benefits in appendixes II through V.

We assessed the reliability of the data that we used in preparing this report by, as appropriate, interviewing agency officials about their data, reviewing documentation about the data sets, and conducting electronic tests. We used only the portions of the data that we found to be sufficiently reliable for our purposes in this report.
We conducted our work primarily in Washington, D.C., at the headquarters of the five federal agencies responsible for administering the denial of federal benefit laws: the Departments of Justice (DOJ), Agriculture (USDA), Housing and Urban Development (HUD), Education (ED), and Health and Human Services (HHS). We also conducted work at the Office of National Drug Control Policy (ONDCP), which has responsibilities for national drug control policy; the Administrative Office of the United States Courts (AOUSC), which provides guidance to the courts for the implementation of statutory requirements; and the United States Sentencing Commission (USSC), which has responsibilities for monitoring federal sentencing outcomes.

To estimate how many or what percentage of drug offenders were reported to be denied federal postsecondary education and federally assisted housing benefits and certain grants and contracts under the Denial of Federal Benefits Program, we obtained and analyzed data from agency officials. From ED, we obtained data for several years on the number of applicants using the Free Application for Federal Student Aid (FAFSA), the number of these who reported a disqualifying drug offense conviction, the number eligible for Pell Grants, and the number receiving Pell Grants and student loans. We analyzed these data to generate our estimates of the number of applicants reporting disqualifying drug offenses who would otherwise have been eligible to receive Pell Grants and student loans. We also obtained Bureau of Justice Statistics (BJS) data on the educational attainment of a nationally representative sample of offenders on probation. We used these data, along with USSC data on sentenced drug offenders and BJS data on drug offenders released from prison, to assess the education levels of drug offenders.
To identify factors that could contribute to the number of drug offenders denied federal postsecondary education benefits, we interviewed officials at ED about federal regulations, guidance, and rulings pertaining to eligibility to receive benefits. Appendix III describes in more detail our methods for estimating the number of persons denied education benefits.

From a nonprobability sample of some of the largest public housing agencies (PHA) in the United States, we obtained information about actions reportedly taken in these PHAs during 2003 to deny persons federally assisted housing benefits for reasons of drug-related criminal activities. We selected large agencies because of the volume of actions that they take in a given year and to provide indications of the range of outcomes in PHAs in different settings with different populations. We also obtained and analyzed data from HUD on persons reportedly evicted from or denied admission into public housing for reasons of criminal activities. From selected PHAs, we obtained, analyzed, and compared termination and admissions policies and procedures used during 2003 or 2004 to deny federally assisted housing to persons involved in drug-related criminal activities. We also spoke with staff from selected research organizations, national associations, and PHAs to review the eligibility criteria to receive federal benefits. Appendix IV describes our methods for assessing denials of federally assisted housing.

From the Bureau of Justice Assistance (BJA), we obtained data on drug offenders reported to have been denied federal benefits under the Denial of Federal Benefits Program. We spoke with officials at BJA about the current operations of, and plans to enhance, the program, and we interviewed officials from USSC and AOUSC about the operations of this program.
We also interviewed ONDCP officials about the array of federal provisions that provide for denial of federal benefits and about federal programs that provide drug treatment for drug offenders. Appendix V describes our methodology for analyzing the Denial of Federal Benefits Program.

Because data on the actual number of persons denied TANF and food stamp benefits are limited, we developed estimates of the drug offenders who could be denied these benefits, that is, those whose characteristics would have qualified them to receive the benefits but for their drug offense convictions. To determine the extent to which drug offenders were otherwise qualified or eligible to receive federal benefits, we identified key elements of eligibility. We met with officials at the federal agencies responsible for administering TANF (the Department of Health and Human Services) and food stamps (the U.S. Department of Agriculture) to discuss issues related to eligibility to receive these benefits. We obtained and analyzed data from BJS on the characteristics of drug offenders in prison, and we applied this information to the number of drug offenders released from prison during 2001 in states that fully implemented the ban on TANF. To determine the current status of states that have opted out of or modified the federal provisions banning TANF and food stamp benefits to persons convicted of felony drug offenses, we reviewed state laws and contacted officials at USDA (which annually surveys states about the status of their laws in relation to the ban on food stamps) as well as officials in states that have modified the federal ban on TANF or food stamps to discuss the status of the exemptions under their state laws. Appendix II provides detailed information on our methodology for assessing the TANF and food stamp bans.
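The estimation approach just described, applying survey-derived characteristics of imprisoned drug offenders to state-level counts of offenders released in states that fully implemented a ban, can be sketched as follows. This is a simplified illustration only: the probabilities, state names, and release counts are invented placeholders, not BJS or state data.

```python
# Sketch of the ban-impact estimate: multiply state-level release counts
# by survey-derived shares (custodial parent of minor children; earned
# income below the eligibility limit, given custodial parenthood).
# All rates and counts below are hypothetical placeholders.

P_CUSTODIAL_PARENT = 0.30  # hypothetical: parent living with minor children
P_LOW_INCOME = 0.50        # hypothetical: income below limit, given parent

# Hypothetical release counts for states fully implementing the ban
releases_by_state = {"State A": 4_000, "State B": 2_500, "State C": 1_500}

total_released = sum(releases_by_state.values())
potentially_affected = total_released * P_CUSTODIAL_PARENT * P_LOW_INCOME
share_affected = potentially_affected / total_released

print(f"{potentially_affected:.0f} of {total_released} released offenders "
      f"({share_affected:.0%}) potentially affected by the ban")
```

With the report's actual survey shares and release counts substituted in, this style of calculation produces the roughly 15 percent (TANF) and 23 percent (food stamps) figures given earlier.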
From the following sources, we obtained, assessed the reliability of, and analyzed data related to denial of federal benefits that we used in developing estimates of the impacts of the federal provisions. To assess the reliability of the data, as needed, we interviewed agency officials about the data systems, reviewed relevant documentation, and conducted electronic tests of the data. We determined that the data were sufficiently reliable for the purposes of this report. The data sources included the following:

Bureau of Justice Assistance: Data on the number of drug offenders reported to BJA by state and federal courts as having been denied federal benefits under the Denial of Federal Benefits Program from 1991 to 2004.

Bureau of Justice Statistics, Survey of Inmates of State Correctional Facilities in 1997: We used these data to estimate the number of convicted drug felons in prison that were parents of minor children, lived with their children prior to their incarceration, and had incomes within state earned income limits. We used these estimates to assess the impacts of the provisions allowing for the denial of TANF and food stamp benefits.

National Corrections Reporting Program, 2001: We used these data to obtain counts of the number of drug offenders released from prison during 2001 in selected states. We also used these data to provide estimates of the level of education completed by drug offenders released from prison during 2001 and in developing our estimates of the impacts of the TANF and food stamp provisions.

Survey of Adults on Probation, 1995: We used these data, from the only national source of data on the characteristics of adults on probation of which BJS is aware, to learn about the education levels of drug offenders on probation and in developing estimates of the impact of denying federal postsecondary education assistance.
Selected state corrections and court officials: For selected states that fully implemented the ban on TANF and food stamps, we obtained data on the numbers of convicted drug felons released from prison during 2001. We used these data in developing estimates of the impacts of the TANF and food stamp bans.

Department of Housing and Urban Development: We obtained and analyzed data from HUD’s Public Housing Assessment System (PHAS) and Management Operations Certification Assessment System (MASS) for fiscal years 2002 and 2003 on the number of public housing residents evicted because of criminal activities (of which drug-related criminal activities form a subset) and on the numbers denied admission into the Public Housing Program for reasons of criminal activities.

Seventeen of the 40 largest PHAs in the nation: We requested information from the 40 largest PHAs about the number of decisions they made during 2003 to deny federally assisted housing to tenants and applicants for reasons of drug-related criminal activities, and we obtained data from 17 of these PHAs. Not all 17 PHAs provided responses to all of our questions; therefore, we reported data only on the PHAs that were able to provide data relevant to the question under review. We selected these PHAs from among the 1,531 PHAs that managed both Public Housing and Housing Choice Voucher (HCV) programs as of August 31, 2004. We asked them for information about denials of federally assisted housing for reasons of drug-related criminal activities, and we also asked them to provide these data based on the race of tenants and applicants, because HUD does not collect this information. We used these data in describing the number of persons denied federally assisted housing and in providing information about the race of persons denied federal housing benefits.
Department of Education: We obtained and analyzed data on the number of students applying for federal postsecondary assistance for academic years 2001-2002, 2002-2003, and 2003-2004. In addition, we obtained data on the percentage of these applicants who were eligible to receive Pell Grants and, of these, the percentage that received them, and we also obtained data on the percentage of applicants who received student loans. We used these data in developing estimates of the impact of the denial of federal postsecondary education assistance.

In addition, we used published statistical reports from various agencies, such as BJS; Uniform Crime Reports data on drug abuse violation arrests by state; Department of Health and Human Services reports on the characteristics of TANF recipients; USDA reports on food stamp recipients; and the United States Sentencing Commission’s 2003 Sourcebook of Federal Sentencing Statistics. We were asked to address the impacts of the federal benefit denial laws on racial minorities and the long-term impacts of denying federal benefits on individuals that were denied, their families, and their communities. Although very limited, the available information on these issues is summarized in appendices II through V. To determine the extent of data on the race of persons affected by the denial of federal benefit provisions, we asked the officials that we interviewed about their knowledge of data on the race of persons denied federal benefits. We also spoke with researchers and officials at various organizations about their knowledge of available data. To address the limitations of HUD data on persons denied federally assisted housing because of drug-related criminal activities, we requested, obtained, and analyzed data provided by 17 of the largest PHAs in the nation on the race of persons denied housing for reasons of drug-related criminal activity.
To determine the current research and data on the potential economic and social impacts of the loss of federal benefits on individuals, families, and communities, we conducted literature searches to identify and review existing studies that have measured the impacts of the denial of federal benefits on drug offenders and families. We interviewed experts to understand how the incentives for drug treatment, as provided in the laws that deny benefits, are likely to affect drug addicts’ behavior, and we obtained their views regarding the effects that incarceration and drug convictions might have on a drug felon’s potential employment and earnings. We conducted our work from March 2004 to July 2005 in accordance with generally accepted government auditing standards.

This appendix describes the legal and administrative framework for denying TANF and food stamp benefits to convicted drug felons and our methods for estimating the percentage of convicted drug offenders that would have been eligible to receive TANF and food stamps but for their drug felony convictions. The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996 provides that persons convicted of certain drug felony offenses are banned for life from receiving TANF and food stamp benefits. Specifically, Section 115 of PRWORA, as amended, provides that an individual convicted (under federal or state law) of any offense that is classified as a felony by the law of the jurisdiction involved and that has as an element the possession, use, or distribution of a controlled substance shall not be eligible to receive TANF assistance or food stamp benefits. The prohibition applies if the conviction is for conduct occurring after August 22, 1996.
TANF assistance includes benefits designed to meet a needy family’s ongoing, basic needs (for example, for food, clothing, shelter, utilities, household goods, and general incidental expenses) and includes cash payments, vouchers, and other forms of benefits. TANF assistance excludes short-term episodic benefits that are not intended to meet recurrent or ongoing needs and that do not extend beyond 4 months. The federal prohibition on TANF assistance to convicted drug felons does not apply to TANF “nonassistance” benefits, which include benefits meant to address an individual’s nonrecurring emergency needs. TANF nonassistance can include drug treatment, job training, emergency Medicaid medical services, emergency disaster relief, prenatal care, and certain public health assistance. The Food Stamp Program provides benefits in the form of electronic benefit cards, which can be used like cash for food products at most grocery stores. Eligible households receive a monthly allotment of food stamps based on the Thrifty Food Plan, a low-cost model diet plan based upon the National Academy of Sciences’ Recommended Dietary Allowances. For persons between the ages of 18 and 50 who are viewed as fit to work and who are not the guardians of dependent children, PRWORA provides for a work requirement or a time limit for receiving food stamp benefits. The provision is known as the Able-Bodied Adults without Dependents (ABAWD) provision. ABAWD participants in the Food Stamp Program are limited to 3 months of benefits in a 3-year period unless they meet certain criteria. PRWORA provides that states may enact a legislative exemption removing or limiting the classes of convicted drug felons that are otherwise affected by the federal ban on TANF and food stamps. State laws providing for exemptions need to have been enacted after August 22, 1996. The Office of the Administration for Children and Families (ACF) within the U.S.
Department of Health and Human Services provides federal oversight of the TANF program. TANF is funded by both federal block grants and state funds, but states are responsible for determining benefit levels and categories of families that are eligible to receive benefits. State eligibility requirements establish earned income limits and other rules, and these requirements may vary widely among the states. The U.S. Department of Agriculture’s Food and Nutrition Service (FNS) provides oversight for the Food Stamp Program, which is the primary federal food assistance program that provides support to needy households and to those making the transition from welfare to work. Eligibility for participation is based on the Office of Management and Budget federal poverty guidelines for households. Most households must meet gross and net income tests unless all members are receiving TANF or selected other forms of assistance. Gross income cannot exceed 130 percent of the federal poverty guideline (or about $1,313 per month for a family of two and $1,654 per month for a family of three in 2004), and net income cannot exceed 100 percent of the poverty guideline (or about $1,010 per month for a family of two and $1,272 per month for a family of three in 2004). “Gross income” means a household’s total, nonexcluded income before any deductions have been made. “Net income” means gross income minus allowable deductions. Allowable deductions include a 20 percent deduction from earned income, dependent child care deductions, and medical expenses, among others. According to officials at ACF and FNS, states may implement the provisions to deny convicted drug felons TANF and food stamps in a variety of ways. Some states administer the denial of benefits by requiring applicants to admit to disqualifying felony drug offense convictions at the time that they apply for benefits.
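For illustration, the gross and net income screens described above can be sketched in a few lines of Python. The limits are the 2004 figures cited in the text; the deduction amounts in the example are hypothetical, and actual program rules involve additional deductions and exemptions.

```python
# Sketch of the food stamp gross and net income tests described above.
# Limits are the 2004 monthly figures cited in the text; deduction
# values passed to the function are illustrative assumptions.

# 130 percent and 100 percent of the 2004 federal poverty guideline,
# by household size (dollars per month)
GROSS_LIMIT = {2: 1313, 3: 1654}
NET_LIMIT = {2: 1010, 3: 1272}

def passes_income_tests(household_size, earned_income, other_income=0.0,
                        dependent_care=0.0, medical=0.0):
    """Apply the gross test, then the net test, for a household."""
    gross = earned_income + other_income
    if gross > GROSS_LIMIT[household_size]:
        return False
    # Net income: gross minus allowable deductions, including the
    # 20 percent earned income deduction mentioned in the text.
    net = gross - 0.20 * earned_income - dependent_care - medical
    return net <= NET_LIMIT[household_size]

# A three-person household earning $1,500/month passes the gross test
# (1,500 <= 1,654), and its net income of 1,500 - 300 = 1,200 is
# within the $1,272 net limit.
print(passes_income_tests(3, 1500))  # True
```

A two-person household with gross income above $1,313 would fail the gross test outright, regardless of deductions.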
Also according to agency officials, neither agency regularly collects and assesses data on the number of persons that self-certify disqualifying drug offenses. We reviewed documentation provided by USDA, and for states that exempted some or all convicted drug felons from the federal ban on food stamps, we reviewed the states’ laws pertaining to the exemption and contacted officials to determine the status of their states’ exemptions to the federal bans on TANF and food stamps. Table 5 shows these statuses and, for states that have enacted exemptions, provides citations to the state laws. There are several general types of modifications to the federal ban on TANF and food stamps among the states that have modified the ban. These modifications may include one or more of the following elements: (1) removing from the ban drug felons convicted for drug use or simple possession, but implementing the ban for drug sellers or traffickers (e.g., possession with intent to distribute offenses); (2) restoring benefits to drug felons complying with drug treatment program requirements; (3) restoring benefits so long as drug felons have negative drug test results over some period of time; and (4) restoring benefits to drug felons after various waiting periods, such as a number of years after conviction or release from prison. State modifications may also include other conditions. For example, Michigan allows convicted drug felons to receive benefits provided they do not violate the terms of their parole or probation and other conditions are met. Tables 6 and 7 show the types of modifications that states have adopted for the TANF and food stamp bans, respectively. These tables present general categories of different modifications, not an exhaustive listing of all specific requirements. For more detail, consult the statutes listed in table 5.
Estimating the Percentage of Drug Arrests within States That Implement, Modify, or Opt Out of the Bans on TANF and Food Stamps

To obtain a general assessment of the degree to which state decisions to modify or opt out of the federal bans on TANF and food stamps exempt drug felons from the federal ban, we estimated the percentage of drug arrests that occurred within three groupings of states: (1) those that fully implement the bans, (2) those that have modified them, and (3) those that have completely opted out of the bans. We used drug arrests as a proxy for drug convictions, as state-level data on the number of drug felony convictions are not available. We analyzed data from the 2002 Crime in the United States: Uniform Crime Reports on the number of persons arrested for drug offenses in each of the 50 states. Table 8 reports the relative distributions of drug arrests for the states falling into each category for the TANF and food stamp bans. To assess the potential impacts of the bans on TANF and food stamps, we estimated the percentage of a population of drug felons released from prison that would have been eligible to receive TANF but for their drug offense convictions. By potentially affected, we refer to convicted drug felons that we estimated met selected eligibility criteria to participate in these benefit programs. According to our use of the term “impact,” only those drug felons who were otherwise eligible to receive benefits actually stood to lose benefits as a result of the bans, and could therefore be affected by the bans. To determine the percentage of drug felons that met selected eligibility criteria, we used data from the Bureau of Justice Statistics’ Survey of Inmates of State Correctional Facilities in 1997. This survey is based upon a nationally representative sample of persons in state prisons during July 1997.
The 1997 data represent the most recently available data from this recurrent survey, which BJS conducts about every 5 years. We used information from the survey about prisoners’ parental status, employment, and income prior to incarceration in developing our estimates of the percentages of drug offenders that were custodial parents and had incomes within allowable maximums to qualify for the benefits. For both benefits, we provide estimates that are based on drug offenders released from prison during 2001 in the subset of states that fully implemented either the ban on TANF or food stamps. To the extent possible, we limited the data on drug offenders released from prison to those who entered prison during 1997 or thereafter. This allowed for a period of time between the possible date that a drug felony offense was committed and the date that an offender entered prison, and in this way, we took into account the implementation date of the ban, which was August 22, 1996. Because of data limitations, we did not attempt to develop estimates for states that modified the bans. For example, some states’ exemptions to the bans allow that convicted drug offenders may receive benefits (provided that they are eligible for them) if they do not fail a drug test, if they undergo required drug treatment, if they do not violate conditions of probation or parole supervision, or if they meet certain other conditions. The data that we used did not include this information; therefore, we could not estimate the potential impacts of the bans in the states that modified the bans. We developed estimates of the potential impact of these bans on the population of released prisoners for 1 year, 2001, the most recent year for which we obtained data. We did not attempt to develop estimates for all persons potentially affected by the bans since they went into effect during 1996. 
We discuss the problems associated with estimating all persons potentially affected by the bans in a later section of this appendix.

Data and Methods Used to Estimate the Potential Impacts of the TANF Ban

To estimate the potential impacts of the TANF ban, we obtained data from states on drug felons released from prison, and using these data, we applied estimates of the percentages that met selected TANF eligibility requirements. These methods are described more fully below. For 14 of the 18 states that fully implement the ban on TANF, we obtained data on the number of drug offenders released from prison during 2001. We used two sources of data: (1) the Bureau of Justice Statistics’ National Corrections Reporting Program (NCRP) and (2) data from selected other states. From NCRP, we obtained counts of the number of drug felons released from prison during 2001, given that they were committed into prison in 1997 or thereafter for a new conviction that contained a drug offense. We chose 1997 because the TANF ban went into effect on August 22, 1996, and data on the date that ex-prisoners committed their drug offense—which is the factor that determines whether they are under the ban—were not available in the data that we used. From the other states, we obtained comparable data on the number of drug offenders released from prison. The 14 states for which we obtained data were Alabama, Arizona, California, Georgia, Kansas, Mississippi, Missouri, Nebraska, North Dakota, South Carolina, South Dakota, Texas, Virginia, and West Virginia. The 14 states account for approximately 97 percent of the population in the 18 states that maintain the ban on TANF for drug felons. For the 4 states that were excluded from our analysis—Alaska, Delaware, Montana, and Wyoming—we were unable to obtain data on released prisoners. We also excluded from our analysis states that may have implemented the ban in 2001 but as of January 2005 had modified or opted out of the ban.
Across the 14 states, about 96,000 drug offenders were released from prison during 2001, given that they had been admitted during 1997 or thereafter. This population of all drug felons released from prison includes those who were sentenced to prison following their conviction for a drug offense, and it also includes offenders who entered prison because they had violated conditions of supervision. Among offenders who entered prison for a violation of conditions of supervision, some may have committed their offenses before the TANF ban went into effect, and they would not be subject to the ban. However, some of the released prisoners who had violated conditions of supervision may have been convicted after the ban went into effect, but the available information reported only the date of admission for the violation and not for the original sentence. These offenders should be included among the population of drug felons that are subject to the ban. Hence, the population of all released prisoners might overestimate the number of drug offenders in these 14 states who committed offenses after the TANF ban had gone into effect. About 51,000 of the drug offenders released from prison during 2001 were those who had been admitted into prison during 1997 or thereafter, immediately following their conviction. While this population of released drug offenders includes those whose prison sentence occurred after the ban went into effect, this number may underestimate the number of drug felons in these states who were subject to the ban. It may do so because it will exclude the parole violators who had initially been committed after 1997 but whose most recent commitment was for a violation of parole that also occurred after 1997. About 87 percent of all drug offenders released from prison during 2001 in the 14 states for which we obtained data were males, as were about 86 percent of the first releases. Females constituted 13 percent of all releases and 14 percent of first releases (table 9).
To receive TANF assistance, an assistance unit (such as a household) must meet the state-mandated definition of a needy family: It must either contain at least one child living with an adult relative or consist of a pregnant woman. The adult guardian must be related to the child by blood, adoption, or marriage (or, if the state provides, the adult may stand in for parents if none exist). Further, TANF recipients must in general be either U.S. citizens or qualified aliens who entered the United States prior to the passage of PRWORA on August 22, 1996, or who have lived in the United States for a period of 5 years. States may also impose other conditions for receipt of TANF benefits. We used data from the 1997 version of the BJS Inmate Survey to estimate the percentage of drug offenders who were custodial parents and who had monthly incomes within state-determined earned income limits. For estimation purposes, we defined a drug offender in the inmate survey as a custodial parent if the offender met three conditions: (1) reported being the parent of at least one minor child, (2) reported living with the child prior to being incarcerated, and (3) reported that the child was not in foster care or agency care while the offender was in prison. We computed the number of prisoners who met these conditions, and from these counts, we estimated the percentages of drug offenders that met these conditions. As the data were drawn from a sample, we used weighting factors provided by BJS that were based on the original probabilities of being selected into the sample that were adjusted for nonresponse and information about the sex, race, age, prison security level, and type of offense of the total prison population to produce national-level estimates. We estimated the percentages separately by gender and region of the country. 
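The weighted estimation step described above can be sketched as follows. The record fields, weight values, and resulting percentage are hypothetical stand-ins, not the actual BJS survey variables or estimates.

```python
# Illustrative sketch of producing a weighted percentage from survey
# records, as in the custodial-parent estimates described above.
# Weights stand in for the BJS design weights (adjusted for
# nonresponse); all values here are hypothetical.

records = [
    # (survey_weight, has_minor_child, lived_with_child, child_not_in_foster_care)
    (210.0, True, True, True),
    (195.0, True, False, True),
    (250.0, False, False, False),
    (180.0, True, True, True),
]

def weighted_pct(records):
    """Weighted share of offenders meeting all three custodial-parent conditions."""
    total = sum(w for w, *_ in records)
    custodial = sum(w for w, child, lived, not_foster in records
                    if child and lived and not_foster)
    return 100.0 * custodial / total

# Two of the four records meet all three conditions; their combined
# weight (390) over the total weight (835) gives about 46.7 percent.
print(round(weighted_pct(records), 1))  # 46.7
```

In the actual analysis, this computation would be repeated separately within each gender-by-region cell to produce the estimates reported in table 10.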
Table 10 shows our estimates of the percentage of convicted drug felons that were reported to be parents and custodial parents (based on our definition) of minor children. We also estimated the income distributions for drug offender parents who reported living with their children. In the BJS survey, income is reported as the offender’s total income in the month prior to the arrest leading to the incarceration. Monthly income can be from any source and may include illegal income. We omitted from our analysis those offenders who reported income from illegal sources, and we included only offenders who reported earned income or who were unemployed prior to their imprisonment. Offenders who were unemployed prior to their imprisonment received a value of zero for earned income. We estimated the income distributions separately by gender and region to account for differences in employment and earnings between male and female offenders, and offenders in different states. We applied the regional income distributions to all states within a region, as the BJS data did not report the state in which the offender was incarcerated. From the income distributions, we estimated the gender-specific percentages of drug offenders who had incomes at or below state-determined earned income limits. The BJS inmate survey data report income in intervals, and in many cases, the intervals do not correspond directly with the state earned income limits. Therefore, we selected income intervals that were as near to the state earned income limits as feasible. We generally selected two income intervals for each state: one whose upper bound fell just below the state earned income limit and one whose upper bound fell just above it. In this way, we obtained lower- and upper-bound estimates of the potential impacts of the TANF ban.
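The interval-bracketing step can be sketched as below. The interval values are illustrative placeholders, not the actual survey income categories.

```python
# Sketch of the interval-bracketing step described above: given survey
# income intervals (represented by their upper bounds) and a state's
# earned income limit, find the interval boundary just below the limit
# (for a lower-bound estimate) and the one just above it (for an
# upper-bound estimate). Interval values are hypothetical.

INTERVAL_UPPER_BOUNDS = [200, 400, 600, 800, 1000, 1200, 1500, 2000]

def bracketing_bounds(state_limit):
    """Return the interval boundaries just below and just above the limit."""
    below = max((b for b in INTERVAL_UPPER_BOUNDS if b <= state_limit),
                default=None)
    above = min((b for b in INTERVAL_UPPER_BOUNDS if b > state_limit),
                default=None)
    return below, above

# A state limit of $700 falls between the $600 and $800 boundaries;
# cumulating the income distribution up to each boundary yields
# lower- and upper-bound shares within the limit.
print(bracketing_bounds(700))  # (600, 800)
```

The share of offenders with incomes at or below each of the two boundaries then brackets the true share at or below the state limit.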
To obtain estimates of the percentage of drug offenders released from prison who were both custodial parents and were income eligible for TANF, as defined above, we applied the gender-specific estimates of the percentage of prisoners in each region of the country that met the specific TANF eligibility criteria to state-specific counts of the number of drug felons released from prison. We used the region of the country within which a state was located to obtain estimates for a specific state. The estimated percentages of drug offenders released from prison that met these conditions are shown in table 11. We were unable to take into account all of the factors that determine whether drug offenders met the eligibility criteria to receive TANF. Some of these factors could contribute to reducing the estimated percentages of drug offenders who were otherwise eligible; others could possibly contribute to increasing the estimated percentages. In addition, our estimates for drug offenders released from prison in a given year do not apply to drug felons who were sentenced to probation. Finally, we are unable to provide an estimate of the percentage of drug offenders potentially affected by the ban for the entire period since it was implemented. Data limitations preclude our explicitly taking into account all of the factors that are related to TANF eligibility. Factors affecting TANF eligibility for which we do not have data are the citizenship status and length of residency of noncitizens, state-imposed work requirements to receive TANF, and individual choices to participate in the program.
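The step of applying gender-specific regional percentages to state-level release counts can be sketched as follows. All region assignments, rates, and counts below are hypothetical placeholders, not the report's data or estimates.

```python
# Sketch of applying gender-specific regional eligibility percentages
# to state-level counts of released drug offenders, as described above.
# All numbers are hypothetical placeholders.

REGION_OF_STATE = {"Alabama": "South", "Arizona": "West"}

# Estimated percent of released drug offenders meeting the selected
# TANF criteria, by (region, gender) -- illustrative values only.
PCT_ELIGIBLE = {("South", "male"): 5.0, ("South", "female"): 25.0,
                ("West", "male"): 4.0, ("West", "female"): 22.0}

# Counts of drug offenders released during the year, by (state, gender).
RELEASES = {("Alabama", "male"): 3000, ("Alabama", "female"): 450,
            ("Arizona", "male"): 5200, ("Arizona", "female"): 800}

def estimated_eligible(state):
    """Estimated number of released offenders in a state meeting the criteria."""
    region = REGION_OF_STATE[state]
    return sum(RELEASES[(state, g)] * PCT_ELIGIBLE[(region, g)] / 100.0
               for g in ("male", "female"))

# Alabama: 3,000 * 5% + 450 * 25% = 150 + 112.5 = 262.5
print(estimated_eligible("Alabama"))  # 262.5
```

Summing the state-level results across the 14 states, and dividing by total releases, would yield the overall estimated percentage potentially affected.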
While we were unable to estimate the effect of these factors on our estimated percentages that might have been eligible to receive TANF, these factors would contribute to lowering our estimates of the percentage of drug offenders released from prison that might have been eligible to receive TANF. Several of the states whose data we analyzed have relatively large populations of noncitizens. In general, to qualify for TANF, aliens must have at least 5 years of residence in the United States since August 22, 1996. Given that our estimates are for 2001, it is unlikely that many aliens among convicted drug felons would have qualified for TANF. Hence, taking the alien qualification into account would lower our estimates of the percentage of drug felons potentially affected by the TANF ban. For 2003, ACF reports that 8 percent of adult TANF recipients were qualified aliens. Individuals within needy families who do not participate in state-determined work requirements could lose their TANF eligibility. Failing to comply with work requirements would reduce the percentage of drug offenders that were otherwise eligible to receive TANF. In the general population, adult males constitute comparatively small numbers of TANF recipients. According to ACF, in 2001 adult males constituted about 9 percent of all adult TANF recipients. If we applied the general population adult male TANF recipiency rate to our estimates of the percentage of all drug offenders released from prison, our estimated impact of the TANF ban would be revised downward to about 4 percent of all drug offenders released from prison in 2001. One factor that could change the estimated percentage of convicted drug felons eligible to receive TANF benefits and therefore potentially affected by the ban is a change in a felon’s eligibility to receive TANF. Our estimates of the percentage of prisoners that may be eligible to receive TANF are based on attributes existing at the time that offenders were in prison.
Upon release, these attributes may change, and an offender might become otherwise eligible for TANF and therefore potentially be affected by the ban. For example, if a drug offender was reunited with his or her children after release and met other eligibility requirements, this would contribute to increasing the percentage of released prisoners that were eligible to receive TANF. Alternatively, imprisonment may be a factor that reduces contact with children and therefore contributes to decreasing the percentage of drug offenders released from prison that are eligible to receive TANF. In recent years, drug felons sentenced to probation account for about one-third of all convicted and sentenced drug felons. We did not apply the information about drug offenders in prison to the drug felons sentenced to probation. This is because we do not have data on the parental and income characteristics of drug felons sentenced to probation. To the extent that drug felons sentenced to probation have characteristics similar to those of drug felons released from prison, the estimated percentage of probationers that may be eligible to receive benefits would be similar to the estimated percentages among released prisoners. However, if income levels and other factors differ between probationers and prisoners, this could affect the estimates of the percentages that would be eligible to receive benefits. We do not provide an estimate of all drug offenders potentially affected by the ban on TANF since it went into effect. We were unable to obtain data on the number of persons convicted of drug felonies since the ban went into effect in 1996, as only limited data are available. Over time, an individual’s attributes that are related to TANF eligibility may change. Convicted drug felons who did not have characteristics that would make them eligible to receive TANF at one point in time could develop these attributes at a later point in time.
Conversely, the circumstances of convicted drug felons who at one point in time were otherwise eligible to receive TANF could change so that they are no longer otherwise eligible. To understand the long-term impacts of the ban therefore would require data that track individuals over time and measure changes in their characteristics that are related to TANF eligibility. We know of no such national data on drug offenders. Our estimates of the percentage of drug offenders released from prison in a given year who are potentially affected by the ban represent lower-bound estimates of the proportion of drug offenders released from prison during that year that would ever be affected by the ban. If, among those released from prison and estimated not to be eligible to receive TANF, any persons became eligible at a later date, this would increase the percentage of persons potentially affected by the ban. Consequently, the long-term impacts of the ban would be greater than the impacts that we estimated for the 1-year release cohort. Similarly, if the 1-year estimates of the percentage potentially affected by the ban were to hold over time, then a larger percentage of all convicted drug felons would be potentially affected by the ban since its inception than the percentages that we estimated for 1 year.

Data and Methods Used to Estimate the Potential Impact of the Ban on Food Stamps

We focused our analysis of the potential impact of the ban on food stamps on drug offenders that were reported to be custodial parents of minor children. According to USDA, in fiscal year 2003, adult households with children (containing either one or two adults) constituted 73 percent of food stamp recipients. Consequently, this is likely to be the largest group of drug offenders that could be affected by the food stamp ban. We were unable to develop a quantitative estimate of the percentage of able-bodied adults without dependents (ABAWD) that could be affected by the food stamp ban.
ABAWDs, in general, may receive food stamps for 3 months within a given 3-year period, or longer if they adhere to the work requirements specifically laid out for ABAWDs. However, we were unable to determine which drug offenders constituted the potential pool of ABAWDs. We gave potential ABAWD recipients separate consideration because, according to USDA reports, in 2003 they constituted only 2.5 percent of food stamp recipients nationwide, even though they form a large share of the general population of drug offenders. We also did not attempt to develop an estimate of the impact of the ban for elderly and disabled drug offenders. For 2003, USDA reported that adult households with children (containing either one or two adults) constituted 73 percent of food stamp recipients. In contrast, elderly individuals living alone constituted 6 percent of food stamp recipients, and disabled nonelderly individuals living alone constituted 5 percent. Single-adult households—which according to USDA do not contain children, elderly individuals, or disabled individuals—constituted 6 percent of food stamp recipients. Therefore, adult households with children receive food stamps at a rate greater than 12 times the rate at which single-adult households receive food stamps. The percentage of single-adult households receiving food stamps is higher than the percentage of ABAWDs receiving food stamps because an individual is not considered an ABAWD if the person is pregnant, exempt from work registration, or over 50 years of age. For 12 of the 15 states that maintain the full ban on food stamps, we obtained data on the drug felons released from prison during 2001 (given that they entered prison during 1997 or thereafter). The 12 states are Alabama, Arizona, Georgia, Kansas, Mississippi, Missouri, North Dakota, South Carolina, South Dakota, Texas, Virginia, and West Virginia.
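The "greater than 12 times" comparison above follows directly from the USDA recipient shares cited in this appendix; the short sketch below simply restates the arithmetic (the variable names are ours, not USDA's).

```python
# Shares of food stamp recipients in 2003, as reported by USDA and
# cited in this appendix.
adult_households_with_children = 0.73  # 73 percent of recipients
single_adult_households = 0.06         # 6 percent of recipients

# Ratio of the two shares of the food stamp caseload.
ratio = adult_households_with_children / single_adult_households
print(f"Adult households with children: {ratio:.1f} times the "
      f"single-adult household share")
```

The ratio works out to roughly 12.2, consistent with the report's statement that the rate is greater than 12 times.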
The 3 excluded states for which we were unable to obtain data were Alaska, Montana, and Wyoming. A total of 67,000 drug offenders were released in 2001 in the 12 states, and of these, 30,000 were first releases from new court commitments. We used the BJS inmate survey data to estimate the percentage of drug felony prisoners who were parents living with their minor children and whose children were not in foster care while they were incarcerated; this was our operational definition of a custodial parent. For these, we estimated the percentage who had gross incomes within the poverty thresholds, based on estimates of family size. Food stamp eligibility is based on gross and net income tests. Because data on the deductions that are used in determining whether households meet the net income test were not available, our estimates apply only the gross income test. We are unable to determine how our use of the gross income test alone affects our estimates of the percentage of drug felons released from prison that would have been eligible to receive food stamps. In general, ABAWDs may receive food stamp benefits for an extended duration as long as they meet ABAWD-specific work requirements. This means that a large percentage of drug felons could be eligible to receive, and therefore potentially be denied, food stamps as long as they fell within the income threshold to receive food stamps. However, among all food stamp recipients, ABAWDs constitute only 2.5 percent of the total. Hence, while we cannot estimate the percentage of ABAWDs within the drug offender pool that would be otherwise eligible to receive food stamps, the ABAWD participation rate in food stamps in general would suggest that relatively few drug offenders who fall into this category would participate in the program.
We assessed the impacts of the denial of TANF and food stamp benefits by estimating the percentage of convicted drug felons released from prison who were otherwise eligible to receive the benefits. To determine whether impacts vary by race, we first examined whether the percentage of drug offenders who met the same eligibility requirements that we used to assess the overall impacts of the TANF and food stamp bans varied according to race. For example, if larger proportions of black than white drug offenders were custodial parents of minor children and had earned income that permitted them to qualify for TANF, then we would expect to find larger percentages of black drug offenders affected by the TANF ban, regardless of the racial composition of the group of all drug offenders released from prison. We used the BJS inmate survey data to compare the estimated percentages of black and white drug offenders who were custodial parents (as we defined the term previously) and had earned incomes that could qualify them to receive TANF. As before, we estimated these percentages by gender and region. Our estimates indicated that in one region (the South), the percentage of black female drug offenders who were otherwise eligible to receive TANF differed from the percentage of otherwise eligible white female drug offenders: a larger percentage of black female drug offenders in that region were estimated to be eligible to receive TANF than white female drug offenders. Among male drug offenders, we estimated differences in eligibility for TANF in two regions. For both female and male drug offenders, the differences in estimated TANF eligibility arose from differences in incomes, as there were no differences in the percentages of black and white drug offenders that were estimated to be custodial parents.
This appendix describes the legal framework for denying federal higher education benefits to drug offenders, how the federal provision is administered, our methods for estimating the number of students affected by the federal provisions, and the impacts of the federal provision. The Higher Education Act of 1965, as amended, provides for the suspension of certain federal higher education benefits to students who have been convicted of the possession or sale of a controlled substance under federal or state law. The controlled substance offense may be either a felony or a misdemeanor. Federal higher education benefits that are denied to such individuals include student loans, Pell Grants, Supplemental Educational Opportunity Grants, and the Federal Work-Study program. The Higher Education Act provision outlines different periods for which such drug offenders are ineligible to receive certain federal higher education benefits, depending upon the type and number of controlled substance convictions. The period of ineligibility begins on the date of conviction and ends after a specified interval. Table 12 illustrates the period of ineligibility for the federal higher education benefits, according to the type and number of convictions. This Higher Education Act provision allows eligibility for federal higher education benefits to be restored prior to the end of the period of ineligibility if either of two conditions is met. First, a student satisfactorily completes a drug rehabilitation program that includes two unannounced drug tests and complies with criteria established by the Secretary of Education. Second, a student has his or her drug conviction reversed, set aside, or nullified. The provisions of federal law mandating the denial of certain federal higher education benefits were implemented beginning in July 2000 by requiring students who applied for federal assistance to self-report disqualifying drug convictions.
Students must self-report disqualifying drug convictions through the Department of Education's (ED) Free Application for Federal Student Aid (FAFSA), a form that any student who wishes to receive federal student aid must complete. The FAFSA is available online and is free to use. ED uses the information that applicants provide on their FAFSA to determine their eligibility for aid from the Federal Student Aid (FSA) programs. Colleges and universities in 49 states also use information from the FAFSA in making their financial aid determinations. ED provides participating colleges and universities with a formula to use when making decisions about financial assistance. Applicants who report a drug conviction that affects their eligibility, as well as applicants who do not answer the question about drug convictions, are automatically ineligible to receive federal higher education assistance in the academic year for which they sought aid. (Below, we refer to this group as FAFSA ineligibles.) The drug conviction worksheet of the FAFSA also notifies students that even though a drug conviction may render them ineligible to receive federal higher education assistance in the application year, individuals may still be eligible to receive aid from their state or their academic institution. For several reasons, not all of the FAFSA applicants who self-report a disqualifying drug conviction would otherwise have been eligible to receive federal assistance; hence, the number of applications containing self-reported disqualifying drug offenses overstates the number of persons denied federal postsecondary education assistance because of a drug offense conviction. First, not all FAFSA applicants are eligible to receive all types of federal postsecondary education assistance.
For example, some applicants may have incomes above the levels required to receive Pell Grants, and even if they self-reported a disqualifying drug conviction, they would not have been eligible to receive Pell Grants. Second, ED officials indicated that not all FAFSA applicants become enrolled in postsecondary education institutions, and these applicants are not eligible to receive federal postsecondary education assistance. Third, some individuals may complete the FAFSA more than once, so a count of applications may double-count some individuals. To assess the impacts of the Higher Education Act's provisions that render students with disqualifying controlled substances convictions ineligible to receive federal postsecondary education assistance, we estimated the number of students who self-reported a disqualifying drug offense and, absent the controlled substances convictions provisions of the Higher Education Act, would have been qualified to receive assistance but because of the provisions did not receive it. We developed estimates of the number of applicants for Pell Grants and subsidized and unsubsidized Stafford loans (two of the best-funded federal postsecondary education assistance programs), and of the total amounts of assistance lost, because of their self-reported controlled substances convictions. Our methods for estimating these quantities are as follows:

To estimate the number of students who were denied Pell Grants in a given year, we used ED data on the number of FAFSA applicants that either self-reported a disqualifying drug offense conviction or left this question blank, the group that we labeled as FAFSA ineligibles. As applicants must meet needs-based criteria to be eligible to receive Pell Grants, we then used ED data on the percentage of FAFSA applicants that were eligible to receive Pell Grants; we call this second group Pell Grant eligibles. We also used ED data on the percentage of Pell Grant eligibles that actually received Pell Grants, as not all of the students who were eligible to receive Pell Grants received them. By multiplying these quantities, we obtained a rough estimate of the number of persons who, absent the disqualifying drug offense conviction, would have received Pell Grants.

To estimate the dollar amount of Pell Grants that these recipients would have received, we multiplied the average amount of Pell Grants (which we obtained from ED) by the estimated number of students denied Pell Grants.

To estimate the number of student loan recipients who were denied assistance because of disqualifying drug convictions, we followed a method similar to the one that we used to estimate the numbers denied Pell Grants. Specifically, beginning with the data on FAFSA ineligibles, we applied to this number the percentage of all FAFSA applicants that received a student loan. We could not obtain an estimate of the number of FAFSA applicants that were eligible to receive student loans because, as ED reports, unlike Pell Grants, where there are income limitations that can be used to determine eligibility, student loan eligibility is determined by both income and institution-specific factors (such as tuition). Thus, our estimate is of the number of FAFSA ineligibles that would have received a student loan but for their controlled substances convictions.

To estimate the amount of student loans denied, we multiplied our estimate of the number denied student loans by the average amount of a student loan.

In order to create our estimates of the number of individuals who would have received a Pell Grant or a student loan if not for their drug conviction, we assume that the characteristics of FAFSA eligibles are the same as the characteristics of FAFSA ineligibles.
This assumption means that the percentage of FAFSA applicants who are eligible to receive federal higher education assistance should be the same for FAFSA ineligibles (apart from the drug conviction). Income is an important determinant of eligibility for both Pell Grants and student loans. Specifically, financial need is determined by ED using a standard formula established by Congress to evaluate the applicant's FAFSA and to determine the student's Expected Family Contribution (EFC). The EFC calculation includes various data elements, including income, number of dependents, net assets, marital status, and other specified additional expenses incurred. Different assessment rates are used for dependent students, independent students without dependents, and independent students with dependents. After filing the FAFSA, a student is notified if he or she is eligible for a federal Pell Grant and of the student's EFC. On the one hand, if FAFSA ineligibles on average have lower incomes than FAFSA eligibles, then our estimates of the number of students denied benefits are likely to be underestimates of the true number denied benefits. This is because we rely on the information about eligibility for Pell Grants and student loans from the persons who were eligible to receive them, not from the population who are otherwise eligible but for their disqualifying drug convictions. On the other hand, if FAFSA ineligibles are less likely to be enrolled in postsecondary education institutions, as compared with FAFSA eligibles, then our estimates of the number denied benefits are likely to overestimate the true number denied benefits. Table 13 shows the data that we used to estimate the numbers and amounts of federal postsecondary education assistance forgone by students who, absent their controlled substances convictions, would have received such assistance. The data are provided annually for academic years 2001-2002 through 2003-2004.
The key data elements used to estimate the numbers and amounts of federal assistance denied include the number of FAFSA applicants and FAFSA ineligibles, the percentage of Pell Grant eligibles among all FAFSA applicants, the percentage of Pell Grant recipients among Pell Grant eligibles, the average amount of Pell Grant received, the percentage of FAFSA applicants that received student loans, and the average amount of student loan received. The number of FAFSA ineligibles declined from 58,929 in academic year 2001-2002 to 41,061 in academic year 2003-2004. We note that FAFSA ineligibles amount to less than 0.5 percent of all FAFSA applications. In the academic years from 2001-2002 through 2003-2004, we estimated that between 17,000 and 23,000 students were denied Pell Grants because of their drug convictions, and that the total estimated amount of Pell Grants that these students would have received ranged from $41 million to $54 million. See table 14. We provide annual estimates of the numbers affected because the period of benefit ineligibility can vary, and a student denied benefits in one year may become eligible to receive benefits in a subsequent year. Thus, the estimates for one year do not necessarily affect the estimates for another year. In academic year 2001-2002, there were 58,929 FAFSA ineligibles. During that same year, 51.5 percent of FAFSA applicants were eligible to receive Pell Grants, and 76.9 percent of those who were eligible received Pell Grants (as shown in table 13). Multiplying 58,929 by 51.5 percent, and then multiplying the result by 76.9 percent, yields the estimate of 23,000 individuals denied Pell Grants who otherwise would have received them. To obtain the amount of Pell Grants lost to these students during academic year 2001-2002, we multiplied our estimated number of students denied Pell Grants (23,000) by the average amount of a Pell Grant in academic year 2001-2002 ($2,298).
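The worked calculation above can be expressed as a short function; the figures (58,929 FAFSA ineligibles, 51.5 percent Pell Grant eligible, 76.9 percent receipt rate, $2,298 average grant) are those reported for academic year 2001-2002, and the function name is ours, used only for illustration.

```python
def estimate_denied_pell(fafsa_ineligibles, pct_pell_eligible, pct_eligibles_receiving):
    """Rough estimate of students denied Pell Grants because of the drug
    conviction provision, assuming (as the appendix does) that FAFSA
    ineligibles resemble other applicants in eligibility and take-up."""
    return fafsa_ineligibles * pct_pell_eligible * pct_eligibles_receiving

# Academic year 2001-2002 figures cited in the appendix.
denied = estimate_denied_pell(58_929, 0.515, 0.769)  # about 23,000 students

# Forgone Pell Grant dollars: students denied (rounded to the nearest
# thousand, as in the report) times the $2,298 average grant.
forgone = round(denied, -3) * 2_298
print(round(denied), f"${forgone:,.0f}")
```

Rounding to the nearest thousand before multiplying by the average grant gives on the order of $53 million for that year, consistent with the $41 million to $54 million range the appendix reports across the three academic years.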
Table 14 also shows that between academic year 2001-2002 and academic year 2003-2004, an estimated 29,000 to 41,000 students per year would have received student loans if not for their drug convictions. The estimated total amount of student loans forgone by these students ranged between $100 million and $164 million per year. The President's fiscal year 2005 budget contained a proposal that would have changed the administration of the Higher Education Act provision relating to eligibility for federal higher education benefits. Federal law disqualifies students who have been convicted of controlled substance offenses, in accordance with the period of ineligibility in table 12, from receiving federal higher education assistance. As currently implemented by the Department of Education, disqualifying convictions are those drug convictions on a student's record at the time the student's eligibility is being determined, using the rules on the FAFSA worksheet. Under the President's proposal—which was supported by the Office of National Drug Control Policy (ONDCP)—students would be ineligible for federal higher education assistance only if they committed a disqualifying drug-related offense while enrolled in higher education. This proposed change would make eligible all students whose controlled substance convictions occurred prior to enrolling in higher education. Because of data limitations, we are unable to provide reliable estimates of the impacts of the proposed changes contained in the President's fiscal year 2005 budget proposal. However, we expect that the proposal would lower our estimates of the numbers of students denied benefits, because some individuals would regain their eligibility for benefits, and relatively few students enrolled in postsecondary education institutions would be expected to both use drugs and be convicted of drug crimes.
While studies consistently show that the economic returns to higher education are positive, we cannot establish a direct link between the denial of federal postsecondary aid to students and a reduction in the amount of postsecondary education completed by those who were denied aid. Officials at ONDCP suggested that the provisions of the Higher Education Act that provide for denying educational aid to drug offenders might have a deterrent effect on drug use; however, we were unable to identify studies that assess whether the provisions of the HEA actually helped to deter drug use. Additionally, we are unable to address the question of whether these provisions of the HEA that deny higher education benefits to drug offenders result in net positive or negative effects on society, because we were unable to find research that conclusively indicates whether these provisions led individuals to forgo postsecondary education or deterred individuals from engaging in drug use and drug-related criminal activities. Additional formal education—e.g., completing high school or attending or completing postsecondary education—has been demonstrated to increase annual and lifetime earnings. In its review of the returns to education, the U.S. Census Bureau concluded that increases in formal education had a positive impact on annual earnings. For example, the U.S. Census Bureau reported that for full-time workers between the ages of 25 and 64 during 1997 through 1999, the average annual income for those who have not completed high school is $23,400, for high school graduates it is $30,400, and for those completing a bachelor's degree it is $52,200. Average annual income rises higher yet for those who obtain advanced degrees. This general pattern, that increases in formal education correlate with increases in annual earnings, also holds true across an individual's lifetime. The U.S.
Census Bureau reported that average lifetime earnings, based upon 1997-1999 work experience, are approximately $1 million for those who have not completed high school, $1.2 million for high school graduates, and $2.1 million for those completing a bachelor's degree. Again, average lifetime earnings rise higher yet for those who obtain advanced degrees. Hence, college graduates can expect, on average, to earn nearly twice as much over a lifetime as those persons who have only a high school diploma and more than twice as much as those who have not completed high school. Similarly, a study published by the congressional Joint Economic Committee in January 2000 concluded that there is a strong consensus among economists that formal education has a positive impact not only on personal income but also on society. The study concluded that among the positive societal economic returns from increases in formal education are the creation of new knowledge (translating into the development of new processes and technologies) and the diffusion and transmission of knowledge (translating into the expansion of innovative techniques such as those found in the high-technology sector). Positive societal noneconomic improvements are also associated with increased amounts of formal education, which help Americans become better mothers, fathers, children, voters, and citizens. These positive noneconomic improvements are sometimes called positive neighborhood effects. Some of the positive neighborhood effects may be (1) more informed and interested voters, (2) decreases in crime, (3) decreased dependence upon certain types of public assistance, and (4) decreased incidence of illegitimate pregnancies. Although the census study and the study conducted by the Joint Economic Committee show positive economic and societal impacts of increased levels of education, the total net impacts of these benefits are difficult to quantify.
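The "nearly twice" and "more than twice" lifetime-earnings comparisons above are simple ratios of the Census Bureau figures cited in this appendix; a minimal check:

```python
# Average lifetime earnings (1997-1999 work experience) in millions of
# dollars, as reported by the U.S. Census Bureau and cited above.
no_high_school = 1.0
high_school_graduate = 1.2
bachelors_degree = 2.1

# Ratios underlying the comparisons in the text.
print(f"College vs. high school graduate: {bachelors_degree / high_school_graduate:.2f}x")
print(f"College vs. no high school:       {bachelors_degree / no_high_school:.2f}x")
```

The ratios are about 1.75 (nearly twice the earnings of a high school graduate) and 2.1 (more than twice the earnings of those without a diploma).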
Moreover, these studies do not comment on whether the loss of federal education assistance (as occurs for drug offenders through the provisions of the HEA) contributes to individuals' not completing postsecondary education, or whether those individuals who are denied federal education assistance generate the necessary funding to attend institutions of higher education in other ways. Also at issue is whether the provisions of the HEA that deny postsecondary education benefits to drug offenders contribute positively to society by providing a deterrent to drug use. Research on the costs to society from drug use and drug-associated criminal involvement has demonstrated that these costs are high. Therefore, if the denial of federal higher education benefits deters people from engaging in drug crimes, then the provisions might have positive economic and noneconomic impacts on society. Some of the positive effects of deterrence may include reductions in drug-related health care costs, reductions in drug-related crime and associated criminal justice costs, and increased national economic productivity. In addition, for many offenders, and in particular for first-time drug offenders, the denial of postsecondary education benefits may delay entry into postsecondary education rather than prevent it. With the available data, we were unable to determine whether the provisions of the Higher Education Act that provide for denial of postsecondary education benefits would affect relatively larger or smaller numbers of minorities. The FAFSA does not request information about applicants' race; therefore, ED does not have data on the racial distribution of applicants or FAFSA ineligibles. Without data on the race of applicants for federal student aid, it is not possible to determine whether minorities are denied aid at higher rates than whites.
The Bureau of Justice Statistics' Survey of Adults on Probation in 1995, which is the only national survey of probationers that includes data on the type of offense of conviction and educational attainment, indicates that there may be racial differences in the levels of educational attainment of drug offenders. The survey indicates that black and Hispanic drug offenders on probation complete high school at a lower rate than white drug offenders on probation. Specifically, while 68 percent of white drug offenders on probation had completed high school, 51 percent of black and 46 percent of Hispanic drug offenders on probation had done so. As completing high school (or gaining a general equivalency diploma) is a prerequisite for enrollment in postsecondary education, these data suggest that lower proportions of black and Hispanic drug offenders (at least drug offenders on probation) would be eligible to enroll in postsecondary educational institutions and would therefore be eligible for federal higher education assistance.

This appendix provides background on the legal and administrative frameworks for denying federally assisted housing benefits to persons who engage in drug-related criminal activities, our methods for estimating the numbers of persons denied benefits, and how we assessed the available data on racial minorities and the limited information on potential impacts. Federal law contains a variety of provisions relating to the denial of federally assisted housing benefits for certain types of drug-related criminal activity. These provisions relate to, among other things, (1) who may lose eligibility for federally assisted housing benefits because of drug-related criminal activity and (2) screening tools for the providers of federally assisted housing to use to determine ineligibility for such housing benefits.
Motivation for prohibiting drug offenders from public housing is reflected, in part, in congressional findings made in 1990 and amended in 1998, about drug-related criminal activities in public housing; these findings stated, in part, that (1) "drug dealers are increasingly imposing a reign of terror on public and other federally assisted low-income housing tenants," (2) "the increase in drug-related crime not only leads to murders, muggings, and other forms of violence against tenants, but also to a deterioration of the physical environment," and (3) "the Federal government has a duty to provide public and other federally assisted low-income housing that is decent, safe, and free from illegal drugs." Public housing agencies (PHAs), which are typically local agencies created under state law that, under Department of Housing and Urban Development (HUD) guidance, manage and develop public housing units for low-income families, are required, for example, to utilize leases that provide that any drug-related criminal activity on or off the premises by a public housing tenant shall be cause for termination of the tenancy. This provision also specifically applies to drug-related criminal activity by any member of the tenant's household or any guest or other person under the tenant's control. Similarly, federal law requires PHAs and owners of federally assisted housing to establish standards or lease provisions that allow for the termination of the tenancy or assistance for any household with a member who the PHA or owner determines is illegally using a controlled substance. Federal law further specifies that tenants evicted from federally assisted housing by reason of drug-related criminal activity are to be ineligible for federally assisted housing for a 3-year period, although evicted tenants that successfully complete an approved rehabilitation program may regain their eligibility before the 3-year period ends.
Under federal law and implementing regulations, PHAs have the discretion to evict tenants for drug-related criminal activity but are not required to evict such tenants. Rather, they are required to use leases that provide that any drug-related criminal activity on or off the premises by a public housing tenant shall be cause for termination of the tenancy. Implementing regulations by the U.S. Department of Housing and Urban Development relating to termination provide that a determination of such criminal activity may be made regardless of whether a person has been arrested or convicted of such activity and without satisfying a criminal conviction standard of proof of the activity. With respect to methamphetamine convictions, PHAs are required under federal law to establish standards to immediately and permanently terminate a tenancy as well as permanently prohibit occupancy in public housing for persons convicted of certain methamphetamine offenses occurring on public housing premises. PHAs do not have discretion in evicting these persons, and the standards also require that Housing Choice Voucher Program (formerly Section 8 low-income housing) participation be denied to such persons. Federal law also provides various screening tools to assist with determining possible ineligibility of tenants and applicants for federally assisted housing benefits because of drug-related criminal activity. These tools come primarily in the form of access to certain types of information. For example, under federal law, housing assistance agencies are authorized to request access to criminal conviction records from police departments and other law enforcement agencies for the purposes of applicant screening, lease enforcement, and eviction. PHAs have the authority under certain conditions to request access to such information with respect to tenants and applicants for the Housing Choice Voucher Program. 
Public housing authorities are also authorized under federal law to require that applicants provide written consent for the public housing authorities to obtain certain types of records, such as criminal conviction records and drug abuse treatment facility records. HUD is responsible for establishing the rules and providing guidance to PHAs in their administration of federally assisted housing benefits. PHAs can manage a single program or multiple HUD programs. HUD's Office of Public and Indian Housing oversees the two key rental housing assistance programs that we reviewed, namely the Low-Rent Public Housing Assistance Program (also referred to as low-rent, or public housing) and the Housing Choice Voucher (HCV) Program. During the 1990s, PHAs gained broader latitude from HUD and Congress to establish their own policies in areas such as selecting tenants. This included increased latitude in taking actions to deny federally assisted housing benefits to persons receiving housing benefits and to applicants for benefits. HUD requires PHAs to submit for its review and approval annual plans that include, among other things, their policies for continuing occupancy and denying admission for drug-related criminal activities. Recent HUD guidance regarding denying federal housing benefits to persons engaged in drug-related criminal activities was issued in its "Final Rule," dated May 2001. The rule amended existing regulations regarding implementing the federally assisted housing tenant eviction and applicant screening provisions for drug-related criminal activities. Termination and admission policies can vary substantially among PHAs nationwide. In a baseline study (November 2000) of a stratified random sample of the PHAs that were responsible for managing federally assisted housing units in the HCV Program, HUD reviewed the discretionary authority among PHAs.
HUD reported that the variation among PHAs in conducting criminal background checks could legitimately result in an applicant being barred by one PHA even though the applicant could otherwise be admitted by another PHA. Some of the variations reported in the study include differences in (1) the sources used to obtain information about criminal history and drug-related criminal activities (e.g., newspaper stories, resident complaints, self-disclosure, official law enforcement records—federal, state, local); (2) the costs (paid by the PHA) associated with obtaining official law enforcement criminal background records; (3) the time span covered by the criminal history search; and (4) whether consideration is given to repeat offenses, only convictions, or arrests and convictions. We obtained and reviewed policies from seven of the largest PHAs having combined programs—Public Housing and HCV. Our review of their policies with respect to terminations and admissions for drug-related criminal activities showed variations in the policies established to deny housing benefits. For example, policies regarding terminations of leases (for public housing tenants) or termination of assistance (for HCV recipients) vary in how they implement the drug-related criminal activity provisions and in the scope of criminal background that can result in terminations:

Drug-related criminal activity provisions can range from certain types of prohibited behaviors (e.g., those that threaten the health and safety of other residents) to certain drug convictions (e.g., drug-related criminal activity, and methamphetamine in particular).

Scope of criminal background can vary by the period of prior criminal history that can trigger termination of leases or assistance, the type of prohibited drug-related criminal activities (e.g., personal use, felonious distribution, etc.), or whether there was a conviction in the case.
Analogously, PHA policies on admissions into public housing or into HCV can vary based on a number of factors, and these variations in policies can result in differences among PHAs in the types of drug offenders that are denied federally assisted housing:

Applicant screenings for drug-related criminal activity can occur in varying forms (such as an application, interview, or eligibility verification) and at varying times—such as before or after placement on the PHAs’ waiting lists.

Sources of criminal history information used can vary, so some PHAs cast a wider net than others when searching for prohibited drug-related criminal activity. Sources can range from using only local law enforcement records to using Federal Bureau of Investigation/National Crime Information Center data.

Periods of ineligibility for prior evictions from federally assisted housing can vary by time frame and criminal activity (e.g., drug-related or violent). Ineligibility periods ranged from 3 to 5 years.

The evidence standard for drug addiction can vary to include a reasonable cause to believe illegal drug use exists or self-disclosure of illegal use on the application itself.

We obtained data from HUD on the number of evictions from and applicants denied admission into public housing during fiscal years 2002 and 2003 for reasons of criminal activities. In each year, more than 98 percent of the PHAs that manage public housing responded to HUD’s request for data about security within the units managed by the PHA, including information on evictions and applicants denied admission. HUD’s information pertains to persons evicted or denied admission for reasons of criminal activity; these data do not distinguish between criminal activity and drug-related criminal activities. The HUD data also do not include measures of the number of tenants or of the total number of applicants screened.
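The policy dimensions just described can be sketched in code. This is a minimal illustration, not any actual PHA's policy: the `PhaPolicy` fields, the two sample policies, and the applicant are all hypothetical, chosen only to show how the same person can be barred by one PHA and admitted by another.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhaPolicy:
    """Hypothetical admission policy; all fields are illustrative."""
    lookback_years: int        # span of criminal history that is searched
    convictions_only: bool     # True: arrests without conviction are ignored
    ineligibility_years: int   # bar after a prior assisted-housing eviction

@dataclass
class Applicant:
    years_since_drug_offense: int
    convicted: bool                          # conviction vs. arrest only
    years_since_prior_eviction: Optional[int]

def admissible(policy: PhaPolicy, a: Applicant) -> bool:
    # Barred if a qualifying drug offense falls inside the lookback window.
    if a.years_since_drug_offense < policy.lookback_years and (
        a.convicted or not policy.convictions_only
    ):
        return False
    # Barred if a prior eviction falls inside the ineligibility period.
    if (
        a.years_since_prior_eviction is not None
        and a.years_since_prior_eviction < policy.ineligibility_years
    ):
        return False
    return True

strict = PhaPolicy(lookback_years=7, convictions_only=False, ineligibility_years=5)
lenient = PhaPolicy(lookback_years=3, convictions_only=True, ineligibility_years=3)
applicant = Applicant(years_since_drug_offense=4, convicted=True,
                      years_since_prior_eviction=4)
```

Under these assumed policies, `admissible(lenient, applicant)` is `True` while `admissible(strict, applicant)` is `False`, mirroring HUD's finding that an applicant barred by one PHA could be admitted by another.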
To adjust for differences in the size of the PHAs, we calculated rates at which tenants were evicted from, or applicants were denied admission into, public housing because of criminal activities, based on the number of units maintained by all reporting PHAs. These data are reported in table 15. During each of the fiscal years 2002 and 2003, there were more than 9,000 evictions (amounting to less than 1 percent of all units managed) because of criminal activities. There were about 49,000 applications for admission into public housing that were denied for reasons of criminal activities (amounting to about 4 percent of all units). As drug-related criminal activities are a subset of criminal activities, these data suggest that even if all of those evicted from public housing for reasons of criminal activity had engaged in drug-related criminal activities, terminations leading to evictions would amount to less than 1 percent of the public housing units managed by PHAs. To gauge the extent to which PHAs denied federally assisted housing by terminating leases (leading to possible evictions) for drug-related criminal activities, we contacted 40 of the largest PHAs in the country and asked them to provide data on the number of leases they terminated and, of these, the numbers terminated for criminal activity and for drug-related criminal activity. Of the 40 PHAs that we contacted, we received data from 17. We assessed the data that these PHAs provided for reliability. As shown in table 16, 15 of the 17 PHAs that responded to our request provided data on the total number of public housing lease terminations. The rate at which PHAs terminated leases for reasons of drug-related criminal activities varied considerably, from 0 percent in Santa Clara County to 39.3 percent in Memphis.
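As a rough check on the unit-based rates above, the arithmetic can be reproduced as follows. The total-unit figure is an assumption back-solved from the reported "about 4 percent," not a number HUD published.

```python
# Assumed total: back-solved from the reported "about 4 percent" figure.
total_units = 1_200_000   # illustrative count of units at reporting PHAs
evictions = 9_000         # annual evictions for criminal activities
denials = 49_000          # annual admission denials for criminal activities

eviction_rate = 100 * evictions / total_units   # 0.75 -> "less than 1 percent"
denial_rate = 100 * denials / total_units       # ~4.1 -> "about 4 percent"
```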
The Philadelphia PHA, which reported the largest number of lease terminations (2,324), reported terminating 50 of these leases (or 2.2 percent) for reasons of drug-related criminal activities. The Santa Clara County PHA terminated the smallest number of leases (1). Combined, the 13 PHAs that reported both the number of lease terminations and the number of terminations for drug-related criminal activities reported ending a total of 9,249 leases, and 520 of these terminations (or 5.6 percent of the total) were for drug-related criminal activities. Further, although the data on lease terminations for reasons of drug-related criminal activities are not generalizable to all PHAs that manage public housing program units, the information that the PHAs provided on leases terminated for reasons of drug-related criminal activities, and our calculation of these numbers as a percentage of terminations for criminal activities, show wide variation in the extent to which drug-related criminal activities predominate among all criminal activities that can result in a termination of a lease. In Cuyahoga County, for example, 82.4 percent of lease terminations for criminal activity were terminations for drug-related criminal activities, but in Oakland, 20 percent of the terminations for criminal activity occurred as a result of drug-related criminal activities. A majority of the PHAs that reported these data also reported that the number of lease terminations and the reasons for them (i.e., criminal or drug-related criminal activities) were similar to or smaller than the numbers in the prior 3 years. As shown in table 17, 16 of the 17 PHA respondents were able to provide some (although often incomplete) data on actions taken to terminate HCV assistance during 2003. Nine of the 16 PHA respondents were able to provide data on the number of actions to terminate HCV assistance for reasons of drug-related criminal activity or criminal activity.
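The percentages above follow directly from the reported counts; recomputing them from the figures in the text:

```python
def pct(part: int, whole: int) -> float:
    """Percentage of `whole`, rounded to one decimal place."""
    return round(100 * part / whole, 1)

philadelphia = pct(50, 2_324)   # drug-related share of Philadelphia terminations
combined = pct(520, 9_249)      # drug-related share across the 13 reporting PHAs
```

`philadelphia` comes out to 2.2 and `combined` to 5.6, matching the 2.2 and 5.6 percent figures in the text.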
However, only 5 of the 9 respondents were able to provide data on the number of actions specifically taken to terminate HCV assistance for reasons of drug-related criminal activity. These 5 PHAs took 9,537 actions related to terminating HCV assistance, of which 54 (or about 0.6 percent) were for terminating assistance for reasons of drug-related criminal activities. Four of the 9 PHA respondents were able to provide data on the number of actions taken to terminate HCV assistance for reasons of criminal activity, and most, but not all, of them provided (at our request) broad estimates for drug-related criminal activity based on the total number of actions to terminate HCV assistance during 2003. These 4 PHAs took a total of 3,166 actions related to terminating HCV assistance, of which 133 actions (or about 4.2 percent) were for terminating assistance because of criminal activities. Three of the 4 PHAs estimated that less than 25 percent of their total actions could have been for reasons of drug-related criminal activities, and 1 PHA did not provide an estimate. Applying the upper-bound broad estimate (25 percent) to each PHA’s total actions would overstate terminations for reasons of drug-related criminal activity because the resulting number is most likely to be equal to or a subset of terminations for reasons of criminal activities. From a conservative perspective, it is conceivable that the 133 actions also represent terminations of assistance for reasons of drug-related criminal activity, thereby establishing a maximum rate of denial of 4.2 percent for reasons of drug-related criminal activity. Seven of the 16 PHA respondents provided only the total number of actions taken to terminate HCV assistance, along with a broad estimate of the percentages of terminations that could have been for reasons of drug-related criminal activity.
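The HCV termination rates cited above check out against the reported counts:

```python
# Counts taken from the text above.
drug_related_rate = 100 * 54 / 9_537   # ~0.57 -> "about 0.6 percent" (5 PHAs)
criminal_rate = 100 * 133 / 3_166      # ~4.20 -> "about 4.2 percent" (4 PHAs)
```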
Six of the 7 PHAs reported that less than 25 percent of their total actions could have been for reasons of drug-related criminal activities, and one reported 51 to 75 percent. Table 17 provides the data on terminations of assistance from the HCV program. The majority of PHAs that reported data on terminations from the HCV program also reported that the number and types of actions that they took during 2003 were similar to the numbers in the prior 3 years. As shown in table 18, 15 of 17 PHAs that responded to our request provided data on the number of actions taken on applications for public housing. However, only 6 of the 15 respondents provided data on the number of actions specifically taken to deny admission into public housing for reasons of drug-related criminal activity. Collectively, these 6 PHAs took action on 11,538 applications, of which 330 (or about 2.9 percent) were denials for reasons of drug-related criminal activities. Nine of the 15 PHAs did not provide counts of the number of denials for reasons of drug-related criminal activity but provided data on the number of actions taken to deny admission for reasons of criminal activity. In completing our request, 4 PHAs provided broad estimates of denials for drug-related criminal activity based on the total number of actions taken on applications for public housing, and 5 PHAs did not provide estimates. Collectively, these 9 PHAs reported a total of 17,921 actions related to applications for admission into public housing, of which 1,081 actions (or 6 percent) were for denying admission for reasons of criminal activities. On the basis of our assumption that admission denials for reasons of drug-related criminal activity are most likely to be either a subset of or equivalent to the admission denials for reasons of criminal activities, we estimate that the maximum rate of denial for reasons of drug-related criminal activity for these PHAs is 6 percent.
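Similarly, the public housing admission-denial rates reduce to simple division over the reported counts:

```python
drug_related_rate = 100 * 330 / 11_538   # ~2.86 -> "about 2.9 percent" (6 PHAs)
criminal_rate = 100 * 1_081 / 17_921     # ~6.03 -> the 6 percent upper bound
```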
As with the other outcomes, the PHAs varied in the extent to which they reported that applicants were denied admission into public housing for reasons of drug-related criminal activities, and the majority of PHAs that provided data for 2003 reported that their activities related to actions and denials in 2003 were similar to the numbers in the prior 3 years. As shown in table 19, 14 of the 17 PHA respondents provided some (although mostly incomplete) data on actions taken on applications for the HCV program. Nine of the 14 PHAs provided data on the number of denials of admission into the program for reasons of drug-related criminal activities or criminal activity. Of the 2 PHAs that provided data on the number of denials for reasons of drug-related criminal activity, 1 PHA reported no denials, and the other PHA reported 10 denials out of 1,483 actions taken on applications, or 0.7 percent. Seven PHAs provided data on the number of denials for reasons of criminal activity. Among these 7 PHAs, there were a total of 20,513 reported actions taken on applicants. Of these, 303 were denied admission for reasons of criminal activities (or about 1.5 percent). On the basis of our assumption that admission denials for reasons of drug-related criminal activity are most likely to be either a subset of or equivalent to the admission denials for reasons of criminal activities, we estimate that the maximum rate of admission denials for reasons of drug-related criminal activity is 1.5 percent. We could not provide reliable estimates for the remaining 5 PHAs that reported incomplete data. Our review of limited data and interviews with those involved in federally assisted housing suggest that a number of factors can contribute to the relatively low percentages of denials being reported for reasons of drug-related criminal activities.
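The HCV admission-denial rates can be recomputed the same way from the counts in the text:

```python
drug_related_rate = 100 * 10 / 1_483    # ~0.67 -> "0.7 percent" (1 PHA)
criminal_rate = 100 * 303 / 20_513      # ~1.48 -> "about 1.5 percent" (7 PHAs)
```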
As noted in a HUD baseline study, variation among PHAs in conducting HCV criminal background checks could legitimately result in an applicant being barred by one PHA who would otherwise be admitted by another PHA. In addition, a HUD official suggested that the percentages of denials that were reported to us by selected PHAs can be influenced by whether (1) the PHAs place drug users at the bottom of their waiting lists, (2) PHAs differ in the treatment of applicants if a household member rather than the applicant is the subject of the drug-related criminal activity, and (3) local courts presiding over eviction proceedings view the PHAs as the housing provider of last resort. In the last instance, the PHA’s decision to terminate a lease for reasons of drug-related criminal activity may not be upheld. Moreover, comments made during interviews with selected officials on matters related to housing were consistent with our analysis of HUD data on PHA denials for criminal activity and with the relatively low number of denials for drug-related criminal activity provided to us by selected PHAs. Regarding the relatively small number of persons whose housing benefits were reported as terminated or persons denied program participation for reasons of criminal or drug-related criminal activities, a representative from the National Association of Housing and Redevelopment Officials stated that PHAs are not looking to turn away minor offenders (e.g., “the type of people that may have only stolen a candy bar”) but rather hardened criminals.
On the variation in denials of federally assisted housing for drug-related criminal activities, the Project Coordinator for the Re-Entry Policy Council at the Council of State Governments suggested that the barriers to housing ex-drug offenders revolve around the discretion afforded PHAs, and that these barriers can best be dealt with at the local level by making states more aware of the issue, the applicability of the local rules, and the need for building collegial relationships with PHAs to develop options for housing ex-felons. More generally, assessing the impacts of the denial of federal housing benefits on the housing communities or on the individuals and families that have lost benefits was beyond the scope of our review, given the limited data that are available. Officials at HUD reported that they have not studied this issue, and our review of the literature did not return any comprehensive studies of impacts. In our opinion, any full assessment of the impacts of denial of housing benefits to drug offenders would have to consider a wide range of possible impacts, such as improvements in public safety that result from terminating leases of drug offenders; displacement of crime from one area to another with perhaps no overall (or area-wide) improvements in crime reduction; as well as the impacts on individuals and families, to name a few. Any such impact assessments would be complicated by the market conditions (limited quantity and high demand) for federally assisted housing and the variation in PHAs’ policies and practices that would also need to be considered. We requested the PHAs to provide us with data on the race of persons who were denied federally assisted housing for reasons of drug-related criminal activities. Of the 17 PHAs that responded to our request, few provided data by race on (1) the total number of actions taken and (2) those actions that were specifically for drug-related criminal activity. 
Only 4 PHAs provided data by race on public housing terminations, and 3 PHAs provided data by race on public housing admission denials. Four PHAs provided data by race on HCV terminations of assistance. Only 1 PHA provided data by race on HCV admission denials. From these limited data, we were unable to develop reliable estimates of racial differences in the frequency of terminations and denials of admission into federally assisted housing. In some cases, the number reported as terminated for drug-related criminal activities was too small to provide stable estimates, and because of the small numbers, the estimates of racial differences could exhibit large changes with the addition of a few more cases. For example, only 4 PHAs provided data by race on the number of leases terminated for reasons of drug-related criminal activities. In 1 PHA, slightly more than 3 percent of all lease terminations of blacks were for drug-related criminal activities, while almost 6 percent of all lease terminations of whites were for drug-related criminal activities. In this PHA, whites were about one and one-half times more likely than blacks to have their leases terminated for reasons of drug-related criminal activities. In this PHA, 110 whites had leases terminated during 2003, and 6 of these terminations were for drug-related criminal activities. In a second PHA, blacks were three times as likely as whites to have their leases terminated for reasons of drug-related criminal activities, as 18 percent of blacks and 5 percent of whites had leases terminated for reasons of drug-related criminal activities. But in this second PHA, 19 whites had leases terminated, and 1 of these was for drug-related criminal activities. An addition of 2 whites to the number that had leases terminated for drug-related criminal activities would have almost eliminated the racial difference.
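The small-sample instability described above is easy to quantify. Using the second PHA's reported figures, adding two cases to the white count moves that rate most of the way toward the black rate:

```python
def rate(drug_related: int, total: int) -> float:
    """Drug-related share of lease terminations, as a rounded percentage."""
    return round(100 * drug_related / total, 1)

white_rate = rate(1, 19)             # ~5.3 -> the "5 percent" figure in the text
black_rate = 18.0                    # percentage reported directly in the text
shifted_white_rate = rate(1 + 2, 19) # ~15.8 -> the racial gap nearly closes
```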
The Denial of Federal Benefits Program, established under section 5301 of the Anti-Drug Abuse Act of 1988, in general provides federal and state court judges with a sentencing option to deny selected federal benefits to individuals convicted of federal or state offenses for the distribution or possession of controlled substances. The federal benefits that can be denied include grants, contracts, loans, and professional or commercial licenses provided by an agency of the United States. Certain benefits are excluded from denial under this provision of the law; these include benefits such as social security, federally assisted housing, welfare, veterans’ benefits, and benefits for which payments or services are required for eligibility. Federally assisted housing, TANF, and food stamp benefits may be denied to drug offenders under other provisions of federal law. (See app. II for more information on the denial of TANF and food stamp benefits, and see app. IV for more information on the denial of federally assisted housing benefits.) Federal and state court sentencing judges generally have discretion to deny any of the deniable benefits for any length of time up to the periods prescribed by the law, with the exception of the mandatory denial of benefits required for a third drug trafficking conviction. More specifically, depending upon the type of offense and conviction, and the number of prior convictions, the law provides for different periods of ineligibility for which benefits can or must be denied. As the number of convictions for a particular type of drug offense increases, so does the period of ineligibility for which benefits can or must be denied. Table 20 shows these periods. With respect to first-time drug possession convictions, a court may impose certain conditions, such as the completion of an approved drug treatment program, as a requirement for the reinstatement of benefits.
In addition, the sentencing court continues to have the discretion to impose other penalties and conditions apart from section 5301 of the Anti-Drug Abuse Act of 1988. Section 5301 of the Anti-Drug Abuse Act of 1988, as amended, also provides that under certain circumstances, the denial of benefits penalties may be waived or suspended with respect to certain offenders. For example, the denial of benefits penalties are not applicable to individuals who cooperate with or testify for the government in the prosecution of a federal or state offense or are in a government witness protection program. In addition, with respect to individuals convicted of drug possession offenses, the denial of benefits penalties are to be “waived in the case of a person who, if there is a reasonable body of evidence to substantiate such declaration, declares himself to be an addict and submits himself to a long-term treatment program for addiction, or is deemed to be rehabilitated pursuant to rules established by the Secretary of Health and Human Services.” Also, the period of ineligibility for the denial of benefits is to be suspended for individuals who have completed a supervised drug rehabilitation program, have otherwise been rehabilitated, or have made a good faith effort to gain admission to a supervised drug rehabilitation program but have been unable to do so because of inaccessibility or unavailability of such a program or the inability of such individuals to pay for such a program. State and federal sentencing judges generally have discretion to impose denial of federal benefits, under section 5301 of the Anti-Drug Abuse Act of 1988, as a sanction. This sanction can be imposed in combination with other sanctions, and courts have the option of denying all or some of the specified federal benefits and determining the length of the denial period within certain statutorily set ranges. 
When denial of benefits under section 5301 of the Anti-Drug Abuse Act of 1988 is part of a sentence, the sentencing court is to notify the Bureau of Justice Assistance, which maintains a database (the Denial of Federal Benefits Program Clearinghouse) of the names of persons who have been convicted and the benefits that they have been denied. BJA passes this information on to the U.S. General Services Administration (GSA), which maintains the debarment list for all agencies. GSA publishes the names of individuals who are denied benefits in the Lists of Parties Excluded from Federal Procurement or Nonprocurement Programs, commonly known as the Debarment List. The Debarment List contains special codes that indicate whether all or selected benefits have been denied for an individual and the expiration date for the period of denial. Before making an award or conferring a pertinent federal benefit, federal agencies are required to consult the Debarment List to determine if the individual is eligible for benefits. The Department of Justice also has data-sharing agreements with the Department of Education and the Federal Communications Commission. The purpose of these agreements is to provide these agencies with access to information about persons currently denied the federal benefits administered by them. For example, as described in this report, students who are convicted of offenses involving the sale or possession of a controlled substance are ineligible to receive certain federal postsecondary education benefits. In order to ensure that student financial assistance is not awarded to individuals subject to denial of benefits under court orders issued pursuant to section 5301, DOJ and the Department of Education implemented a computer matching program. The Department of Education attempts to identify persons who have applied for federal higher education assistance by matching records from applicants against the BJA database list of persons who have been denied benefits. 
Officials at the Department of Education report that the department has matched only a few records of applicants for federal higher education assistance to the DOJ list of persons denied federal benefits. The individuals whose names appear on the DOJ list may differ from those individuals who self-certify to a drug offense conviction on their applications for federal postsecondary education assistance. (See app. III for more information on this.) The Administrative Office of United States Courts (AOUSC) is responsible for administrative matters for the federal courts. Shortly after the passage of the Anti-Drug Abuse Act of 1988, AOUSC added the Denial of Federal Benefits sentence enhancement to the Pre-Sentence Report Monograph, which provided information to probation officers about the availability of the DFB as a sanction along with its requirements. AOUSC also developed a standard form for federal judges to use in reporting the imposition of the Denial of Federal Benefits sanctions; the form is part of the Judgment and Commitment Order that is completed by the court upon sentencing. The United States Sentencing Commission (USSC) promulgates federal sentencing guidelines and collects data on all persons sentenced pursuant to the federal sentencing guidelines. After the passage of the Anti-Drug Abuse Act of 1988, the USSC prepared a guideline for this sanction and included it in the Sentencing Guidelines Manual. Annually, USSC distributes the Sentencing Guidelines Manual to federal court officials. Bureau of Justice Assistance data show that between 1990 and the second quarter of 2004, 8,298 offenders were reported as having been denied federal benefits by judges who imposed sanctions under the Denial of Federal Benefits Program. About 38 percent (or 3,128) of these offenders were denied benefits in state courts, and about 62 percent (or 5,170) were denied benefits in federal courts.
An average of about 635 persons per year were denied benefits under the program over the 1992 through 2003 period, and the number denied in any given year ranged from about 428 to 833. The number denied a benefit under the program decreased to 428 in 2002 and increased to 595 in 2003. According to BJA data, judges in comparatively few courts used the denial of federal benefits provisions. State court judges in 7 states and federal judges in judicial districts in 26 states were reported to have imposed the sanction. Among state courts, judges in Texas accounted for 39 percent of the state court totals, while judges in Oregon and Rhode Island accounted for 30 percent and 13 percent, respectively. Among the federal courts, judges in judicial districts in Texas accounted for 21 percent of the federal totals, while judges in North Carolina, Mississippi, Georgia, Florida, Nevada, and Kansas accounted for between 8 percent and 15 percent of the totals. Federal judges in each of the remaining 19 states accounted for less than 3 percent of the federal totals. Not all of the 8,298 offenders recorded as having been denied federal benefits between 1990 and 2004 under the program are currently denied benefits. For about 75 percent of these offenders, the period of denial has expired. Officials at BJA report that as of April 2004, they maintained about 2,000 active records of persons currently under a period of denial. Relative to the total number of felony drug convictions, the provisions of the Denial of Federal Benefits (DFB) Program are reportedly used in a relatively small percentage of drug cases. For example, in each of the biennial survey years from 1992 through 2000, there were a minimum of 274,000 and a maximum of 348,000 convictions for drug offenses in state courts, or about 307,000 per year. In federal courts over this same period, there were between 15,000 and 24,000 drug offenders convicted, or about 19,000 per year.
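The BJA shares reported above are internally consistent, as a quick recomputation shows:

```python
total = 8_298                    # offenders denied benefits, 1990 to mid-2004
state, federal = 3_128, 5_170    # court-level split reported by BJA
active = 2_000                   # records still under a period of denial

assert state + federal == total
state_share = 100 * state / total                # ~37.7 -> "about 38 percent"
federal_share = 100 * federal / total            # ~62.3 -> "about 62 percent"
expired_share = 100 * (total - active) / total   # ~75.9 -> "about 75 percent"
```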
As the average annual number of drug defendants in state courts denied benefits under the DFB was 223, the rate of use of the DFB in state courts averaged about 0.07 percent. Among federal drug defendants, the annual average number reported as having received a sanction under the program was about 369, while the average annual number of drug defendants sentenced federally was about 19,000; hence, the percentage of all federal drug defendants receiving a sanction under the program was about 2 percent. Throughout the history of the program, questions have been raised about its apparently limited impacts. In 1992, we reported on the difficulties in denying federal benefits to convicted drug offenders and suggested that there would not be widespread withholding of federal benefits from drug offenders. Officials at BJA also reported that the sanction has not been widely used by judges. In 2001, BJA program managers met with some U.S. attorneys in an attempt to provide them with information about the potential benefits of the program. According to BJA officials, the U.S. attorneys responded that they typically used other statutes for sanctioning and sentencing drug offenders, rather than the sanctions under the Denial of Federal Benefits Program. The benefits that can be denied under the program—federal contracts, grants, loans, and professional or commercial licenses—suggest some reasons for its relatively infrequent use. Persons engaged in federal contracting, for example, are generally engaged in business activities, and such persons compose small percentages of federal defendants sentenced for drug offenses. Hence, relatively few defendants may qualify to use these federal benefits, and therefore relatively few may be denied the benefits. None of the data sources that we reviewed provided reliable data on the race and ethnicity of persons denied federal benefits under the Denial of Federal Benefits Program.

In addition to the contact named above, William J. Sabol, Clarence Tull, Brian Sklar, DuEwa Kamara, Geoffrey Hamilton, David Makoto Hudson, Michele Fejfar, David Alexander, Amy Bernstein, Anne Laffoon, Julian King, and Andrea P. Smith made key contributions to this report.
Several provisions of federal law allow for or require certain federal benefits to be denied to individuals convicted of drug offenses in federal or state courts. These benefits include Temporary Assistance for Needy Families (TANF), food stamps, federally assisted housing, postsecondary education assistance, and some federal contracts and licenses. Given the sizable population of drug offenders in the United States, the number and the impacts of federal denial of benefit provisions may be particularly important if the operations of these provisions work at cross purposes with recent federal initiatives intended to ease prisoner reentry and foster prisoner reintegration into society. GAO analyzed (1) for selected years, the number and percentage of drug offenders that were estimated to be denied federal postsecondary education and federally assisted housing benefits and federal grants, contracts, and licenses and (2) the factors affecting whether drug offenders would have been eligible to receive TANF and food stamp benefits, but for their drug offense convictions, and for a recent year, the percentage of drug offenders released who would have been eligible to receive these benefits. Several agencies reviewed a draft of this report, and we incorporated the technical comments that some provided into the report where appropriate. For the years for which it obtained data, GAO estimates that relatively small percentages of applicants but thousands of persons were denied postsecondary education benefits, federally assisted housing, or selected licenses and contracts as a result of federal laws that provide for denying benefits to drug offenders. During academic year 2003-2004, about 41,000 applicants (or 0.3 percent of all applicants) were disqualified from receiving postsecondary education loans and grants because of drug convictions. 
For 2003, 13 of the largest public housing agencies in the nation reported that less than 6 percent of 9,249 lease terminations that occurred in these agencies were for reasons of drug-related criminal activities--such as illegal distribution or use of a controlled substance--and 15 large public housing agencies reported that about 5 percent of 29,459 applications for admission were denied for these reasons. From 1990 through the second quarter of 2004, judges in federal and state courts were reported to have imposed sanctions to deny benefits such as federal licenses, grants, and contracts to about 600 convicted drug offenders per year. Various factors affect which convicted drug felons are eligible to receive TANF or food stamps. This is because state of residence, income, and family situation all play a role in determining eligibility. Federal law mandates that convicted drug felons face a lifetime ban on receipt of TANF and food stamps unless states pass laws to exempt some or all convicted drug felons in their state from the ban. At the time of GAO's review, 32 states had laws exempting some or all convicted drug felons from the ban on TANF, and 35 states had laws modifying the federal ban on food stamps. Because of the eligibility requirements associated with receiving these benefits, only those convicted drug felons who, but for their conviction, would have been eligible to receive the benefits could be affected by the federal bans. For example, TANF eligibility criteria include requirements that an applicant have custodial care of a child and that income be below state-determined eligibility thresholds. Available data for 14 of 18 states that fully implemented the ban on TANF indicate that about 15 percent of drug offenders released from prison in 2001 met key eligibility requirements and constitute the pool of potentially affected drug felons.
Proportionally more female drug felons than males may be affected by the ban, as about 27 percent of female and 15 percent of male drug offenders released from prison in 2001 could be affected.
|
Since the advent of modern warfare, the presence of mines and minefields has hampered the freedom of movement of military forces. The origins of mine warfare may be traced back to crude explosive devices used during the Civil War. Since that time, the use of land mines has increased to a point where there are now over 750 types of land mines, ranging in sophistication from simple pressure-triggered explosives to more sophisticated devices that use advanced sensors. It is estimated that there are about 127 million land mines buried in 55 countries. Land mines are considered to be a valuable military asset since, by slowing, channeling, and possibly killing opponents, they multiply the combat impact of defending forces. Their attractiveness to smaller military and paramilitary organizations, such as those in the Third World, is further enhanced because they do not require complex logistics support and are readily available and inexpensive. Virtually every combatant can make effective mines, and they will continue to be a viable weapon for the future. U.S. forces must be prepared to operate in a mined environment across the spectrum of military operations, from peacetime activities to large-scale combat operations. Detection is a key component of countermine efforts. In combat operations, the countermine mission revolves around speed and mobility. Mines hinder maneuver commanders’ ability to accomplish their missions because unit commanders need to know where mines are located so they can avoid or neutralize them. In peacekeeping operations, mines are used against U.S. forces to slow or stop daily operations. This gives insurgents a way to control traffic flow of defense forces and affect the morale of both the military and civilian population. Since World War II, the U.S. military’s primary land mine detection tool has been the hand-held metal detector used in conjunction with a manual probe. 
This method is slow, labor-intensive, and dangerous because the operator is in close proximity to the explosive. The Army has also recently acquired a small number of vehicle-based metal detectors from South Africa to be used in route clearing operations and to be issued to units, as needed, on a contingency basis. Metal detectors are also sensitive to trace metal elements and debris, which are found in most soils. This limitation leads to a high level of false alarms since operators often cannot distinguish between a metal fragment and a mine. False alarms translate into increased workload and time because each detection must be treated as if it were an explosive. The wide use of mines with little to no metal content also presents a significant problem for metal detectors. For example, according to DOD intelligence reports, about 75 percent of the land mines in Bosnia are low-metallic, and some former Yugoslav mines were known to have been manufactured with no metal at all. In fact, the Army has stated that the inability to effectively detect low-metal and nonmetallic mines remains a major operational deficiency for U.S. forces. Given the limitations of the metal detector, DOD has been conducting research and development since World War II to improve its land mine detection capability. For example, during the 1940s the United States began research to develop a detector capable of finding nonmetallic mines. Since then, DOD has embarked on a number of unsuccessful efforts to develop a nonmetallic detector and to field a vehicle-based land mine detector. DOD now has new programs to develop a vehicle-based detector and an improved hand-held detector. DOD expects to field these new systems, both with nonmetallic capability, within the next 3 years. Airborne detectors are also being developed by both the Army and the Marine Corps for reconnaissance missions to locate minefields.
Countermine research and development, which includes land mine detection, is funded by a number of DOD organizations and coordinated through a newly established Unexploded Ordnance Center of Excellence. The Army is designated as the lead agency for DOD’s countermine research, with most of its detection research funding being managed by the Night Vision and Electronic Sensors Directorate (NVESD) and the Project Manager for Mines, Countermine and Demolitions. The Marine Corps and the Navy are also supporting a limited number of land mine detection research efforts. Additionally, the Defense Advanced Research Projects Agency (DARPA) has been involved with a number of land mine detection programs throughout the years. In fiscal years 1998 through 2000, DOD funded over $360 million in countermine-related research and development projects, of which approximately $160 million was aimed specifically toward land mine detection. DOD sponsored an additional $47 million in research during this period for unexploded ordnance detection (which includes land mines) in support of other DOD missions such as humanitarian demining and environmental cleanup. Because of the basic nature of detection, these other efforts indirectly supported the countermine mission. Overall, DOD funding levels for countermine research have been sporadic over the years. Major countermine research initiatives and fieldings of new detectors have coincided with U.S. military actions, such as the Korean War, the Vietnam War, Operation Desert Storm, and the recent peacekeeping operations in the Balkans. Following each influx of countermine research funding has been a corresponding lull in activity. A countermine program assessment conducted for the Army in 1993 concluded that whereas mine developments have benefited from the infusion of leap-ahead technologies, countermine tools have been essentially product-improved counterparts of World War II ideas.
However, according to DOD, countermine development is a slow process because of the technological challenges inherent to land mine detection. Not only must a detector be able to find mines quickly and safely through a large variety of soils and at varying depths in battlefield conditions with clutter and even countermeasures, but it must also be able to discriminate between mines (which vary considerably in size, shape, and component materials) and other buried objects. DOD’s ability to develop meaningful land mine detection solutions is limited by the absence of an effective strategy to guide its research and development program. DOD maintains frequent contact with the external research community to constantly learn about new detection approaches and technologies. However, it has not developed a comprehensive set of mission needs to guide its research programs and does not systematically evaluate the broad range of potential technologies that could address those mission needs. In addition, its resources for conducting critical basic research for addressing fundamental science-based questions are threatened. Lastly, because DOD’s testing plans do not require adequate testing of land mine detectors in development, the extent of performance limitations in the variety of operating conditions under which they are expected to be used will not be fully understood. DOD has not developed a comprehensive and specific set of mission-based criteria that reflect the needs of U.S. forces, upon which to base its investments in new technologies in land mine detection. Although DOD’s overall acquisition process sets out a needs-based framework to conduct research and development, DOD has not developed a complete statement of needs at the early stages of research when technologies are first investigated and selected.
The process calls for an evolutionary definition of needs, meaning that statements of needs start in very general terms and become increasingly specific as programs mature. Early stages of research are generated from and guided by general statements of needs supplemented through collaboration between the combat users and the research communities. In the case of land mine detection, the Army stated a general need of having its forces be able to operate freely in a mined environment. This need has received a broad definition, as “capabilities for rapid, remote or standoff surveillance, reconnaissance, detection, and neutralization of mines.” Further specification of the need is left to representatives of the user community and researchers to determine. It is only with respect to specific systems at later stages of the acquisition cycle that more formalized and specific requirements are established to guide decisions about further funding. Although we found that a comprehensive set of specific measurable criteria representing mission needs had not been developed, we did find some specific criteria in use to guide research efforts, such as rates of advance and standoff distances. However, a number of these criteria were established by DOD to reflect incremental improvements over the current capabilities of technologies rather than to reflect the optimal needs of combat engineers. For example, the Army was using performance goals to guide its forward-looking mine detection sensors program. The objective of this program was to investigate and develop mine detection technologies to increase standoff and speed for route clearance missions beyond current capabilities. Performance goals included developing a system with a standoff of greater than 20 meters with a rate of advance of 20 kilometers per hour. However, these goals were primarily driven by the capabilities and limitations of the systems being considered.
According to an Army researcher, they were based on what existing technologies could achieve in a limited time period (3 years) and not on what the combat engineers would ultimately need. During our assessment of technologies, which is described in the next section of this report, we found that the standoff desired by combat engineers was almost 50 meters for route clearance missions with a rate of advance of 40 kilometers per hour. One barrier to DOD’s developing a comprehensive set of mission needs is large gaps in information about target signature characteristics and environmental conditions. For example, significant information gaps exist about the rate at which land mines leak explosive vapors and the environmental pathways that the vapors take once they are released. Also, knowledge gaps about soil characteristics in future battlefields limit DOD’s ability to fully specify mission needs and knowledgeably select among competing technologies. They also reduce the pace of technological innovation by hampering researchers from predicting how their devices will function. DOD is currently funding research to answer several important questions in these areas. But, as discussed below, continued DOD funding is threatened. Just as DOD has failed to adequately specify countermine mission needs for assessing promising technologies, we found that it had not systematically assessed the strengths and the limitations of underlying technologies to meet mission needs. DOD employs a number of mechanisms to obtain ideas for promising land mine detection solutions. These include attending and sponsoring technical conferences, arranging informal system demonstrations, convening workshops, and publishing formal solicitations for research proposals. However, DOD does not systematically evaluate the merits of the wide variety of underlying technologies against a comprehensive set of mission needs to identify the most promising candidates for a focused and sustained research program. 
Instead, it generally evaluates the merits of specific systems proposed by developers against time-driven requirements of its research programs. One way DOD identifies land mine detection ideas is through sponsoring and attending international technical conferences on land mine detection technologies. For example, it sponsors an annual conference on unexploded ordnance detection and clearance, at which countermine related detection is a major focus. Additionally, DOD research officials have chaired mine detection conferences within annual sensing technology symposia of the International Society for Optical Engineering (SPIE) since 1995. The most recent SPIE conference on mine detection, held in April 2000, included over 130 technical presentations by researchers from DOD and other organizations worldwide. SPIE provides DOD land mine research officials an opportunity to network with researchers working in different areas of sensing technologies. DOD also identifies new technologies through reviewing researchers’ ideas outside of the formal solicitation process by occasionally allowing researchers to demonstrate their ideas at DOD facilities. Technical workshops are another mechanism used by DOD to identify new ideas. For example, DOD’s Unexploded Ordnance Center of Excellence held a workshop, in part, to identify new land mine detection technologies in 1998. This workshop, largely attended by DOD staff and contractors, explored technological approaches that were not receiving a lot of attention. The report of the workshop pointed out several potential paths for future investment for land mine detection. Of all the mechanisms DOD uses to identify new technologies, issuing announcements in the Commerce Business Daily is its principal means for communicating its research needs to the outside research community and receiving ideas and approaches to improve land mine detection capabilities. 
In our interviews with non-government researchers, we found that they use DOD’s announcements as their principal means for familiarizing themselves with DOD’s needs. In connection with our efforts to identify candidate technologies for land mine detection, we searched databases, such as the Commerce Business Daily, containing DOD announcements. We found that the Army placed 20 of the 25 announcements we identified from 1997 through 2000. NVESD accounted for 17 of the solicitations. “Countermine research and development detection funding is concentrated on four primary technologies…. There has been increasing emphasis on radar and active electromagnetics as the technologies showing the greatest short term promise for the reliable detection of land mines” (emphasis added). At NVESD, which has the largest share of countermine detection research, programs are generally time-limited. As a result, evaluations of proposals are largely based on the maturity of the idea. An example is the Future Combat Systems (FCS) Mine Detection and Neutralization program, which is funded at about $21 million over 3 years. This program is designed to have a system ready for testing by fiscal year 2002, only 3 years after the program started. This pace is necessary to meet the Army’s overall goals for fielding FCS. NVESD officials told us that this time constraint means they are more apt to fund the more mature ideas. This time constraint could therefore result in not selecting potentially promising technologies that might involve more risk. Although NVESD officials stated that they are receptive to less developed ideas that show promise, the requirements of the program may make this difficult to do. We found that DOD did not supplement its frequent announcements with periodic reviews of the underlying technologies that the responses were based on.
Such a review would evaluate their future prospects and could suggest a long-term sustained research program in a technological area that required several thrusts, whereas the individual project proposals might appear to have doubtful value in themselves. In a similar vein, in 1998 a Defense Science Board task force that evaluated DOD’s efforts in a closely related area of research and development also recommended a two-track approach for research and development. The Board found that, “there has been too little attention given to some techniques which may provide capabilities important for particular sites” and recommended that DOD institute a program parallel to the “baseline” program that “would involve an aggressive research and development effort … to explore some avenues which have received too little attention in the past.” Numerous questions about the physics-based capabilities of the various detection technologies make it difficult, if not impossible, to evaluate them against mission needs at the present time. Although DOD has invested funds in basic research to address some of its questions, its efforts are expected to end after fiscal year 2001. In addition to providing support to technology evaluations, a sustained basic research program is needed to support DOD’s ongoing efforts to develop better systems. Independent evaluations, as well as our assessment of candidate land mine detection technologies, which is presented in the next section of this report, have revealed many uncertainties about the strengths and limitations of each of the applicable technologies with respect to addressing countermine mission needs. In addition, DOD has noted a number of fundamental science-based questions regarding detection technologies. For example, 3 years ago the Center of Excellence, through a series of workshops, identified 81 broad research needs critical to improving detection capabilities.
Examples of research needs included an improved understanding of the impact of environmental conditions on many of the technologies examined and better characterization of clutter, which contributes to the problem of false alarms currently plaguing a number of technologies. Some of the needs have been addressed since the workshops. For example, the Center sponsored follow-on workshops and independent studies of radar and metal detectors to address research questions specific to these technologies. However, DOD officials told us that the broad set of needs has not been systematically addressed and that many questions still remain. Also, over the past 3 years, DOD has invested about $4 million annually in basic research directed at answering fundamental science-based questions supporting land mine detection. This work has been managed by the Army Research Office, with funding provided by both the Army and DOD through its Multidisciplinary University Research Initiative. However, this research program is expected to end after fiscal year 2001. According to DOD, this basic research has been valuable to its land mine detection program. For example, the 1999 Center of Excellence annual report states that the basic research program has improved physics-based modeling so that it is now possible to examine realistic problems that include soil interactions with buried targets. The results of this modeling have yielded insights into limitations of sensor performance in various environments. The report concludes that this modeling work needs to be continued and expanded to systematically study soil effects. In fact, the report recommends continued investment in basic research to increase understanding of phenomenology associated with detection technologies, stating that the greatest value of basic research comes from a sustained effort. DOD’s policy is that systems be tested under those realistic conditions that most stress them. 
According to DOD, this testing is to demonstrate that all technical risk areas have been identified and reduced. However, because of questions about the physics-based strengths and weaknesses of land mine detection technologies, there is uncertainty about how well the detectors currently in development will function in the various environmental conditions expected in countermine operations. Some of these questions could be answered through thorough developmental testing. However, DOD’s testing plans do not adequately subject its detectors to the multitude of conditions necessary to address these performance uncertainties. We reviewed the Army’s testing plans for two land mine detection systems currently in development to determine whether the test protocols were designed on a framework of identifying and minimizing technical risks stemming from the uncertainties detailed above. These are the Handheld Stand-off Mine Detection System (HSTAMIDS) hand-held detector and the Ground Stand-off Mine Detection System (GSTAMIDS) vehicle-based detector. We found that the testing plans were not designed around the breadth of environmental conditions expected for those systems or around anticipated limitations and uncertainties. Rather, testing is to be conducted at only a limited number of locations and under ambient climatic conditions. As such, knowledge about the performance of these detectors in the variety of soil types and weather conditions expected in worldwide military operations is likely to be limited. For example, the performance of ground penetrating radar, a primary sensor in both the HSTAMIDS and the GSTAMIDS, is questionable in saturated soils, such as might occur after a heavy rain. However, neither the HSTAMIDS nor GSTAMIDS testing plans specifically call for testing in wet conditions. The only way this condition would be tested is if there is heavy rain on or just before the days that testing is to occur.
As such, knowledge about the performance of these detectors in a variety of conditions is likely to be limited. Incomplete knowledge of the properties of candidate land mine detection technologies makes it difficult to assess whether DOD is investing in the most promising technologies to address countermine detection missions. Because DOD had not performed a systematic assessment of potentially applicable technologies against military countermine mission needs, we performed our own evaluation. Through a broad and systematic review of technological candidates, we identified nine technologies with potential applicability, five of which DOD is currently exploring. However, insufficient information about these nine technologies prevented us from definitively concluding that any could address any of the missions. Additionally, because of these uncertainties, we could not conclude whether a “sensor fusion” approach involving a combination of two or more of the technologies would yield an adequate solution. We conducted a broad search for potential technological candidates for solutions to the countermine problem, and then evaluated the candidates against a set of mission-based criteria to determine which candidates were promising for further research. A more detailed description of our methodology is presented in appendix I. For criteria, we identified operational needs for each of five different types of critical countermine missions: (1) breaching, (2) route clearance, (3) area clearance, (4) tactical reconnaissance, and (5) reconnaissance supporting stability and support operations during peacetime. A more detailed description of these missions is presented in appendix II. We then developed a set of technical criteria to specifically define detection requirements for each mission. The criteria we developed were based on target parameters, operational parameters, and environmental parameters.
Target parameters describe the physical characteristics of land mines and the methods by which they are emplaced. These include such characteristics as land mine sizes and shapes, metallic content, explosive content, burial depths, and the length of time mines have been buried. Operational parameters describe the operational needs of the military as they relate to countermine operations involving mine detection. These factors include speed of advance, detection distance from the mine (called stand-off), and the level of precision in identifying the exact position of hidden mines. Target and operational parameters can vary among the five types of missions. Environmental parameters, unlike target and operational parameters, do not vary based on the type of mission. Rather, environmental parameters are site-specific. They are natural and man-made conditions in and around the battlefield that affect mine detection. These parameters cover a wide array of atmospheric, surface, and sub-surface environmental conditions, such as air temperature, dust or fog obscuration, surface snow, varying soil types, and post-blast explosive residue. A more detailed description of the criteria used in our evaluation is presented in appendix II. Our search yielded 19 technological candidates, which span a wide variety of different physical principles and are shown in figure 1. As shown in figure 1, the majority (15) of the technologies use energy from the electromagnetic (EM) spectrum, either to detect emissions from the mine or to project energy at the mine and detect a reflection. The energies used in these technologies span the entire EM spectrum, from radio waves (characterized by long wavelengths/low frequencies) to gamma rays (short wavelengths/high frequencies).
Of the remaining four technologies not directly utilizing EM energy, two (biosensors and trace vapor detectors) operate by using a chemical or biological reaction to detect explosive vapor that is emitted from mines into the surrounding soil or the air directly above the ground. Another one is based on sending neutrons toward the target. The last technology works by sending acoustic or seismic energy toward a target and receiving an acoustic or seismic reflection. A more detailed discussion of these 19 technologies is included in appendix III. When we evaluated the 19 technologies against the operational parameters, we found that 10 had one or more physics-based limitations that would prevent them from achieving any of the five countermine missions by themselves (see table 1). As can be seen from table 1, standoff and speed are the most challenging attributes of a detection system that would meet DOD’s countermine mission needs. Nine technologies failed to meet the standoff criterion, and four failed to meet the speed criterion for any of the five missions. We judged that the remaining nine technologies were “potentially promising” because we did not conclusively identify any definitive operational limitations to preclude their use in one or more countermine missions. For all of these nine technologies, our ability to determine their operational capabilities was reduced by significant uncertainty as to their capabilities. Some, such as ground penetrating radar and acoustic technologies, have been studied for many years. Yet continuing improvements to the sensors and the critical mathematical equations that interpret the raw data coming from the sensors made it difficult for us to predict the absolute limits of their capabilities. Our inability to draw a conclusion about these technologies is supported by reports from the Institute for Defense Analyses and other organizations that have found similar uncertainty about their prospects. 
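The screening described above can be sketched as a simple filter: each candidate technology's best-case capabilities are compared against each mission's operational thresholds, and a candidate is eliminated for a mission if it fails any criterion. The sketch below is illustrative only and is not from the report: the route clearance thresholds echo the roughly 50-meter standoff and 40-kilometers-per-hour rate of advance cited earlier as combat engineers' desired capability, while the breaching thresholds and all of the per-technology capability figures are hypothetical placeholders, not GAO or DOD data.

```python
# Illustrative screening of candidate detection technologies against
# mission-based operational criteria. All capability figures below are
# hypothetical placeholders for the purpose of the sketch.

# Minimum standoff (meters) and minimum rate of advance (km/h) per mission.
# Route clearance values reflect figures cited in the report; breaching
# values are assumptions.
MISSION_CRITERIA = {
    "route clearance": {"standoff_m": 50, "speed_kmh": 40},
    "breaching": {"standoff_m": 20, "speed_kmh": 10},
}

# Hypothetical best-case capabilities for a few candidate technologies.
CANDIDATES = {
    "ground penetrating radar": {"standoff_m": 55, "speed_kmh": 45},
    "metal detector": {"standoff_m": 0.3, "speed_kmh": 5},
    "trace vapor detector": {"standoff_m": 1, "speed_kmh": 2},
}

def screen(candidates, criteria):
    """For each mission, return the technologies with no identified
    physics-based limitation against that mission's criteria."""
    results = {}
    for mission, req in criteria.items():
        results[mission] = [
            name for name, cap in candidates.items()
            if cap["standoff_m"] >= req["standoff_m"]
            and cap["speed_kmh"] >= req["speed_kmh"]
        ]
    return results

print(screen(CANDIDATES, MISSION_CRITERIA))
```

A candidate surviving this filter is only "potentially promising": as the report notes, passing the known operational thresholds does not resolve the substantial uncertainty about a technology's true performance limits.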
The critical issue for radar is whether it will ever be capable of doing a good enough job discriminating between targets and natural clutter to allow an acceptable rate of advance. The issue of clutter is the fundamental problem for many sensor approaches. Our uncertainty about three technologies--terahertz imaging, x-ray fluorescence, and electromagnetic radiography--was different because their capabilities were not as well studied. As a result, there was not enough information for us to determine whether they could meet mission-based criteria. In addition, DOD officials told us that they believe that two of them (terahertz imaging and x-ray fluorescence) have fundamental limitations that rule them out for countermine missions. They claimed that terahertz energy is unable to penetrate deep enough through the soil and that x-ray fluorescence has inadequate standoff. However, we were not able to resolve these issues. We believe that the lack of consensus about the capabilities of most of the nine technologies is due, in part, to a basic lack of knowledge about the upper limits of their capabilities. The only way to determine whether these technologies can be employed in a detector that meets countermine mission needs is through a systematic research program. DOD is currently investing in five of the nine technologies (see table 2), and it recently stopped funding a project in one of them (passive millimeter wave). In our review of the ability of the nine technologies to operate in different environmental conditions, we could not, with certainty, identify absolute limitations on the ability of four to operate in expected environmental conditions. However, all nine have uncertainties about the range of environmental conditions in which they can adequately perform. The most significant uncertainties relate to performance in various surface and subsurface conditions, such as water-saturated soil and differing soil types.
In most cases, these uncertainties have not been adequately studied. Examples of environmental limitations and uncertainties for the nine technologies are presented in table 3. The uncertainties about the various detection technologies also prevented us from determining if the technologies could be combined to meet mission needs. While most of the 19 technologies cannot meet operational and environmental mission needs, in theory a combination of different sensors might solve the countermine problem. This type of arrangement, known as sensor fusion, combines different approaches to compensate for their individual limitations. Canada and the Army are developing systems that use some form of sensor fusion. Canada’s Defense Research Establishment in Suffield, Alberta, has produced a multisensor land mine detector that employs thermal neutron activation (TNA), a type of neutron activation analysis, as a confirmation detector in a system that also employs a metal detector, infrared (IR), and ground penetrating radar to scan for mines. The TNA sensor is used to confirm or reject suspect targets that the three scanning sensors detect. The Army is developing a detector (HSTAMIDS) that uses sensor fusion to take advantage of the strengths of both metal detector and radar approaches. In this configuration, the radar is used to improve the metal detector’s performance with mines that employ small amounts of metal. However, neither of these systems (Canada’s and the Army’s) will meet the countermine mission needs stated previously because their component sensors are limited. Any detection system utilizing sensor fusion would somehow need to overcome limitations, such as standoff and speed, in underlying technologies. As pointed out previously, the capability of the identified technologies to meet mission needs is uncertain. Another consideration in developing a sensor fusion solution is that it would require significant advances in signal processing.
Given the lack of knowledge about the strengths and limitations of the various detection technologies, it is unclear whether DOD's research investments are in those technologies that, either individually or in combination, have the greatest chance of leading to solutions that address the U.S. military's countermine mission needs. DOD's strategy of working toward incremental gains may produce detectors better than current ones. However, without a systematic and comprehensive evaluation of potential technologies based on a complete set of mission-based needs, DOD does not know if it has invested its funds wisely to address the needs of the military. DOD's testing plans for its land mine detection systems in development do not provide assurance that these systems will perform adequately under most expected conditions. Demarcating the acceptable operating conditions of a system is a critical part of research and development. This is important not only for determining whether developmental systems will meet mission needs but also for defining operational limitations so that users can make informed decisions about their use. Therefore, systems should be tested under the conditions that most stress them. Given the numerous environmental and climatic conditions that can be expected to affect the performance of any land mine detector, a robust program of developmental testing is essential to fully understand the strengths and limitations in performance under realistic conditions. Failing to test under a plan specifically designed around the expected environmental and climatic conditions of use, as well as the anticipated limitations of the technologies, could increase the risk of fielding the system.
To improve the Department's ability to identify and pursue the most promising technologies for land mine detection, we recommend that the Secretary of Defense (1) direct the establishment of a long-range research program to periodically evaluate all applicable land mine detection technologies against a complete set of mission-based criteria and (2) provide a sustained level of basic research to sufficiently address scientific uncertainties. Mission-based criteria could include target signatures, operational requirements, and expected environmental conditions. We also recommend that the Secretary of Defense require the services to test land mine detection systems in development under conditions that better reflect the environments in which the systems will likely operate. DOD provided written comments on a draft of this report (see app. IV). DOD concurred with each of our three recommendations and augmented its concurrence with additional comments. DOD's comments describe and illustrate the lack of a focused and systematic approach underlying its research programs for land mine detectors. It is not clear from DOD's response what measures, if any, it plans to take to implement our recommendations. In responding to our first recommendation, DOD states that the Army pursues a systematic research, development, and acquisition program to address land mine detection needs. However, we found that its approach lacked elements critical to success, such as the use of a comprehensive set of mission-based criteria and a systematic evaluation of the capability of competing alternative technologies to address those criteria. In fact, the Army Science Board study cited by DOD in its comments also recommended that "operational needs and priorities need to be clearly thought through and quantified." Nothing in DOD's comments is directed toward bridging these gaps.
Therefore, we continue to believe that the changes that we have recommended are required. Regarding our second recommendation, DOD describes the benefits provided by its current basic research program, but does not commit to continuing funding for basic research for land mine detection after this fiscal year. As we discuss in this report, we believe it is extremely important for DOD to continue with a sustained program of basic research to support its land mine detection program given the extent of the uncertainties surrounding the various technologies. This point was also made by the Army Science Board panel. In response to our third recommendation, DOD states that the testing plans we reviewed were not detailed enough to allow us to reach our conclusions, and it describes certain activities that it is engaged in to incorporate realistic environmental conditions into its testing programs for HSTAMIDS and GSTAMIDS. However, we believe that the described activities further illustrate the lack of a systematic strategy to guide testing during product development. DOD acknowledged the threat to the performance of metal detectors from soils that are rich in iron oxide and pointed out that it is seeking to identify a “suitable site to test the HSTAMIDS system in unique soil environments such as laterite.” We feel that this is an important step in the development of this system. But we believe that this step, along with tests in saturated soils and snowy conditions, should have been taken much earlier, before a large commitment had been made to this system. Testing programs should also be driven by a systematic mission-based evaluation framework. Such an approach should delineate at the earliest stages of development the expected environmental operating conditions based on mission needs. An analysis should then be made to identify for testing those conditions that pose substantial challenges or uncertainties for detector performance. 
Without such a framework, there is a risk that uncertainties about the performance of these systems will remain after they have been fielded and that significant testing will effectively be conducted by users rather than by testers. We are sending a copy of this report to the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Joseph W. Westphal, Acting Secretary of the Army; the Honorable Robert B. Pirie, Jr., Acting Secretary of the Navy; General James L. Jones, Commandant of the Marine Corps; and other interested congressional committees and parties. We will also make copies available to others upon request. Please contact me at (202) 512-2700 if you or your staff have any questions concerning this report. Major contributors to this report were Kwai-Cheung Chan, Dan Engelberg, Cary Russell, and John Oppenheim. To determine whether the Department of Defense (DOD) employs an effective strategy for identifying the most promising land mine detection technologies, we reviewed literature related to research program design and met with experts in this area. We interviewed officials from the Army, the Navy, the Marine Corps, and the Defense Advanced Research Projects Agency (DARPA) responsible for running land mine detection research programs. We also reviewed DOD policy and doctrine related to this area, including the Defense Technology Area Plan, the Army Science and Technology Master Plan, and Countermine Modernization Plans. To determine whether DOD is investing in the most promising technologies to fully address mission needs, we evaluated the set of potential land mine detection technologies identified through a systematic search against a set of criteria derived from mission needs. We first designed a framework for evaluating potential technologies. This framework assisted in identifying the most promising technologies and research gaps for further investigation.
Through our discussions with DOD, we found that such a framework had not previously been created. Because our framework was mission directed, we identified a set of critical countermine missions that involve detecting land mines by systematically interviewing Army and Marine Corps combat engineers to determine how countermine activities fit into a variety of combat scenarios and by reviewing Army and Marine Corps doctrine that discusses mine threats to U.S. forces and corresponding countermine tactics. Next, through a review of documents and discussions with Army and Marine Corps combat engineers, we identified technical criteria that define detection requirements for each mission. Officials representing the two organizations responsible for combat engineer requirements, the Army Engineer School and the Marine Corps Combat Development Command, reviewed and agreed with the set of criteria we developed. The critical missions and the set of criteria we developed are discussed in appendix II. We then identified conventional and alternative technologies that could have value in performing these land mine detection missions. We distinguished between technologies and systems: technologies are approaches by which principles of physics are exploited to achieve tasks, while systems are implementations of technologies. By developing a methodology based on identifying and characterizing technologies, rather than systems, we sought to go beyond the strengths and limitations of current devices and thereby provide information on which to base a future-oriented research program. We identified candidate technologies in three ways. One way was to review literature on land mine detection and interview researchers and other experts in the land mine detection field. Another way was to interview experts in related fields, such as geophysics and civil engineering, that involve similar activities (i.e., looking for hidden subsurface objects).
Our goal was to determine whether those fields use any tools that DOD has not explored. The final way was to review proposals that had been submitted to DOD in response to recent solicitations for funding. The technologies we identified are presented in appendix III. We evaluated each of the identified technologies against the set of mission criteria to determine which were promising for land mine detection. We identified "potentially promising" technologies by eliminating those with limitations that would preclude their meeting mission goals. In performing this evaluation, we attended conferences and workshops, reviewed published and unpublished technical literature, interviewed developers of land mine detection systems, and contracted with an expert in the field of land mine detection technologies to review our conclusions. We also obtained comments from technical experts from the Army. Finally, we determined which of the "potentially promising" technologies DOD was exploring by reviewing agency documents and interviewing DOD officials. We performed our work from November 1999 to February 2001 in accordance with generally accepted government auditing standards. Using our methodology, we identified land mine detection requirements. The five critical countermine missions that involve land mine detection are (1) breaching, (2) route clearance, (3) area clearance, (4) tactical reconnaissance, and (5) stability and support operations (SASO) reconnaissance. Breaching is the rapid creation of safe paths through a minefield to project combat forces to the other side. This mission is usually conducted while the force is under enemy fire. Route clearance is the detection and removal of mines along pre-existing roads and trails to allow for the passage of logistics and support forces. Area clearance is the detection and removal of mines in a designated area of operations to permit use by military forces.
Tactical reconnaissance is performed to identify mine threats just prior to and throughout combat operations. SASO reconnaissance is used to assist in making decisions about where to locate forces and for planning area clearance operations. A principal difference between tactical and SASO reconnaissance is the time required for performing the mission. Because SASO reconnaissance involves peacetime operations, the speed at which it is conducted is not as critical as that for tactical reconnaissance. We developed a set of technical criteria to specifically define detection requirements for each mission and grouped the criteria into target parameters, operational parameters, and environmental parameters. Target parameters describe the physical characteristics of land mines and the way they are emplaced. Given that there are over 750 types of land mines available worldwide, the target characteristics vary considerably. The parameters we identified are presented in table 4. Operational parameters describe the operational needs of the military as they relate to countermine operations involving mine detection. Our set of operational parameters is also presented in table 4. One critical operational criterion for a mine detector is speed of advance. For time-critical missions, like breaching and route clearance, a detector needs to function effectively at the military forces' operational speeds. The ability of a detector to keep up with the required rate of advance depends on two factors: its scanning speed (the time to search a given area for mines) and its false alarm rate, the number of times a detector indicates the presence of a mine where one does not exist. False alarms reduce the rate of advance because combat forces must stop to confirm whether an alarm is actually a mine. Another key operational parameter is standoff, which is the distance a mine detector (and its operator) can be from a mine and still be able to detect it.
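The relationship between scanning speed, false alarm rate, and rate of advance can be made concrete with a small sketch. The numbers are illustrative assumptions, not figures from the report.

```python
# Illustrative arithmetic (assumed numbers, not from the report): how false
# alarms erode a detector's effective rate of advance.

def effective_advance_rate(scan_speed_kmh, false_alarms_per_km,
                           minutes_per_check):
    """Average speed once every false alarm forces a stop to confirm."""
    hours_per_km = (1.0 / scan_speed_kmh
                    + false_alarms_per_km * minutes_per_check / 60.0)
    return 1.0 / hours_per_km

# A detector that scans at 10 km/h but raises 20 false alarms per km,
# each costing 3 minutes to investigate:
print(f"{effective_advance_rate(10, 20, 3):.2f} km/h")  # 0.91 km/h
```

Even a modest false alarm rate dominates the result: the force spends far more time confirming alarms than scanning, which is why false alarm rate, rather than raw scanning speed, often governs whether a mission's required rate of advance can be met.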
The minimum standoff required is the lethal radius of a mine, which is about 35 meters (for an antitank-sized mine). This distance requirement increases as speed increases to allow for reaction time once an alarm is sounded. In cases of minefield reconnaissance performed by airborne detectors, the standoff required is the minimum altitude necessary to provide safety for the aircraft from enemy ground fire. One final operational parameter is the ability of a detector to accurately locate the position of a buried mine. This is important for reducing the time necessary to remove or otherwise neutralize the mine and the safety risk associated with manually probing the ground to find the exact mine position. The environmental parameters we identified are presented in table 5. These are natural and man-made conditions in and around the battlefield that affect mine detection and are grouped into atmospheric, surface, subsurface, and other environmental conditions. While the target and operational parameters can vary among the five mission types, the environmental parameters are not mission-specific. Rather, environmental parameters are site-specific. In this appendix, we briefly describe the land mine detection technologies and projects that we identified through our methodology. We grouped the individual projects and lines of effort on the basis of their underlying technological approach. Our grouping resulted in 19 distinct approaches. These technologies vary in their maturity. Some, such as metal detectors and radar, have been explored by many researchers for many years. Much less is known about others, such as electromagnetic radiography and microwave enhanced infrared. Still others, such as x-ray fluorescence, have been used in other applications but have received relatively little attention thus far in this one. The technologies use different principles.
Fifteen of the 19 technologies are based on receiving electromagnetic (EM) energy from the target. Eleven of the 15 EM technologies are based on sending energy (in one case, energy in the form of neutrons) into the ground. The remaining four EM technologies are "passive electromagnetic"; they are based on receiving energy that is emitted by the land mine. These four technologies are similar in principle; their relative strengths and limitations with respect to addressing countermine missions arise from the different types of energy that they receive. The final 4 of the 19 technologies are primarily not electromagnetic. Two capture and analyze the explosive that the mine releases into the ground or air, one is based on acoustic or seismic energy reflected off of the target, and one is based on sending neutrons toward the target. Eleven technologies use electromagnetic energy and operate under three different approaches (see fig. 2): four operate by sending EM energy into the ground and reflecting it off the mine; five operate by sending EM energy into the ground to create an effect on the explosive substance (electromagnetic radiography, gamma ray imaging, microwave enhanced infrared, quadrupole resonance, and x-ray fluorescence); and two operate by detecting differences in the low-frequency electromagnetic field around the mine (conductivity/resistivity and metal detectors). Of the five that react with the explosive, four act on the explosive within the mine casing, while one relies on detecting released explosive molecules. Four of the 11 active EM technologies (radar, terahertz imaging, LIDAR, and x-ray backscatter) are based on projecting energy into the ground and reflecting off the land mine. The presence of a mine or other buried object is detected from differences in the electromagnetic properties of the target and those of the surrounding ground.
The relative strengths and limitations of these technologies vary with their wavelengths. Managing the trade-off between depth of penetration and resolution is one of the central research concerns in this area. The choice of frequency is important; lower frequencies allow better ground penetration but will suffer from poor spatial resolution. Radar’s relatively long wavelength (it operates in the microwave part of the electromagnetic spectrum) allows it to penetrate the ground deeply enough to reach buried mines. This ability, along with the fact that it can detect plastic mines, has made radar the focus of much research and development in the United States and in other nations. For example, DOD has incorporated radar into its hand-held system, Handheld Stand-off Mine Detection System (HSTAMIDS). However, whether a system based on radar will meet countermine mission needs remains in dispute. The poor spatial resolution of radar, which makes it difficult at best to distinguish between buried mines and other objects of a similar size and shape, is the largest obstacle. Another issue is its inability to penetrate soils that are saturated with water. The other technologies have greater resolution but have a corresponding loss of depth penetration. Because LIDAR has a shorter wavelength than radar, it has a limited ability to detect buried mines. X-ray backscatter can provide detailed images of shallowly buried mines due to the extremely short wavelength of the x-rays. It operates by detecting the difference in the atomic number between the ground and the mine target. However, the applicability of this technology is limited due to the limited penetration of the x-rays into the ground. In theory, terahertz imaging should have a similar limitation. However, a researcher studying the feasibility of creating images of mines in the terahertz part of the spectrum told us that his system might be able to penetrate more deeply by increasing the power of the energy. 
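The penetration-versus-resolution trade-off can be illustrated with a rough numerical sketch. The attenuation model and the loss figure below are assumptions for illustration only; real soil losses vary widely with moisture and soil type.

```python
# Toy model of the depth-versus-resolution trade-off discussed above.
# The loss-per-meter figure and sqrt(frequency) scaling are assumptions
# for illustration, not measured soil properties.
import math

C = 3.0e8  # speed of light, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength; a proxy for achievable spatial resolution."""
    return C / freq_hz

def penetration_depth_m(freq_hz, loss_db_per_m_at_1ghz=30.0):
    """Depth at which the signal drops by 40 dB, assuming loss that
    scales with the square root of frequency (a common rough model)."""
    loss = loss_db_per_m_at_1ghz * math.sqrt(freq_hz / 1e9)
    return 40.0 / loss

for f in (0.5e9, 1e9, 5e9):
    print(f"{f / 1e9:3.1f} GHz: wavelength {wavelength_m(f) * 100:5.1f} cm, "
          f"40 dB depth {penetration_depth_m(f):4.2f} m")
```

Under this rough model, lower frequencies buy depth at the cost of a longer wavelength, and hence coarser spatial resolution, which is the dilemma the text describes for radar against clutter.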
Another general approach involves projecting energy into the ground that reacts with the molecules of the explosive, which send a signal that is received by the detector. Because it reacts with the explosive rather than the container, this approach more specifically targets land mines and is less prone to the clutter problem that hinders other active electromagnetic approaches. However, technologies that adopt this approach tend to be more complex and expensive. We identified five distinct technologies that utilize this general approach. One of them, quadrupole resonance, is a relatively mature technology in land mine applications, and systems have been built around it. Less is known about the other four technologies, including how to apply them to land mine detection and what their capabilities are for addressing countermine missions. These four are electromagnetic radiography, microwave enhanced infrared, x-ray fluorescence, and gamma ray imaging. Therefore, our assessments are less complete for these than for the more well-studied approaches. Quadrupole resonance has been explored for identifying explosives for several years. Much of the basic research was conducted at the Naval Research Laboratory. Quadrupole resonance detectors are also being developed to screen for explosives at airports. In quadrupole resonance, a pulse of long wavelength energy causes the nitrogen nuclei in the explosives to emit a pulse of energy that is characteristic of the molecule. For example, the nitrogen atoms in TNT emit a unique pulse that can be picked up by the detector. One limitation of quadrupole resonance with respect to countermine missions is that the detector head must be close to the target. The speed at which quadrupole resonance can operate is also in question; current systems are fairly slow.
In addition, research questions currently exist in several areas, including how to overcome interference from other sources of energy and how to configure a quadrupole resonance detector to detect TNT. Despite these limitations and questions, DOD is developing systems that use this technology. The Marine Corps is developing a hand-held device that uses quadrupole resonance, and the Army is developing a land mine detection vehicle that would use an array of quadrupole resonance detectors across the front to confirm targets presented by sensors that use either radar or metal detectors. In conversations with individual systems developers, we identified four other examples of this land mine detection approach. The first two technologies are based on scanning the ground with long wavelength microwaves. This energy excites the explosive molecules, which emit a signal that is detected. The other two technologies using this approach send shorter wavelength energy toward the target. Electromagnetic radiography operates by scanning the ground with long wavelength microwaves. According to one developer, when the target is struck by this energy, it radiates back in a particular way, with molecules excited at the atomic level. The molecules respond with spin effects that produce "a spectrographic signature of the target substance." As noted previously, very little is known at present about the limits of this technology in terms of the operational requirements and environmental conditions for countermine applications. Microwave enhanced infrared detection operates by sending long wavelength microwaves into the ground and then detecting a "unique thermal signature and infrared spectra of chemical explosives." One limitation of this approach is that it cannot be used to detect metallic mines because the microwave energy cannot penetrate metal. In addition, the speed at which it can operate and the standoff distance are both highly uncertain.
The third technology illuminates the ground with x-rays, causing a series of changes in the electron configuration of the target atoms that results in the release of an x-ray photon (x-ray fluorescence). Unlike the other technologies in this category, x-ray fluorescence detects molecules of explosive that are emitted from the mine. The amount of fluorescence is dependent on the target molecule. A critical issue in dispute at present is whether x-ray fluorescence can work at the distances required to address countermine missions: the short wavelength of the x-rays brings a correspondingly high degree of scattering. Several experts we spoke to expressed reservations about standoff for this technology, although the system developer claims to have surmounted this limitation. The fourth technology is gamma ray imaging. The basis of this technique is an electron accelerator that produces gamma rays that "interact with the chemical elements in explosives to generate a unique signature." Because of the scattering of the short wavelength energy, x-ray and gamma ray detectors operating on these principles must be in close proximity to the target. According to a developer, the detector must be within one foot of the target. Another obstacle is that the detector would require an extremely large source of energy to create the gamma rays. We identified two technologies that are based on detecting an electromagnetic field. The first is electromagnetic induction. As discussed in the background section, metal detectors that utilize this approach are the principal means of detecting land mines at present. Metal detectors generate a magnetic field that reacts with electric and/or magnetic properties of the target. This reaction generates a second magnetic field, which is received by the detector. The restriction to metallic objects is a limitation given the increasing development of mines with extremely small amounts of metal.
Increasing the sensitivity of a metal detector enough to detect the extremely small amounts of metal in these mines leads it to detect other objects in the ground as well. Metal detectors are also limited by the need to be relatively close to the mine target in order to operate effectively. The second technology is conductivity/resistivity, which involves applying current to the ground through a set of electrodes and measuring the voltage developed between other electrodes. The voltage measured at the electrodes is affected by objects in the ground, including land mines. The conductivity technique was originally developed to locate minerals, oil deposits, and groundwater supplies. The need to place the electrodes in or on the ground is a concern for land mine detection applications of this technology. We identified four proposed technologies that do not actively illuminate the target but are based on detecting energy emitted or reflected by the mine. Three detect the energy naturally released by objects. They are essentially cameras that operate much like video cameras, although they view not red, green, and blue frequencies but other parts of the spectrum. Land mine detectors that use passive sensing principles spot either (1) a contrast between the energy emitted or reflected from the mine and that of the background or (2) the contrast between the (disturbed) soil immediately surrounding a buried mine and the top layer of soil. They can be designed to pick up this energy difference in different wavelength bands. Passive detectors have been designed or proposed to operate in different parts of the EM spectrum. We identified technologies that operate using infrared, millimeter wave, and microwave principles. Infrared, millimeter wave, and microwave techniques have different strengths and limitations.
The trade-offs between scattering and resolution that exist with the active backscatter approaches (radar and LIDAR) also exist for passive EM technologies. For example, the longer wavelengths of microwave and millimeter waves allow them to penetrate through clouds, smoke, dust, dry leaves, and a thin layer of dry soil but provide more limited resolution of targets. These four technologies are capable of greater standoff than others. Several nations are developing systems that use IR detection to detect minefields (tactical reconnaissance). Systems are also being developed to gather information in several infrared wavelength bands at the same time (“multi-spectral infrared”). This approach increases the amount of information available to distinguish mine targets from the background. The Marine Corps is conducting research in this area. One of the constraints with infrared detection systems is that the mines’ signature against the background will tend to be reduced at certain times during the day. To overcome this limitation, researchers funded by DOD’s Multidisciplinary University Research Initiative (MURI) recently investigated amplifying the infrared signal by heating the ground with microwave energy. Their early findings suggest that microwave heating enhances the infrared signature of objects buried under smooth surfaces. However, much work remains. Given continued funding, they plan to add increasing complexity to their experimentation by testing with rough surfaces, random shapes, and different mine and soil characteristics. They will need to conduct additional research to determine whether the rate of heating is consistent with the speed required to meet most countermine missions. The fourth passive electromagnetic approach is based on detecting the energy produced by the circuitry of advanced mines that contain sophisticated fuses. DOD has recently funded work on this approach as part of the MURI initiative. 
Apart from the limited applicability of this technology, questions remain concerning how feasible it is and how easily a detector operating on these principles might be fooled with a decoy. We identified four technologies that are not based on electromagnetic principles: acoustic/seismic, neutron activation, trace vapor, and biosensors. Sensors that utilize an acoustic/seismic approach operate by creating an acoustic or seismic wave in the ground that reflects off the mine. The energy can be delivered in a number of different ways, such as by a loudspeaker, a seismic source coupled with the ground, or a laser striking the ground over the mine. In addition, there are different ways of receiving the signal from the target (electromagnetically, through a doppler radar or doppler laser device, or acoustically, through a microphone). Numerous questions remain about whether an acoustic/seismic approach can meet the operational needs for countermine missions and about the environmental factors that would influence its employment. Although we identified no certain, absolute limitations to an acoustic/seismic approach meeting countermine missions, we did identify significant concerns. Acoustic waves are capable of imaging buried land mines. However, clutter is a major concern with acoustic approaches: interference from rocks, vegetation, and other naturally occurring objects in the environment alters the waves as they travel through the ground. Additional work needs to be conducted to assess the limits of an acoustic/seismic approach for detecting land mines. An acoustic system is one of the technologies that the Army is currently exploring for the Ground Stand-off Mine Detection System (GSTAMIDS). Neutron activation analysis techniques operate on the principle that mine explosives have a much higher concentration of certain elements, like nitrogen and hydrogen, than naturally occurring objects. There are several neutron-based techniques for detecting these explosive properties in bulk form.
All such systems are composed of at least a neutron source (continuous, or pulsed and emitting in bursts) to produce the neutrons that are directed into the ground, and a detector to characterize the outgoing radiation, usually gamma rays, resulting from the interaction of the neutrons with the soil and the substances it contains (e.g., the explosive). Neutron activation analysis cannot be used as a standoff detector; our review indicated that it must operate directly over the mine target. The limited speed of this technology is another restriction for most missions. In addition, unanswered questions about this technology concern the depth of penetration and whether it can be used to detect smaller anti-personnel mines. Because of these limitations and questions, neutron activation analysis is currently envisioned as having a role as a confirmation detector alongside faster sensors on systems that are remotely piloted. For example, as described above, Canada's military has developed a vehicle that incorporates thermal neutron activation as a confirmation sensor. The vehicle would need to stop only when one of the scanning sensors indicated a possible mine target. The other two technologies are trace vapor and biosensors. Trace vapor detectors sense molecules of the explosive that emanate from the buried mine and then analyze them. There are several different approaches for capturing and analyzing these molecules. In 1997, DARPA initiated a research program aimed at detecting land mines via their chemical signatures, referred to as the "electronic dog's nose" program. The program was established because DARPA believed that the technologies DOD was developing (metal detectors, radar, and infrared) were limited in that they were not seeking features unique to land mines and were susceptible to high false alarm rates from natural and man-made clutter.
Through this program, DARPA hoped to change the overall philosophy of mine detection in DOD by detecting the explosive, a unique feature of land mines. This work has been transitioned over to the Army. However, the role of trace vapor detectors in most countermine missions is likely to remain limited due to the limited standoff that can be achieved. The central feature of the biosensor technology approach is a living animal. Current examples of biosensors are dogs, bees, and microbes that detect explosives. Many research questions remain with these approaches. Andrews, Anne, et al. Research on Ground-Penetrating Radar for Detection of Mines and Unexploded Ordnance: Current Status and Research Strategy. Institute for Defense Analyses, 1999. Bruschini, Claudio, and Bertrand Gros. A Survey of Current Sensor Technology Research for the Detection of Landmines. LAMI-DeTeC. Lausanne, Switzerland, 1997. Bruschini, Claudio, and Bertrand Gros. A Survey of Research on Sensor Technology for Landmine Detection. The Journal of Humanitarian Demining, Issue 2.1 (Feb. 1998). Bruschini, Claudio, Karin De Bruyn, Hichem Sahli, and Jan Cornelis. Study on the State of the Art in the EU Related to Humanitarian Demining Technology, Products and Practice. École Polytechnique Fédérale de Lausanne and Vrije Universiteit Brussel. Brussels, Belgium, 1999. Carruthers, Al. Scoping Study for Humanitarian Demining Technologies. Medicine Hat, Canada: Canadian Centre for Mine Action Technologies, 1999. Craib, J.A. Survey of Mine Clearance Technology. Conducted for the United Nations University and the United Nations Department of Humanitarian Affairs, 1994. Evaluation of Unexploded Ordnance Detection and Interrogation Technologies. Prepared for Panama Canal Treaty Implementation Plan Agency. U.S. Army Environmental Center and Naval Explosive Ordnance Disposal Technology Division, 1997. Garwin, Richard L., and Jo L. Husbands. Progress in Humanitarian Demining: Technical and Policy Challenges. 
Prepared for the Xth Annual Amaldi Conference. Paris, France, 1997. Groot, J.S., and Y.H.L. Janssen. Remote Land Mine(Field) Detection, An Overview of Techniques. TNO Defence Research. The Hague, The Netherlands, 1994. Gros, Bertrand, and Claudio Bruschini. Sensor Technologies for the Detection of Antipersonnel Mines, A Survey of Current Research and System Developments. EPFL-LAMI DeTeC. Lausanne, Switzerland, 1996. Havlík, Stefan, and Peter Licko. Humanitarian Demining: The Challenge for Robotic Research. The Journal of Humanitarian Demining, Issue 2.2 (May 1998). Healey, A.J., and W.T. Webber. Sensors for the Detection of Land-based Munitions. Naval Postgraduate School. Monterey, CA, 1995. Heberlein, David C. Progress in Metal-Detection Techniques for Detecting and Identifying Landmines and Unexploded Ordnance. Institute for Defense Analyses, 2000. Horowitz, Paul, et al. New Technological Approaches to Humanitarian Demining. The MITRE Corporation, 1996. Hussein, Esam M.A., and Edward J. Waller. Landmine Detection: The Problem and the Challenge. Laboratory for Threat Material Detection, Department of Mechanical Engineering, University of New Brunswick. Fredericton, NB, Canada, 1999. Janzon, Bo. International Workshop of Technical Experts on Ordnance Recovery and Disposal in the Framework of International Demining Operations (report). National Defence Research Establishment, Stockholm, Sweden, 1994. Johnson, B., et al. A Research and Development Strategy for Unexploded Ordnance Sensing. Massachusetts Institute of Technology, 1996. Kerner, David, et al. Anti-Personnel Landmine (APL) Detection Technology Survey and Assessment. Prepared for the Defense Threat Reduction Agency. DynMeridian. Alexandria, VA, 1999. McFee, John, et al. CRAD Countermine R&D Study – Final Report. Defense Research Establishment Suffield, 1994. Mächler, Ph. Detection Technologies for Anti-Personnel Mines. LAMI-DeTeC. Lausanne, Switzerland, 1995. 
Scroggins, Debra M. Technology Assessment for the Detection of Buried Metallic and Non-metallic Cased Ordnance. Naval Explosive Ordnance Disposal Technology Center, Indian Head, MD, 1993. Sensor Technology Assessment for Ordnance and Explosive Waste Detection and Location. Prepared for U.S. Army Corps of Engineers and Army Yuma Proving Ground. Jet Propulsion Laboratory, California Institute of Technology. Pasadena, CA, 1995. Tsipis, Kosta. Report on the Landmine Brainstorming Workshop of August 25-30, 1996. Program in Science and Technology for International Security, Massachusetts Institute of Technology. Cambridge, MA, 1996.
Recent U.S. military operations have shown that land mines continue to pose a significant threat to U.S. forces. U.S. land mine detection capabilities are limited and largely unchanged since the Second World War. Improving the Department of Defense's (DOD) land mine detection capability is a technological challenge. This report reviews DOD's strategy for identifying the most promising land mine detection technologies. GAO found that DOD's ability to substantially improve its land mine detection capabilities may be limited because DOD lacks an effective strategy for identifying and evaluating the most promising technologies. Although DOD maintains an extensive program of outreach to external researchers and other nations' military research organizations, it does not use an effective methodology to evaluate all technological options to guide its investment decisions. DOD is investing in several technologies to overcome the mine detection problem, but it is not clear that DOD has chosen the most promising technologies. Because DOD has not systematically assessed potential land mine detection technologies against mission needs, GAO did its own assessment. GAO found that the technologies DOD is exploring are limited in their ability to meet mission needs or are greatly uncertain in their potential. GAO identified other technologies that might address DOD's needs, but they are in immature states of development and it is unclear whether they are more promising than the approaches that DOD is exploring.
In the 21st century, older Americans are expected to make up a larger share of the U.S. population, live longer, and spend more years in retirement than previous generations. The share of the U.S. population age 65 and older is projected to increase from 12.4 percent in 2000 to 19.6 percent in 2030 and continue to grow through 2050. In part, this is due to increases in life expectancy. The average number of years that men who reach age 65 are expected to live is projected to increase from just over 13 in 1970 to 17 by 2020. Women have experienced a similar rise—from 17 years in 1970 to a projected 20 years by 2020. These increases in life expectancy have not, however, resulted in an increase in the average number of years people spend in the workforce. While life expectancy has increased, labor force participation rates of older Americans only began to increase in recent years. As a result, individuals are generally spending more years in retirement. In addition to these factors, fertility rates at about the replacement level are contributing to the elderly population’s increasing share in the total population and a slowing in the growth of the labor force. Also contributing to the slowing in the growth of the labor force is the leveling off of women’s labor force participation rate. While women’s share of the labor force increased dramatically between 1950 and 2000—from 30 percent to 47 percent—their share of the labor force is projected to remain at around 48 percent over the next 50 years. While hard to predict, the level of net immigration can also affect growth in the labor supply. Taking each of these factors into account, Social Security’s trustees project that the annual growth rate in the labor force, about 1.2 percent in recent years, will fall to 0.3 percent by 2022. 
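Growth rates like these compound over time, so even a seemingly small difference accumulates. The following sketch compares an index of labor force size under the recent 1.2 percent annual rate and the projected 0.3 percent rate; the 10-year horizon and the index base of 100 are illustrative choices, not figures from the trustees' projections.

```python
def projected_size(initial, annual_growth, years):
    """Compound an annual growth rate over a number of years."""
    return initial * (1 + annual_growth) ** years

# Illustrative only: a labor force indexed to 100 today, grown for a
# decade at 1.2 percent versus 0.3 percent per year.
print(round(projected_size(100, 0.012, 10), 1))  # about 112.7
print(round(projected_size(100, 0.003, 10), 1))  # about 103.0
```

Over a single decade, the slower rate yields roughly a quarter of the growth of the faster one, which is the arithmetic behind the fiscal pressures described in this section.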
The aging of the baby boom generation, increased life expectancy, and fertility rates at about the replacement level are expected to significantly increase the elderly dependency ratio—the estimated number of people aged 65 and over in relation to the number of people aged 15 to 64 (fig. 1). In 1950, the ratio was 12.5 percent. It increased to 20 percent in 2000 and is projected to further increase to 33 percent by 2050. As a result, there will be relatively fewer younger workers to support a growing number of Social Security and Medicare beneficiaries. The age at which workers choose to retire has implications for these trends. If workers delay retirement, the ratio of workers to the elderly will decrease more slowly. The aging of the population also has potential implications for the nation’s economy. As labor force growth continues to slow as projected, there will be relatively fewer workers available to produce goods and services. In addition, the impending retirement of the baby boom generation may cause the net loss of many experienced workers and possibly create skill gaps in certain occupations. Without a major increase in productivity or higher than projected immigration, low labor force growth will lead to slower growth in the economy compared with growth over the last several decades and potentially slower growth of federal revenues. Social Security’s trustees project that real (inflation-adjusted) GDP growth will subside from 2.6 percent in 2007 to 2.0 percent in 2040, in part due to slower growth in the labor force. The prospect of slower economic growth is likely to accentuate the pressures on the federal budget from growing benefit claims and the shrinking proportion of workers to beneficiaries. Later retirement and increases in labor force participation by older workers could help diminish those pressures. Retirement has traditionally been thought of as a complete one-time withdrawal from the labor force. 
However, such transitions are no longer as common. A recent study found that only half of first-time retirees fully retired from the workforce and remained fully retired after 3 to 5 years. The other half chose to partially retire by reducing their work hours or taking bridge jobs—transitional jobs between career work and complete retirement—or they re-entered the labor force after initially retiring. According to our analysis of the HRS, about one in five workers who fully retire later re-enter the workforce on at least a part-time basis sometime over the next 10 years. There are various reasons behind these trends. In some cases, older workers need the income or benefits a job provides; in other cases, they wish to start a new career in a different field. With no universal definition of retirement, researchers use different definitions depending on their purpose. Since our focus is on labor force participation, we are using definitions of retirement that combine whether or not people say they are retired with measures of their labor force participation. Workers have generally been retiring at younger ages over the last several decades, but over more recent periods, retirement ages appear to have stabilized. This finding holds for a variety of definitions of retirement. Census Bureau data indicate that the average age at which workers left the labor force dropped from about 71 and 70 years for men and women respectively in 1960, to about age 65 for both men and women in 1990 (fig. 2). Since that time, retirement trends appear to have stabilized for men, with their retirement occurring on average between 64 and 65. The retirement age for women continued to decline. Similar trends appear in the age at which workers start drawing Social Security benefits. From 1960 to 1990, the average age of workers starting to draw Social Security benefits declined 3 years for men (from 66.8 to 63.7) and about 2 years for women (from 65.2 to 63.5). 
Since 1990, these averages have changed little. The averages were 63.7 years for men and 63.8 for women in 2005. In addition, in the 2007 Retirement Confidence Survey, workers responded on average that they planned to retire at age 65, up from age 62 in 1996. We, along with others, have suggested that increasing labor force participation for older workers could lessen problems for the economy and the Social Security and Medicare trust funds, and boost income security for retirees as well. Workers retire for a variety of reasons, some of which are under their control while others are not. Some personal reasons for retiring include workers’ job situation, their financial situation, and social norms regarding retirement. In addition, there are often factors outside of a person’s control that may lead to retirement. In focus groups that we conducted in 2005 with workers and retirees, we found that health problems and layoffs were common reasons to retire and that few focus group members saw opportunities to gradually or partially retire. Workers also cited what they perceived as their own limited skills and employers’ age discrimination as barriers to continued employment. Consistent with our focus group results, the Employee Benefit Research Institute (EBRI) found that an estimated 37 percent of workers retire sooner than they had expected. Of those, the most often cited reasons were health problems or disability, changes at their company, such as downsizing or closure, or having to care for a spouse or another family member. The role federal policies play in influencing retirement behavior needs to be considered as well. Depending on workers’ circumstances, these policies can provide incentives to retire at certain ages, and send signals or set norms about when it is appropriate to retire. In addition, many employers have structured their own retirement benefits, such as pension eligibility ages, based on federal policies. 
Federal policies present a mix of retirement incentives, some of which encourage individuals to retire well before their Social Security full retirement age and others that promote staying in the workforce. (See fig. 3 below.) The effect of these incentives also varies substantially with personal circumstances. In general, the availability of Social Security benefits at age 62 offers an incentive to retire before full retirement age, though changes in program rules are progressively weakening that incentive. The recent elimination of the Social Security earnings test for those at full retirement age and beyond, which had formerly reduced benefits for those beneficiaries who had earnings above a certain threshold, also may discourage drawing benefits early. The fact that most individuals are eligible for Medicare at age 65 generally deters them from leaving the labor force before then, especially if they are not covered by retiree health insurance. Federal pension tax policies give employers discretion to set pension plan rules that provide incentives for many workers to retire somewhat earlier than the norms established by Social Security, often age 55, or in some cases earlier. However, these incentives to retire early apply to fewer workers, due to the diminished prevalence of DB plans. Several characteristics of the Social Security program—including eligibility ages and the earnings test—provide incentives to retire at different ages. The Social Security full retirement age, which has traditionally been age 65, is gradually rising to 67. However, workers can begin receiving reduced benefits at 62; benefits are progressively larger for each month workers postpone drawing them, up to age 70. 
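The month-by-month adjustment for early claiming follows a statutory schedule: the monthly benefit is reduced by 5/9 of 1 percent for each of the first 36 months claimed before full retirement age, and by 5/12 of 1 percent for each additional month. A minimal sketch of that schedule, plus a break-even comparison with hypothetical dollar amounts (the $1,000 and $700 figures are illustrative, not from this report):

```python
from fractions import Fraction

def early_claim_reduction(months_early):
    """Fraction by which the monthly benefit is reduced when claimed
    before full retirement age: 5/9 of 1% per month for the first 36
    months, 5/12 of 1% for each month beyond 36."""
    first_36 = min(months_early, 36)
    beyond = max(months_early - 36, 0)
    return first_36 * Fraction(5, 900) + beyond * Fraction(5, 1200)

# Claiming at 62 with a full retirement age of 65 (36 months early)
# versus a full retirement age of 67 (60 months early):
print(early_claim_reduction(36))  # 1/5  -> a 20 percent reduction
print(early_claim_reduction(60))  # 3/10 -> a 30 percent reduction

# Hypothetical break-even: a $1,000 monthly benefit starting at a full
# retirement age of 67, versus the 30-percent-reduced $700 benefit
# starting 60 months sooner at age 62.
full, reduced, head_start = 1000, 700, 60
break_even = full * head_start // (full - reduced)  # months after age 62
print(break_even)  # 200 months, i.e., around age 78 and 8 months
```

A worker who expects to live well past the break-even age receives more in total by waiting; one who does not comes out ahead by claiming early, which is the incentive discussed in the surrounding text.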
In general, benefits are “actuarially neutral” to the Social Security program; that is, the reduction for starting benefits before full retirement age and the credit for starting after full retirement age are such that the total value of benefits received over one’s lifetime is approximately equivalent for the average individual. However, Social Security creates an incentive to start drawing early retirement benefits for those who are in poor health or otherwise expect to have a less than average lifespan. If a worker lives long enough—past a “break-even” age—he or she will receive more in lifelong retired worker benefits by starting benefits at a later, rather than an earlier date. (See figure 4 below for examples of the kinds of considerations workers face in making a decision about when to begin drawing Social Security benefits.) The increase in full retirement age and the larger penalty for early retirement reduce the incentive to start drawing Social Security benefits and retiring early. Because the early retirement age has remained fixed at 62 while full retirement age is gradually rising to 67, workers taking early retirement benefits are progressively incurring bigger reductions. For example, workers who reached 62 in 1999 and started drawing benefits that year faced a reduction of 20 percent because their full retirement age was 65. In contrast, workers drawing benefits when they turn 62 in 2022, when their full retirement age will be 67, will face a 30 percent reduction. On the other hand, workers with health problems may now have a greater incentive to apply for Social Security Disability Insurance as these benefits are not based on age. Social Security rules can pose different incentives for married workers because their decision about when to start drawing benefits has important implications for the surviving spouse. 
For example, if a retired worker who is entitled to a larger benefit than his spouse starts drawing early benefits and dies shortly thereafter, his widow may be left for many years with a relatively small survivor benefit since her payment would be limited to what he was receiving. This risk affects female survivors in particular. Widow beneficiaries are one of the largest and most vulnerable groups with a relatively high incidence of poverty. The Social Security earnings test gives some workers a disincentive to earn more than a specified amount. Because of the earnings test, people collecting Social Security benefits before their full retirement age who continue to work are subject to further reduction or withholding in their benefits if they earn above a threshold. For example, in 2007, $1 of benefits is withheld for every $2 of earnings over $12,960. Although early beneficiaries generally recoup the amounts withheld because of the earnings test in the form of higher recalculated benefits after they reach full retirement age, workers typically view the earnings test as a tax on work. As such, it provides an incentive to reduce the number of hours worked or stop working altogether. Since 2000, beneficiaries who reach their full retirement age are exempt from the earnings test. The elimination of the test for these individuals is an incentive to start benefits at full retirement age and continue working. Because Medicare provides health insurance coverage for virtually all individuals 65 and older, it has important implications for the decision about when to retire. The Medicare eligibility age, fixed at 65 since the program’s inception, is a strong incentive not to retire before that age, particularly for people who do not have employer-sponsored health benefits as retired workers. 
These individuals would either have to purchase expensive private coverage if they retired before 65, or remain uninsured until they qualify for Medicare because private health insurance may be difficult to obtain at older ages, especially for those with preexisting medical conditions. Given the steep rise in health care costs and the high health risks older people face, Medicare’s eligibility age encourages them to delay retirement until age 65. Workers with no employer-based health insurance during their working years are arguably less affected by Medicare eligibility rules because their decision to retire does not affect their health coverage. However, to the extent that they are exposed to the same potentially expensive health problems as they get older, Medicare does provide an incentive to postpone retirement until age 65 because retirement often involves a significant drop in income. The incentive posed by Medicare may become more important if the proportion of workers with no retiree health insurance continues to increase. The share of large private employers offering retiree health insurance declined from an estimated 66 percent in 1988 to 35 percent in 2006. Similarly, a 2003 study found that only about one-quarter of private sector employees worked for companies that offered retiree health insurance. Further, the value of the coverage for retirees is eroding because of higher costs, eligibility restrictions, and other benefit changes. A recent study estimated that the percentage of after-tax income spent on health care by the typical older married couple will almost double from 16 percent in 2000 to 35 percent in 2030. On the other hand, Medicare’s availability at 65 can be an incentive to retire before Social Security’s rising full retirement age. Eligibility for Medicare upon reaching age 65 encourages workers to retire then, rather than wait to collect somewhat higher Social Security benefits when they reach their later full retirement age. 
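Before turning to pension law, the earnings test described earlier can be expressed as a one-line rule: $1 of benefits is withheld for every $2 of earnings above the threshold. A minimal sketch using the 2007 threshold of $12,960 cited above (the $20,000 and $10,000 earnings figures are hypothetical):

```python
def earnings_test_withholding(annual_earnings, threshold=12960):
    """Benefits withheld for a beneficiary below full retirement age
    who keeps working: $1 withheld per $2 of earnings above the
    threshold (2007 figure; the threshold changes over time)."""
    return max(annual_earnings - threshold, 0) / 2

print(earnings_test_withholding(20000))  # (20000 - 12960) / 2 = 3520.0
print(earnings_test_withholding(10000))  # below the threshold: 0.0
```

Because withheld amounts are generally recouped through recalculated benefits later, this is effectively a deferral, but as the text notes, workers typically perceive it as a 50 percent tax on earnings above the threshold.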
Federal tax and pension laws, including the Employee Retirement Income Security Act (ERISA), give employers some discretion to set retirement ages and other terms and conditions that support earlier retirement for workers who have employer-sponsored pension plans. For example, IRS rules on tax-qualified pensions put an upper limit on what may be treated as a “normal retirement age” (NRA). For a DB plan, this can be no greater than age 65. In practice, some employers have set their NRA lower. According to the Department of Labor’s 2003 National Compensation Survey, 17 percent of private workers with DB plans had an NRA less than 65 and 6 percent had no age requirement. Many workers with DB plans could retire with reduced benefits at age 55. IRS rules also state that payouts with specified minimum amounts must generally begin by about age 70 ½. Additionally, tax rules generally permit withdrawals without penalty from both DB and DC plans (including IRAs) as early as age 59 ½. Exceptions to this rule allow for even earlier withdrawals. For example, participants can access their funds without penalty beginning at age 55 if they leave their current employer. Workers taking distributions prior to age 59 ½ may do so without the tax penalty if they receive the distribution in the form of a fixed annuity. For those who are no longer working for the plan’s sponsor, tax law generally requires at a minimum that such a series of payments begin at about age 70 ½ at the latest or that they receive a lump sum payment of the entire amount. If a plan participant is working for the plan sponsor at age 70 ½, the required distributions must generally begin in the calendar year in which he or she stops working for the employer maintaining the plan. Workers who have employer-sponsored pension plans from their current employer constitute only about half of full-time private sector workers. Employers have increasingly shifted from traditional DB to DC pension plans. 
Specifically, in 1992, about 29 percent of heads of household had a DB plan; by 2004, the figure had dropped to 20 percent. Over this same period, the proportion of household heads with DC plans increased from about 28 percent to 34 percent. As the prevalence of DC plans has increased relative to DB plans, workers face a different set of incentives. The benefits of a worker covered by a DB plan often reach their high value when the worker attains a specific age, and as a result, may offer little incentive to work past that age. The predetermined retirement benefit generally depends on years of service and wages or salaries, and changes little after its peak value, especially if subsequent salary increases are not substantial. Additional years of work after the NRA, often age 65 for private sector workers in 2003, do not necessarily change lifetime retirement benefits because of the shortened retirement period. (See table 1 for an example showing the effect of another year of work with a hypothetical DB pension.) With DC plans, benefit levels depend on total employer and employee contributions and investment earnings; as such, DC plans do not offer the same age-related retirement incentive as DB plans. Individuals typically allocate the balance of their DC accounts among bonds, stocks, and money market funds, bearing all of the investment risks. In addition, since at retirement most DC plans allow people to receive the accumulated value of the funds in their account as a lump sum, individuals also bear the risk of outliving their resources. The fact that different people will make different contribution and investment decisions is likely to lead to a greater variability in retirement ages. (See table 2 below for an example showing the effect of another year of work on lifetime benefits with a DC pension.) 
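The contrast in work incentives between the two plan types can be illustrated with stylized formulas in the spirit of tables 1 and 2: a DB benefit driven by years of service and final salary, and a DC balance driven by contributions and investment returns. All of the numbers below (the accrual rate, salary, return, balance, and contribution) are hypothetical, not figures from the report's tables.

```python
def db_annual_benefit(years_of_service, final_salary, accrual_rate=0.015):
    """Stylized DB formula: benefit = accrual rate x service x salary.
    The extra accrual from one more year of work can be offset by one
    fewer year of collecting the benefit, blunting the incentive to stay."""
    return accrual_rate * years_of_service * final_salary

def dc_balance_next_year(balance, contribution, annual_return=0.05):
    """Stylized DC accumulation: one more year adds a contribution and a
    year of investment return, with no age-specific benefit peak."""
    return balance * (1 + annual_return) + contribution

# One more year of work under each plan (hypothetical figures):
print(round(db_annual_benefit(30, 60000), 2))        # about 27000.0 per year
print(round(db_annual_benefit(31, 60000), 2))        # about 27900.0 per year
print(round(dc_balance_next_year(300000, 10000), 2)) # about 325000.0
```

Under the DB formula, the extra $900 per year of benefit must be weighed against a shortened payout period, whereas the DC balance simply grows, which is why DC plans do not create the same age-targeted retirement incentive.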
While a DB pension plan generally does not encourage continued work after a certain age, recent changes in DB pension provisions have created an incentive to remain in the workforce somewhat longer. First, recent IRS regulations permit workers to receive money from their DB plans while still working after they have reached the plan’s NRA. These regulations also include rules restricting a plan’s NRA. Those reaching a plan’s NRA or age 62 who want to reduce the number of hours they work for a particular employer may be able to do so and at the same time receive prorated pension benefits. As a result, these workers are able to ease out of their jobs while maintaining their previous level of income by combining paycheck and pension. The new provisions are likely to encourage longer careers by formally allowing more flexible work arrangements and the opportunity to gradually transition into retirement rather than make a sudden shift. By comparison, participants in DC plans can often begin receiving their pension at age 59 ½ while continuing to work (if allowed by their plan administrator), so they often face fewer limitations to phased retirement. About half of those in the HRS study group reported being fully retired by the time they reached age 63, and over the last several years SSA data indicate that nearly half started drawing benefits at age 62 and 1 month, their earliest opportunity to do so. However, there is some evidence that this behavior is starting to change to a limited extent. With the graduated rise in full retirement ages for persons born after 1937, a somewhat smaller proportion of these workers are starting to draw benefits at 62. Others are waiting to draw benefits until the higher full retirement ages that apply to them. Also, since the January 2000 elimination of the earnings test for workers at full retirement age and beyond, labor force participation among such older workers has increased. 
Despite Social Security’s full retirement age of 65 and later, we found that about half of the workers in the HRS study group reported that they fully retired by age 63. Specifically, an estimated 46 percent of workers born in 1931 through 1941 reported fully retiring before their 63rd birthday, based on our analysis of workers interviewed in the HRS sample. As shown in figure 5 below, we found a pattern of retirement marked by a steady increase in retirements among people in their late 50s until ages 62 and 65, when the numbers increase sharply. For workers in the study group the estimated probability of fully retiring prior to age 60 was 28 percent, and the estimated probability prior to age 65 was 60 percent. Social Security Administration data provide similar indications of early retirement patterns. Many workers begin drawing Social Security benefits at age 62. Half the workers born 1935 through 1940 started to draw Social Security benefits before they reached age 62 ½. The most common age was 62 and 1 month—the earliest age at which most workers are eligible. Only about 13 to 17 percent of workers born in these years started to draw benefits at their full retirement age. In a 2005 study, researchers analyzing the characteristics of workers who began drawing Social Security benefits at age 62 found that many had no earnings or comparatively low earnings in the years before they reached age 62. Among workers in this study born in 1937 (who reached 62 in 1999), for example, 20 percent had no earnings at age 55, and this figure rose to 32 percent at age 61 for men who started drawing Social Security benefits at age 62. The comparable figures for those who started drawing benefits between age 63 and 65 ranged from 11 to 12 percent. It is not clear to what extent these low earners or non-earners had chosen to retire before reaching age 62 or whether they were in the labor force, but not able to find work before reaching age 62. 
As discussed above, EBRI found that an estimated 37 percent of workers retire sooner than they had expected to. The most often cited reasons were health problems or disability, changes at their company, such as downsizing or closure, or having to care for a spouse or another family member. Social Security administrative data for those born between 1935 and 1940 provide evidence of some modest changes in retirement behavior among the first group of workers subject to the increases in the Social Security full retirement age. First, a declining proportion of workers are starting to draw benefits as soon as they are eligible. Whereas 46 or 47 percent of those with a full retirement age of 65 and 0 months (born in 1935 through 1937) started benefits at the earliest opportunity, 45 to 42 percent of those who were subject to an increased full retirement age did so, as shown in table 3 below. That many workers continue to start drawing benefits at the earliest opportunity may, in part, reflect workers’ lack of knowledge about their full retirement age. A 2007 survey indicated that an estimated 56 percent of workers aged 55 and over incorrectly identified or did not know the age at which they can receive unreduced Social Security benefits. Second, along with changes in the proportion of workers drawing Social Security retired worker benefits at the earliest opportunity, we see early indications of changes at workers’ full retirement ages. The traditional rise in the proportion of workers beginning to draw benefits at their 65th birthday has largely shifted in concert with the gradual rise in the age required by Social Security for full retired worker benefits. As shown in figure 6 below, some of the workers in successive cohorts who were born after 1937 have waited additional months to start drawing benefits—that is, until their higher full retirement ages. 
Along with these modest delays in claiming Social Security benefits that are associated with the rising full retirement age, we found that some increases in labor force participation coincided with the elimination of the earnings test in January 2000. Our analysis of all workers in the HRS sample found that the proportion of 66 and 67 year olds who were employed (full-time, part-time, or partially retired) increased between 2000 and 2004 by 4 percentage points. Another researcher’s analysis of BLS data found that between 1994 and 2005, the proportion of 65 to 69 year olds in the labor force increased by about 7 percentage points for men and by about 6 percentage points for women. While there may be a variety of reasons for this upward trend, some researchers attribute it to the elimination of the Social Security earnings test. After controlling for other factors associated with retirement, one study concluded that the labor force participation rate among those 65 to 69 increased by 0.8 to 2 percentage points and that earnings for this group increased. The authors hypothesized that this increase resulted from the retention of older workers who were still in the workforce instead of attracting retirees to return to work. This study also found that applications for Social Security benefits among individuals at ages 65 or above increased and that earnings for this group increased as well. A second study also concluded that the elimination of the earnings test had increased labor force participation among older workers, and that there was some indication that participation rates among younger workers increased in anticipation of this policy change. A third study found that men aged 66 to 69 had an increase in annual earnings of $1,326 following the earnings test elimination. This study did not find that labor force participation increased overall, but rather that the hours per week worked by men increased. 
A final study found that the effect of the elimination of the earnings test has not been confined to those above the full retirement age. Rather, this change has resulted in men with earnings above the earnings test threshold reporting an increased probability that they will work after the full retirement age. These studies also indicate that relatively more workers in the upper-middle income range have responded to the elimination of the earnings test by continuing to work. Specifically, two studies found that earnings increased for those in the higher income percentiles, but not for those in lower income groups. See appendix III for more information on these studies. Based on our analysis of retirement behavior using the HRS, we found that employer-provided retiree health insurance and pension plans are strongly associated with when workers retire. We found that workers with access to retiree health insurance were more likely to retire before age 65 than those without it. However, other factors, such as poor health, could override this effect on some workers’ retirement decisions. At the beginning of the study period (1992), workers who lacked retiree health insurance tended to be those with lower incomes and levels of education. Pension plans also influenced the timing of workers’ retirements, though this varied by type of plan. Men with DB plans were more likely to retire earlier, whereas both men and women with DC plans tended to retire later compared to those who did not have these plans. Our analysis of retirement behavior suggests that workers who have access to health insurance in retirement are substantially more likely to retire before becoming eligible for Medicare at age 65 than those without such access. 
Men with retiree health insurance, either through their own or their spouse’s current or former employer, were an estimated 86 percent more likely to retire before they turned 65 than those who were not eligible for benefits in retirement. Women with retiree health insurance were more than twice as likely (139 percent more likely) to retire by this same age. We also found that workers with retiree health insurance were more likely to retire before they became eligible for early Social Security benefits at the age of 62 (109 percent and 76 percent more likely, respectively, for men and women). For a complete discussion of our model results, please see appendix II. The population without access to retiree health insurance tended to be those with lower incomes and less education. See appendix IV for information on the demographic characteristics of people with access to retiree health insurance at the beginning of the study period. These findings are consistent with a larger body of research indicating a strong link between health insurance availability and retirement decisions. For example, a 2002 study found that having retiree health insurance available increased the likelihood of workers retiring before age 65 by an estimated 15 to 35 percent. According to the 2003 Health Confidence Survey, almost 80 percent of current workers over age 40 consider their access to health insurance in planning the age at which they expect to retire. That people without access to retiree health insurance are more likely to wait until they are eligible for Medicare to retire may reflect the scarcity of options for affordable health insurance outside of employer-based plans. Particularly for those in poor health, market-based health insurance coverage may be prohibitively costly. Health problems that limit work lead to earlier retirement for many workers regardless of the availability of retiree health benefits. 
After controlling for other factors, including whether one had access to retiree health insurance, we found that men who said that their health limited their work were over two times more likely to retire by age 62 and that women were 96 percent more likely to do so. Similarly, men and women reporting these limitations were more likely to retire by age 65 (71 percent and 72 percent, respectively). We found that men with DB plans generally retired earlier than those without, while both men and women with DC plans generally retired later, based on our analysis of the HRS data. After controlling for other factors, men with DB plans through either their employer or their spouse’s employer were 28 percent more likely to retire before age 62. Results for women were not statistically significant. On the other hand, we found that men with DC plans were 47 percent less likely to retire by 62 than those without DC plans. We found a similar effect for women as well; those with DC plans were 37 percent less likely to retire before 62 than those without DC plans. Looking at retirements before or after age 65, we did not find a significant effect of having a DB pension plan. However, we continued to find a diminished likelihood of retiring before age 65 among those with DC plans, with men 35 percent less likely to retire by age 65 and women 45 percent less likely to retire than those without DC plans. Our finding that men with DB pensions were more likely to retire before age 62 is consistent with a larger body of research that finds that the structure of DB plans can lead to earlier retirements. One study found that the differences in retirement patterns for those with DB or DC pensions were related to the ability of DB plans to subsidize retirements at ages as early as age 55. Some of these pensions allow long-tenured individuals to collect early benefits that are high enough to provide an incentive to retire early. 
DC plans, on the other hand, are generally neutral with regard to retirement age, since DC account balances depend on contributions made by both employers and employees instead of years of service. Another study found that retirement patterns for those with DB plans and those with DC plans began to differ at around age 55. Differences increased at around age 60, when the value of lifetime benefits began decreasing for most workers with DB plans. This same study found that the absence of retirement incentives tied to age in DC plans led people with those plans to retire on average almost two years later than those with DB plans. The age at which workers retire is important for workers’ retirement income security, the cost of federal programs for the elderly, federal tax revenue, and the strength of the U.S. economy. In deciding when to retire, workers weigh their personal circumstances, the features of employers’ benefit plans, and the mix of incentives and disincentives posed by federal policies. Some of these policies encourage earlier retirement; others encourage later retirement; and different groups of workers face differing incentives. While preliminary evidence indicates that some workers subject to full retirement ages after their 65th birthday are drawing Social Security benefits a little later and working more after age 65 than their predecessors, more time is needed to determine whether these changes foretell any substantial shifts. With so many factors influencing workers’ decisions about when to retire, changes may be gradual and limited. Moreover, changes made to one program have the potential to create an inconsistent set of incentives. For example, as Social Security’s full retirement age rises to age 67, Medicare’s eligibility age remains at 65. Medicare’s eligibility age may become increasingly important in workers’ decisions about when to retire as the availability of employer-sponsored retiree health insurance declines. 
In recent years, federal policy makers have considered various options to modify policies in hopes of promoting later retirements and continued work in later years. However, the results of any given policy change continue to be difficult to project given the many countervailing forces at work and workers’ sometimes limited understanding of the incentives they face. To date, we see indications of some changes in retirement behavior, but do not yet see large changes. At the same time, trends in employer-provided retirement benefits have clear implications for workers’ retirement decisions. Our results suggest that with declining access to retiree health insurance and DB pension plans, those individuals who can, may indeed choose to work longer. This trend suggests the need for federal initiatives to help support workers who make that choice. These may include policies that encourage employers to hire or retain older workers and provide them with flexible options for continued work. In addition, there will be a continued need for federal policies to ensure that workers are informed about the advantages of continued work, as well as to protect and support those who, due to poor health or disability, are unable to work at older ages. Given the increased pressures that demographic shifts will place on entitlement programs, the consistency of the mix of incentives offered by programs such as Social Security and Medicare, as well as by pension law, becomes increasingly questionable. Ultimately, it will be important for policy makers to understand the incentive structures that their policies create, and to coordinate their decisions to allow for individual flexibility while sending signals that consistently encourage those who are able to continue working to do so. 
Accordingly, in light of the range of challenges facing the country in the 21st century, Congress may wish to consider changes to laws, programs, and policies that support retirement security, including retirement ages, in order to provide a set of signals that work in tandem to encourage work at older ages. We provided a draft of this report to the Social Security Administration and the departments of Labor, Health and Human Services, and the Treasury. The Department of Health and Human Services commented on the report, generally agreeing with our findings on the incentives posed by Medicare and retiree health insurance. (See appendix V.) In addition, SSA and the departments of Labor and the Treasury provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Commissioner of Social Security, the Secretary of the Treasury, the Secretary of Labor, and the Secretary of Health and Human Services. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to (1) identify incentives federal policies provide about when to retire; (2) determine recent retirement patterns and whether there is evidence that recent changes in Social Security requirements have resulted in later retirements; and (3) determine if there is evidence that tax-favored private retiree health insurance and pension benefits influence when people retire. 
To answer our first objective, we reviewed the relevant literature and interviewed agency officials to identify which federal policies may influence the age at which workers retire. To answer our second objective, we analyzed data from the Social Security Administration and reviewed studies of the effects of changes in SSA rules. We used the SSA data to look at when workers who were between the ages of 66 and 71 in 2006 chose to start Social Security retired worker benefits. While these data allowed us to examine patterns in men’s and women’s claiming of Social Security benefits, they did not contain any other personal information that would allow us to control for differences between workers. Therefore, we were able to use these data for descriptive purposes only. We analyzed these data and found them to be reliable for our purposes. To answer the third objective, we first analyzed data from the Health and Retirement Study (HRS), a national, longitudinal survey of older Americans produced by the University of Michigan. In particular, we used a data set that the RAND Corporation compiled from the HRS, which is a more user-friendly subset of the HRS. This rich data set contains information on retirement timing and a wide variety of associated factors, such as demographic characteristics, income, assets, health, health care insurance, workforce status, pensions, and retirement expectations. In addition, it tracks respondents over time, allowing us to look at the initial HRS cohort (those born from 1931 to 1941) over a 12-year period from 1992 to 2004. We conducted both bivariate and multivariate analyses to determine what factors were associated with workers’ decisions about when to retire, with special attention to Social Security, health care, and pension availability. See appendix II for a full description of these analyses. We analyzed this data set and found it to be reliable for our purposes. 
We conducted our work between July 2006 and June 2007 in accordance with generally accepted government auditing standards. This appendix is organized into three sections to more fully describe the methods we used to analyze our data, with particular focus on our analysis of the RAND HRS data: Section 1 describes the definitions of retirement used in this analysis. Section 2 describes how we selected our different samples for analysis. Section 3 describes limitations to our analysis. As other researchers have done, we used different definitions of retirement in different parts of our analysis. In particular, we considered workers to be retired based on one of four different definitions, which are explained in table 4 below. We conducted our multivariate analysis based on two of these retirement definitions. Since our focus in this study is on when people decided to fully withdraw from the labor force, our primary analysis was of those who had fully retired. We also ran an analysis on those who had fully or partially retired and received similar results. For our analysis of the claiming of Social Security benefits, we used the definition of Social Security retirement. Finally, for some of our descriptive results on those who said they retired prior to the beginning of the HRS, we used our definition of reported retirement. Just as we used different definitions of retirement, we also chose different samples of workers. Since our goal in analyzing the HRS data was to model retirement behavior, we sought to look at individuals who had had a chance to retire; in other words, they had reached traditional retirement ages. Therefore, we focused our analysis on those in the HRS cohort who were born between 1931 and 1941. These individuals were between the ages of 63 and 73 in 2004, when the most recent data for the HRS were collected. 
Second, we chose individuals who had been in the labor force for at least 10 years so that they could qualify for Social Security retired worker benefits based on their own work history. To calculate certain descriptive statistics, we applied only the above two criteria to create a worker sample. For our regression analyses, we added the stipulation that a respondent was in the workforce in 1992, when the HRS began. Applying these criteria excludes respondents who had retired, who were out of the labor force (such as homemakers), or who were not working due to disability in 1992. This allowed us to model the act of retiring from the labor force. In addition, we were not able to observe the behavior of those who retired outside of the 1992 to 2004 study period. See table 5 below for the criteria we used to construct these samples. Although the HRS cohort is a nationally representative sample of those born from 1931 to 1941, the samples that we constructed may not be. Comparing some of the descriptive statistics of our samples with those from the larger HRS sample reveals differences, as shown in table 6 below. In particular, the sample used to analyze full retirement decisions had a greater proportion of those in better health, those with access to retiree health insurance, and those with higher income than either the HRS cohort or the worker sample. We identified factors associated with the decision about when to retire rather than the causes of that decision. Our analysis of the factors associated with retirement timing is limited to the definition of retirement that we used; others may have different definitions of retirement. Some people working part-time consider themselves retired; others do not. In addition, we cannot generalize our findings beyond the group of workers included in our sample. Our findings do not necessarily apply to younger groups of workers, who may not behave in the same way or face the same constraints. 
As mentioned earlier, our sub-sample of workers from the larger HRS cohort is not entirely representative of the larger U.S. population. In addition, we were unable to observe the retirement behavior of those who retired before or after the study period. Finally, due to limitations in the data and the methods that we used, we did not include in our analysis some variables identified during our research that could potentially affect workers’ retirement timing. For example, the RAND HRS includes information on a respondent’s pension from a current job, but not from prior jobs. Our analysis did not include measures of wealth or income other than earnings. Also, we did not analyze lump sum payments from pensions, which could influence retirement decisions. In addition, the RAND HRS data rely heavily on people’s knowledge of their finances, work history, pension options, and so on. Studies show that workers are sometimes misinformed about the details of their pension benefits or the age at which they are eligible for full Social Security benefits. This appendix describes the results of two separate analyses we did to determine what factors were associated with whether or not men and women retired (1) before or after age 62, and (2) before or after age 65. We conducted both of these analyses separately for men and women due to sizable gender differences in labor force participation and because data published by the Census Bureau suggested that the factors that affect retirement decisions may differ for the two groups. The data we used in our analyses were from the HRS cohort of men and women who were born from 1931 to 1941 and thus were between the ages of 63 and 73 in 2004, the last year for which we had data. We restricted our attention to workers who had been in the labor force for at least 10 years prior to age 62. 
In our analysis of whether workers retired before age 62, we limited the analysis to those who had reached age 62 at some point in the study period. Similarly, in our analysis of whether workers retired before age 65, we limited the analysis to those who had reached age 65 at some point in the study period, and we eliminated workers who, based on their birth year, could not reach age 65 by 2004. In addition, we excluded those individuals who were not part of the labor force in the first wave of data collection (1992); see the comparison of samples in appendix I. The HRS is a longitudinal data set, meaning there are multiple observations per respondent. Respondents were interviewed every 2 years, and each round of data collection is called a wave. In our data set there were seven waves of data (1992 to 2004). For our analysis we limited the data set to one observation per respondent. We selected the observation by taking the first wave in which the respondent was noted as retired in the age-specific analysis (62 or 65). If the respondent did not retire in that time frame, we selected the wave closest to when the participant was age 62 or 65. For each observation we calculated an age of retirement if the respondent noted that he or she retired. For example, if the respondent noted retiring in wave five and reported a retirement date that fell between waves four and five, we used the reported retirement date as the age of retirement and used wave four responses in our analysis. However, if the respondent did not report a retirement date, or if the retirement date did not fall between the previous wave of data collection and the current wave, then we imputed the retirement date using the midpoint between the waves. For example, if a respondent noted retiring in wave six but did not report a retirement date and had data for wave five, we imputed his or her age of retirement as the midpoint between waves five and six. 
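The retirement-date logic just described can be sketched as follows. `impute_retirement_age` is a hypothetical helper written for illustration (the actual processing was done on the RAND HRS files), with ages standing in for interview and retirement dates:

```python
def impute_retirement_age(age_prev_wave, age_curr_wave, reported_age=None):
    """Return the age of retirement for a respondent first observed as
    retired in the current wave. If a reported retirement date falls
    between the prior and current interviews, use it; otherwise impute
    the midpoint between the two waves."""
    if reported_age is not None and age_prev_wave <= reported_age <= age_curr_wave:
        return reported_age
    return (age_prev_wave + age_curr_wave) / 2.0

# A respondent interviewed at ages 61 and 63 who reported retiring at 62.5:
impute_retirement_age(61.0, 63.0, 62.5)   # uses the reported age, 62.5
# The same respondent with no usable reported date:
impute_retirement_age(61.0, 63.0)         # imputed as the midpoint, 62.0
```

A reported date outside the two-wave window is treated the same way as a missing date, consistent with the rule described above.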
For those respondents who did not retire by the specified age used in our analyses (by age 62 or by age 65), we used their age at the end of the interview to select the observation closest to that specified age. These restrictions meant we had samples of 2,840 men and 2,519 women in our analyses of whether retirement occurred by age 62, and 1,978 men and 1,779 women in our analyses of whether retirement occurred by age 65. These sample sizes are unweighted. Our samples differed slightly from the overall HRS sample (see appendix I for a comparison). The data are from a complex sample, and all analyses were performed using statistical weights and adjusting the standard errors for the sample design. Only respondents with statistical weights greater than zero were included in the analyses (based on HRS documentation for statistical weights). The (weighted) percentages reported in some of the tables of this appendix do not exactly match what would be derived from the (unweighted) numbers reported. The factors, or independent variables, we considered in the two sets of analyses are shown in table 7, along with the unweighted numbers and weighted percentages of men and women in each category of those factors. These factors included selected demographic characteristics, including occupation, race/ethnicity, education, marital status, age difference with spouse, income (specifically earnings), work tenure, and birth year. Occupation was divided into three categories: white collar, services, and blue collar. White collar included managerial, professional, sales, clerical, and administrative support occupations. Services included cleaning and building services, protection, food preparation, health services, and personal services. Blue collar included farming, forestry, fishing, mechanics and repair, construction and extraction, precision production, operators, and members of the armed forces. 
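As a minimal sketch of why the weighted percentages in the tables need not match the unweighted counts, consider a weighted percentage computed while dropping zero-weight respondents (the data here are illustrative values, not HRS figures):

```python
def weighted_pct(values, weights):
    """Weighted percentage of respondents with the characteristic
    (value 1), keeping only respondents whose statistical weight is
    greater than zero, as described in the text."""
    kept = [(v, w) for v, w in zip(values, weights) if w > 0]
    total = sum(w for _, w in kept)
    return 100.0 * sum(w for v, w in kept if v == 1) / total

# Two of four respondents retired (unweighted: 50 percent), but with
# unequal weights (and one zero-weight respondent) the estimate differs:
weighted_pct([1, 0, 1, 0], [2.0, 1.0, 1.0, 0.0])   # 75.0
```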
We based these categories on a previous GAO report that utilized the HRS data. The income variable—the respondent’s earned income—was adjusted for inflation using CPI values to make all dollars comparable to 2003 dollars. The factors also included a general measure of health status, an indicator of whether health limited the ability to work, and measures indicating whether the workers in our sample had any health insurance. In addition, we considered whether the spouse or respondent had retiree health insurance, a DB plan, and a DC plan. We lagged many of our variables to the prior wave to capture workers’ preretirement characteristics. For example, if the respondent was noted as retiring in wave four, the income variable from wave three was used in the regression. If the prior wave was missing, that respondent was not included in the analysis. For all of the lagged variables, we used the data collected 2 years prior (the HRS respondents were interviewed every 2 years). Table 7 also shows the numbers and percentages of men and women who had and had not retired by ages 62 and 65. An estimated 25 percent of the men and 28 percent of the women in our sample had retired by age 62, and of those who had reached age 65 by 2004, an estimated 48 percent of the men and 53 percent of the women had retired. The following results are based on our full retirement definition (see appendix I for the definition of full retirement). We used bivariate (one variable) and multivariate (multiple variables) logistic regression models to estimate the likelihood of men and women being retired, first at age 62 and then at age 65. Logistic regression is a widely accepted method of analyzing dichotomous outcomes—variables with two values, such as retired or not—when the interest is in determining the effects of multiple factors that may be related to one another. 
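In a logistic regression of this kind, the odds ratio for a one-unit change in a factor is the exponential of that factor's coefficient. A one-covariate sketch with made-up coefficients (not estimates from this report) illustrates the identity:

```python
import math

def logistic_prob(beta0, beta1, x):
    """P(retired = 1) under a simple one-covariate logistic model
    (illustrative coefficients, not the report's estimates)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

beta0, beta1 = -1.0, 0.7
p0 = logistic_prob(beta0, beta1, 0)   # probability in the reference group
p1 = logistic_prob(beta0, beta1, 1)   # probability in the comparison group

# The odds ratio equals exp(beta1), regardless of the intercept:
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
```

This is why the adjusted odds ratios discussed below can be read directly off the fitted model coefficients.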
While it is somewhat more common to compare the likelihoods of retirement across categories of workers by calculating and comparing the percentages of retired and non-retired workers in each category, the use of these models in our analysis requires us to express differences in the likelihoods of being retired using odds ratios. An “odds ratio” is generally defined as the ratio of the odds of an event occurring in one group compared to the odds of it occurring in another group—the reference group. While odds and odds ratios are somewhat less familiar than percentages and percentage differences, they have certain advantages, and can be readily derived from the underlying percentages or from the numbers from which those percentages were calculated. Moreover, odds ratios are amenable to a reasonably simple interpretation, as we show in table 8. In addition, unadjusted and adjusted odds ratios are the parameters that underlie our logistic regression models. Table 8 shows the numbers and percentages of men who were retired by age 62, first across marital status categories, and then across categories defined by race/ethnicity. Typically we would compare groups by contrasting the percentages of retired or not retired individuals in each group and noting, in this case for example, that the percentage of individuals retired by age 62 is greater among unmarried men (30.1 percent) than married men (23.3 percent), and lower for Hispanic men (17.4 percent) than for Black men (25.4 percent) and white men (25.1 percent). Alternatively, we can calculate the odds on retiring for each group by simply taking the percentage who retired in each group and dividing it by the percentage who had not retired. The odds on retiring were 30.1/69.9 = 0.43 for unmarried men, and 23.3/76.7 = 0.30 for married men. 
Making similar calculations, the odds were virtually identical for white men and Black men (0.34, apart from rounding) but lower for Hispanic men (0.21). We can compare groups directly by taking the ratios of these odds, given in the “Odds Ratios” column in table 8. As can be seen, the odds on retiring were higher for unmarried men than for married men, by a factor of 0.431/0.304 = 1.42. To compare race/ethnicity categories, we choose (arbitrarily) one group (white men in this case) as the reference category, make similar calculations by taking the ratios of the odds for the other two groups to the odds for white men, and find that Black men have odds on retiring that are only slightly different from those of white men (higher by a factor of 1.02), while Hispanic men are less likely than white men to retire, by a factor of 0.63. Table 9 shows the gross effects of each of the factors we considered on the odds on men and women retiring before age 62 (in the first two columns) and before age 65 (in the last two columns). By gross effects, we mean the effects of each factor estimated from bivariate regressions, or regressions that ignore or fail to take account of the effects of other factors that may be related to retirement. Table 10, by contrast, shows the adjusted effects of the factors that we found to be significantly related to retiring at age 62 or age 65 after adjusting for other factors. In developing our multivariate models, we controlled for income in the previous wave, birth year categories, DB and DC pension plans in the previous wave, and retiree health insurance in the previous wave, even if the overall p-value for these variables was not statistically significant. We adjusted for income in the previous wave because it is an important demographic characteristic, and we adjusted for birth year to account for any possible cohort effect in the HRS data. 
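The odds and odds-ratio arithmetic derived from table 8 can be reproduced in a few lines; the percentages are those cited in the text, and the group names are used only as labels:

```python
def odds(pct_retired):
    """Odds on retiring by age 62, from the percentage retired in a group."""
    return pct_retired / (100.0 - pct_retired)

# Percentages of men retired by age 62, from table 8:
unmarried, married = 30.1, 23.3
white, black, hispanic = 25.1, 25.4, 17.4

round(odds(unmarried), 2)                   # 0.43
round(odds(married), 2)                     # 0.30
round(odds(unmarried) / odds(married), 2)   # 1.42, unmarried vs. married
round(odds(black) / odds(white), 2)         # 1.02, Black vs. white (reference)
round(odds(hispanic) / odds(white), 2)      # 0.63, Hispanic vs. white
```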
Similarly, we adjusted for pension type (both DB and DC) and retiree health insurance because we are interested in assessing the impact of these policy variables on a respondent’s decision to retire. In order to assess factors associated with retirement decisions at specific ages in a multivariate setting, we wanted the most parsimonious model, without adding noise from factors that were not statistically significant. To do this, we iteratively fit a model by first adjusting for all of the variables of interest (see table 9). After keeping the five variables mentioned above (income, birth year, DB and DC pension plans, and retiree health insurance), we then added the variables that were statistically significant (p-value <0.05) one at a time. Then, after the reduced model was fit, we re-entered the variables that we had excluded to see if any became statistically significant in the presence of the variables from the reduced model. The results from the multivariate models retain the statistically significant associations (p-value <0.05) and exclude those that reflected insignificant effects, or differences in the sample that could reasonably be assumed to be due to chance or random fluctuations. Some factors that were correlated with other variables and were statistically significant in the bivariate analysis were not statistically significant in the final multivariate model when we adjusted for these other factors. We assessed our final models for goodness of fit using the Hosmer-Lemeshow goodness-of-fit statistic, which tests the hypothesis that the data fit the specified model. All our multivariate models fit the data appropriately (p-values for model fit >0.05). We provide the gross or unadjusted effects in table 9 in order to show what effect each factor has when other factors with which they are associated are ignored, or left uncontrolled. 
We focus our discussion here, however, as well as in the body of the report, on the adjusted odds ratios from the multivariate models, shown in table 10. The results in table 10 reflect only the statistically significant adjusted odds ratios. However, all models include income, birth year, retiree health insurance, and DB and DC pension plans. In addition, some of the factors in the multivariate models have missing data; therefore, the overall sample sizes for the multivariate models differ from the sample sizes noted in table 9. We have assumed that the missing values are missing at random. The HRS is based on a probability sample, and therefore the estimates are subject to sampling error. The HRS sample is only one of a large number of samples that could have been drawn of this population. Since each sample could have provided different estimates, we express our confidence in the precision of the analysis results as 95 percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study populations. All multivariate models were also run using an alternative definition that included partial and full retirement (see appendix I for definitions). Results from these multivariate models were similar to the results presented here. (Data not shown.) Table 10 shows that the odds on men retiring before age 62 were affected by income, job tenure, birth year, health limitations, retiree health insurance, and having DB and DC plans. 
All of the results can be interpreted as adjusted odds ratios, and the net effects of those factors on early retirement for men, after adjusting for the other factors, can be described as follows: Men in the highest income category (who made $50,000 or more in the previous wave) were 1.76 times more likely than men making less than $10,000 to retire by age 62. Men earning between $10,000 and $25,000 and men earning between $25,000 and $50,000 were not significantly different from men earning less than $10,000 in their decisions to retire before 62. Men who had been working for 15 to less than 25 years were not significantly different from men working less than 5 years at their primary occupation (in the previous wave), but men who had worked 5 to less than 15 years were less likely than men working less than 5 years to retire by age 62, by a factor of 0.61. However, men working 25 years or more were more likely than men working less than 5 years to be retired by age 62, by a factor of 1.6. Men born after 1933 were more likely than those born 1931 to 1932 to be retired by age 62, by factors ranging (fairly linearly) from 2.2 (for those born 1933 to 1934) to 5.7 (for those born 1940 to 1941). The odds on retiring before age 62 were more than twice as high for men who reported health limitations as for men without such limitations, and were twice as high for men with retiree health insurance as for those without retiree health insurance. The odds on retiring before age 62 were higher for men with a DB plan than for those without, by a factor of 1.3, and lower for men with DC plans than for those without, by a factor of 0.5. The odds on women retiring before age 62 were affected by marital status, job tenure, birth year, health status, health limitations, retiree health insurance, and having a DC plan. 
Although they were not statistically significant, the final model also adjusted for income, an important demographic characteristic, and DB plan, to account for policy-related variables. The net effects of those factors on early retirement for women, after adjusting for other factors, can be described as follows: Unmarried women were only roughly half as likely as married women to retire before age 62; that is, the odds on retiring before that age were lower for unmarried women than for married women, by a factor of 0.57. Women who had been working for 5 to less than 15 years or 15 to less than 25 years were not significantly different from women working less than 5 years at their primary occupation (in the previous wave). However, women working 25 years or more were more likely than women working less than 5 years to be retired by age 62, by a factor of 1.7. As was the case with men, women born after 1933 were more likely than those born 1931 to 1932 to be retired by age 62, by factors ranging (again fairly linearly) from 2.9 (for women born 1933 to 1934) to 4.3 (for women born 1940 to 1941). The odds on retiring before age 62 were 1.5 times greater for women who said they were in fair or poor health than for women in good or excellent health, 2.0 times greater for women with health limitations than for women without, and nearly twice as high for women with retiree health insurance as for those without retiree health insurance. The odds on retiring before age 62 were lower for women with DC plans than for those without, by a factor of 0.6. The odds on men retiring before age 65 were affected by categories of occupation, education, marital status, income, job tenure, health limitations, retiree health insurance, and having a DC plan. Although they were not statistically significant, we adjusted for birth year to control for any possible cohort effects and for DB plan to account for policy-related variables. 
The net effects of those factors on late retirement for men, after adjusting for other factors, can be described as follows: Men in the blue collar occupation category were 1.5 times more likely to retire before age 65 than men in the white collar category. Men in the services category were not significantly different from men in white collar professions in their decision to retire prior to 65. Men with a college education or more were less likely to retire before age 65 than men with less than a high school education, by a factor of 0.51. There were no statistically significant differences between men with a high school/GED education or some college and men with less than a high school education in their decision to retire before age 65. The odds that unmarried men would retire before age 65 were 1.5 times those of married men. Men with income of $10,000 or more were more likely to retire prior to age 65 than men earning less than $10,000, by factors ranging (fairly linearly) from 2.0 (for those earning between $25,000 and $50,000) to 3.1 (for those earning $50,000 or more). Men who had been working for 5 to less than 15 years and those who had been working 15 to less than 25 years were not significantly different from men working less than 5 years at their primary occupation. But men who had worked 25 years or more were more likely than men working less than 5 years to be retired by age 65, by a factor of 1.4. The odds on retiring before age 65 were almost twice as high (1.7) for men who reported health limitations as for men without such limitations and were almost twice as high for men with retiree health insurance as for those without retiree health insurance. The odds on retiring before age 65 were lower for men with DC plans than for those without, by a factor of 0.7. 
The odds on women retiring before age 65 were affected by marital status, spousal age difference, income, health status, health limitations, retiree health insurance, and having DC plans. Although they were not statistically significant, we adjusted for birth year to control for a possible cohort effect and for DB plan to account for policy-related variables. The net effects of those factors on late retirement for women, after adjusting for other factors, can be described as follows: Unmarried women were roughly half as likely as married women to retire before age 65; that is, the odds on retiring before that age were lower for unmarried women than for married women, by a factor of 0.6. Women who were at least 5 years younger than their spouse were more likely to retire before age 65 than women with no spouse or women who were within 5 years of their spouse’s age, by a factor of 1.5. There were no statistically significant differences in the odds of retiring before age 65 for women who were more than 5 years older than their spouse compared to women with no spouse or women who were within 5 years of their spouse’s age. The odds on retiring before age 65 were higher for women earning $25,000 to $50,000 than for those earning less than $10,000, by a factor of 1.6. Women earning between $10,000 and $25,000 and women earning more than $50,000 were not significantly different from the lowest-earning women in terms of their odds on retiring before age 65. The odds on retiring before age 65 were 1.5 times greater for women who said they were in fair or poor health than for women in good or excellent health, 1.7 times greater for women with health limitations than for women without, and more than twice as high (2.4) for women with retiree health insurance as for those without retiree health insurance. The odds on retiring before age 65 were lower for women with DC plans than for those without, by a factor of 0.6. 
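One caution in reading the results above: an odds ratio multiplies the odds, not the probability, of retiring. A short sketch with a hypothetical 30 percent baseline probability (not a figure from the study) shows the difference:

```python
def apply_odds_ratio(baseline_prob, odds_ratio):
    """Convert a baseline probability to odds, apply the odds ratio,
    and convert the result back to a probability."""
    odds = baseline_prob / (1 - baseline_prob)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Hypothetical 30 percent baseline chance of retiring by age 62.
p = apply_odds_ratio(0.30, 1.76)
```

So an odds ratio of 1.76 raises a 30 percent baseline to about 43 percent, not to the 53 percent that naively multiplying 0.30 by 1.76 would suggest.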
This appendix summarizes the findings in selected studies concerning changes in labor force participation among older workers following the elimination of the Social Security earnings test for beneficiaries at or above their full retirement age, effective January 1, 2000. Jae G. Song and Joyce Manchester, “New Evidence on Earnings and Benefit Claims Following Changes in the Retirement Earnings Test in 2000,” Journal of Public Economics, vol. 91, nos. 3-4 (April 2007). To examine the effect of the removal of the Social Security earnings test, the authors used SSA administrative data known as the Continuous Work History Sample. The authors examined these data for the years 1996 to 2003 and restricted their sample to those who are fully insured under Social Security. One of the limitations of these data is that they lack information on wages, hours worked, health status, education, and family characteristics for workers. The authors ran two sets of regression models on the following dependent variables: claiming Social Security benefits, work participation, and earnings. They used a “difference in difference” approach under which they compared treatment groups who were affected by this policy change (those turning 65 and those aged 65 to 69) with control groups that were not affected (those aged 62 to 64 and 70 to 72). One of the key assumptions the authors make in running these models is that there was no shock other than the earnings test removal in 2000 that affected treatment groups relative to the control groups. After running these models, the authors concluded that (1) earnings increased among higher-income workers, (2) workforce participation increased among those aged 65 to 69, and (3) applications for Social Security benefits among those aged 65 to 69 increased following the test’s removal. 
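The “difference in difference” approach compares the change over time in the treatment group with the change in the control group, so any shock common to both groups cancels out of the estimate. A minimal sketch with hypothetical participation rates (not figures from the study):

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences estimate from four group means:
    the treatment group's change minus the control group's change."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical labor force participation rates before and after 2000
# for workers aged 65 to 69 (treated) and 62 to 64 (control).
effect = diff_in_diff(0.25, 0.30, 0.50, 0.52)   # about 0.03
```

Under the authors' key assumption (no treatment-group-specific shock other than the 2000 removal), this difference isolates the effect of the policy change.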
Leora Friedberg and Anthony Webb, “Persistence in Labor Supply and the Response to the Social Security Earnings Test,” Working Paper 2006-27 (Boston, Mass.: Center for Retirement Research at Boston College, December 2006). The authors used data from the HRS and Current Population Survey (CPS) to examine the impact on labor supply of changes made to the earnings test in 1996 and 2000. They examine everyone in the CPS aged 55 to 74 between the years 1992 and 2005, and they use several different birth cohorts from the HRS in their analysis. The authors ran regressions on several dependent variables—employment, full-time employment, and earnings. In their regression analysis, the authors focus on those aged 62 to 74 to capture any effect that the earnings test might have on younger workers. Two key assumptions the authors make are that people view the earnings test as a tax instead of a deferral of benefits and that people can choose the number of hours they work. The authors conclude that the earnings test changes in both 1996 and 2000 increased labor force participation both for those aged 65 to 69 and for younger workers anticipating its removal. They also found that earnings increased, particularly for higher-income workers, following the 2000 change. Steven J. Haider and David S. Loughran, “The Effect of the Social Security Earnings Test on Male Labor Supply: New Evidence from Survey and Administrative Data” (forthcoming, Journal of Human Resources, 2007). The authors use data from the CPS, New Beneficiary Data System (NBDS), and the Social Security Benefit and Earnings Public Use File (BEPUF). The authors restrict their analyses to men. Using all three data sources, they conducted a “bunching analysis” to determine the extent to which workers adjust their earnings so that they remain just under the earnings test threshold. They found that the age at which workers adjust their earnings has risen as the earnings test threshold has risen. 
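A “bunching analysis” of this kind measures how many workers hold earnings just below the earnings-test threshold. A simplified sketch, with hypothetical earnings and a hypothetical threshold (the actual thresholds varied by year):

```python
def bunching_share(earnings, threshold, band_width):
    """Share of workers whose earnings fall in a narrow band just
    below the threshold, a rough measure of bunching."""
    in_band = [e for e in earnings if threshold - band_width <= e < threshold]
    return len(in_band) / len(earnings)

# Hypothetical earnings and a hypothetical $17,000 threshold.
earnings = [9_000, 14_500, 16_500, 16_800, 16_950, 16_990, 21_000, 30_000]
share = bunching_share(earnings, threshold=17_000, band_width=1_000)
```

An unusually large share in the band, relative to nearby bands, suggests workers are deliberately keeping earnings under the threshold.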
In addition, they found that the extent of bunching is higher in the administrative data from the NBDS and BEPUF than in the survey data. Turning next to labor force responses to the elimination of the earnings test, the authors use CPS and BEPUF data to run a “difference in differences” model. They found that earnings increased among 66- to 69-year-olds, along with hours worked per week. This appendix provides supplementary descriptive statistics concerning the prevalence of retiree health insurance and DB and DC pensions by demographic group among HRS respondents or their spouses included in our full retirement analysis sample. These respondents were born between 1931 and 1941, had 10 years of work experience by the time they reached age 62, and were in the labor force (working part-time or full-time, unemployed, or partially retired) in 1992—the beginning of the study period. Of those in our sample with less than $10,000 in household earnings at the beginning of the study, about 40 percent had employer-based retiree health insurance from either their employer or their spouse’s employer. By contrast, two-thirds of people in our sample whose households earned $50,000 or more per year had access to employer-based retiree health insurance. Similarly, we found that a greater proportion of those with higher levels of education were eligible for employer-based retiree health benefits. (See fig. 7.) Others have found similar relationships; a 2005 study found that declines in the availability of retiree health insurance disproportionately affected those with lower levels of education, relative to those with higher levels. Specifically, the authors found that retirees without a college degree experienced a 34 percent decline between 1997 and 2002 in the likelihood of having retiree health benefits, while those with a college degree experienced a 28 percent decline. On the other hand, those with a post-college degree did not experience any decline in coverage. 
Finally, we also found that as of the beginning of the study period a lower proportion of Hispanics had retiree health insurance compared to their White or African-American counterparts. As with our analysis of retiree health insurance, we found that as of the beginning of the study period, access to particular types of pensions varied by respondents’ income and education level. (See figs. 8 and 9.) We found that at the beginning of the study period 28 percent of those making less than $10,000 had a DB plan, while 65 percent of those making $50,000 or more had one. We also found that 40 percent of those with less than a high school degree had a DB pension, while 62 percent of those with a college degree or more advanced degree had a DB pension. We found similar results for DC plans, with larger proportions of those with higher incomes and more education having DC plans. In addition to the contact named above, Alicia Puente Cackley, Assistant Director; Benjamin P. Pfeiffer; Scott R. Heacock; Mary E. Robison; Joseph Applebaum; Cynthia L. Grant; Lisa B. Mirel; Daniel A. Schwimer; Douglas M. Sloane; Walter K. Vance; and Seyda G. Wentworth made key contributions to this report. Employer-Sponsored Health and Retirement Benefits: Efforts to Control Employer Costs and Implications for Workers. GAO-07-355. Washington, D.C.: March 30, 2007. Baby Boom Generation: Retirement of Baby Boomers Is Unlikely to Precipitate Dramatic Decline in Market Returns, but Broader Risks Threaten Retirement Security. GAO-06-718. Washington, D.C.: July 28, 2006. Older Workers: Labor Can Help Employers and Employees Plan Better for the Future. GAO-06-80. Washington, D.C.: December 5, 2005. Redefining Retirement: Options for Older Americans. GAO-05-620T. Washington, D.C.: April 27, 2005. Highlights of a GAO Forum: The Federal Government’s Role in Improving Financial Literacy. GAO-05-93SP. Washington, D.C.: November 15, 2004. 
Private Pensions: Participants Need Information on Risks They Face in Managing Pension Assets at and during Retirement. GAO-03-810. Washington, D.C.: July 29, 2003. Retiree Health Insurance: Gaps in Coverage and Availability. GAO-02-178T. Washington, D.C.: November 1, 2001. Pension Plans: Characteristics of Persons in the Labor Force Without Pension Coverage. GAO/HEHS-00-131. Washington, D.C.: August 22, 2000. Social Security Reform: Implications of Raising the Retirement Age. GAO/HEHS-99-112. Washington, D.C.: August 27, 1999. Social Security Reform: Raising Retirement Ages Improves Program Solvency but May Cause Hardship for Some. GAO/T-HEHS-98-207. Washington, D.C.: July 15, 1998.
While many factors influence workers' decisions to retire, Social Security, Medicare, and pension laws also play a role, offering incentives to retire earlier and later. Identifying these incentives and how workers respond can help policy makers address the demographic challenges facing the nation. GAO assessed (1) the incentives federal policies provide about when to retire, (2) recent retirement patterns and whether there is evidence that changes in Social Security requirements have resulted in later retirements, and (3) whether tax-favored private retiree health insurance and pension benefits influence when people retire. GAO analyzed retirement age laws and SSA data and conducted statistical analysis of Health and Retirement Study data. Under the Comptroller General's authority, GAO has prepared this report on its own initiative. Federal policies offer incentives to retire both earlier and later than Social Security's full retirement age depending on a worker's circumstances. The availability of reduced Social Security benefits at age 62 provides an incentive to retire well before the program's age requirement for full retirement benefits; however, the gradual increase in this age from 65 to 67 provides an incentive to wait in order to secure full benefits. The elimination of the Social Security earnings test in 2000 for those at or above their full retirement age also provides an incentive to work. Medicare's eligibility age of 65 continues to provide a strong incentive for those without retiree health insurance to wait until then to retire, but it can also be an incentive to retire before the full retirement age. Meanwhile, federal tax policy creates incentives to retire earlier, albeit indirectly, by setting broad parameters for the ages at which retirement funds can be withdrawn from pensions without tax penalties. 
Nearly half of workers report being fully retired before turning age 63 and start drawing Social Security benefits at the earliest opportunity--age 62. Early evidence, however, suggests small changes in this pattern. Traditionally, some workers started benefits when they reached age 65. Recently, workers whose full retirement ages fell after they turned 65 waited until those ages to start benefits. Also, following the elimination of the earnings test, some indications are emerging of increased workforce participation among people at or above full retirement age. GAO's analysis indicates that retiree health insurance and pension plans are strongly associated with when workers retire. After controlling for other influences such as income, GAO found that those with retiree health insurance were substantially more likely to retire before the Medicare eligibility age of 65 than those without. GAO also found that men with defined benefit plans were more likely to retire early (before age 62) than those without, and men and women with defined contribution plans were less likely to do so.
E-government is seen as promising a wide range of benefits based largely on harnessing the power of the Internet to facilitate interconnections and information exchange between citizens and their government. A variety of actions have been taken in recent years to enhance the government’s ability to realize the potential of e-government. The President designated e-government as one of five priorities in his fiscal year 2002 management agenda for making the federal government more focused on citizens and results. According to the agenda, e-government is expected to provide high-quality customer services regardless of whether the citizen contacts the agency by phone, in person, or on the Web; reduce the expense and difficulty of doing business with the government; cut government operating costs; provide citizens with readier access to government services; increase access for persons with disabilities to agency Web sites and e-government applications; and make government more transparent and accountable. As the lead agency for implementing the President’s management agenda, OMB developed a governmentwide strategy for expanding e-government, which it published in February 2002. In its strategy, OMB organized the 25 selected e-government initiatives into five portfolios: “government to citizen,” “government to business,” “government to government,” “internal efficiency and effectiveness,” and “cross-cutting.” Figure 1 provides an overview of this structure. For each initiative, OMB designated a specific agency to be the initiative’s “managing partner,” responsible for leading the initiative, and assigned other federal agencies as “partners” in carrying out the initiative. Partner responsibilities can include making contributions of funding or in-kind resources (e.g., staff time). Most of the initiatives do not have direct appropriations but rely instead on a variety of alternative funding strategies. 
Table 1 summarizes the funding strategies employed by the 25 OMB-sponsored e-gov initiatives in fiscal years 2003 and 2004. A common strategy used in fiscal years 2003 and 2004 was to reach agreement among the participating agencies on monetary contributions to be made by each—10 of the 25 initiatives used this strategy. Initiatives used different approaches in determining how much an agency should contribute. For example, some adopted complex allocation formulas based on agency size and expected use of the initiative’s resources, while others decided to have each agency contribute an equal share. In most cases, the funding strategy and allocation formula adopted for an initiative was determined by its governing board, with input from partner agencies and OMB. To further reinforce the strategy of having partner agencies make financial contributions, OMB generally reflected planned agency allocations in its annual budget guidance to partner agencies, known as passback instructions. The remaining 15 initiatives used other funding approaches. Specifically, for 7 of the 15, the managing partner contributed all necessary funds. Seven others used a combination of managing partner funding and other sources, such as charging fees for services provided, or received support from the E-Government Fund, established by the E-Government Act of 2002. The E-Government Fund was intended to support projects that enable the federal government to expand its ability to conduct activities electronically. The Director of OMB, supported by the E-Government Administrator, is responsible for determining which projects are to receive support from the E-Government Fund. Table 2 summarizes support from the E-Government Fund given to the 25 OMB-sponsored initiatives in fiscal years 2003 and 2004. As shown in table 2, $5.4 million of the available $8 million in the E-Government Fund was spent on 4 of the 25 initiatives, among other uses. 
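The two contribution approaches described earlier in this section, equal shares and allocations weighted by agency size or expected use, reduce to the same proportional formula. A sketch with hypothetical agencies, weights, and budget (none drawn from the actual initiatives):

```python
def allocate(budget, weights):
    """Split an initiative's budget among partner agencies in
    proportion to the given weights; equal weights give equal shares."""
    total = sum(weights.values())
    return {agency: budget * w / total for agency, w in weights.items()}

# Hypothetical $12 million budget split by expected use of the initiative.
shares = allocate(12_000_000, {"Agency A": 3, "Agency B": 2, "Agency C": 1})
# Equal-share alternative: give every partner agency a weight of 1.
```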
In addition to its use for the e-gov initiatives, OMB also used the E-Government Fund to support development of its “lines of business” initiatives (a total of $1.9 million) in fiscal years 2003 and 2004. For fiscal years 2003 and 2004, agencies generally made funding contributions in the amounts originally planned by the managing partners of the 10 initiatives that relied on funding contributions. Table 3 shows the specific numbers of partner agencies that made such contributions as planned. Although most contributions were made in the amounts planned, 6 of the 10 initiatives experienced funding shortfalls from their planned budgets in fiscal year 2003, and 9 experienced shortfalls in fiscal year 2004. Shortfalls in fiscal year 2003 totaled approximately $31 million (22 percent) of a planned budget of $138.7 million. In fiscal year 2004, shortfalls totaled approximately $25.4 million (20 percent) of a planned $124.2 million. The rationale provided by agencies for contributions that were less than planned included (1) substitution of in-kind resources in lieu of funds, (2) lack of budget guidance from OMB reflecting the original planned amounts, (3) inability to obtain congressional approval to reprogram funds from other accounts, and (4) organizational realignments associated with creation of DHS in fiscal year 2003. Figure 2 shows the shortfalls in contributions for each fiscal year and the primary rationale provided by agencies for those shortfalls. As shown in figure 2, in some cases partner agencies negotiated with the initiatives’ managing partners for reductions in monetary contributions, which often included an agreement for transfer of in-kind resources. For example, in fiscal year 2004, the Social Security Administration provided in-kind resources in lieu of requested funding to the e-Authentication initiative, managed by GSA. Specific details of all initiative shortfalls and associated agency explanations can be found in appendix II. 
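The shortfall percentages cited above are simple ratios of the shortfall to the planned budget, using the fiscal year 2003 and 2004 figures from this report:

```python
def shortfall_percent(planned, shortfall):
    """Shortfall expressed as a percentage of the planned budget
    (dollar amounts in millions)."""
    return 100 * shortfall / planned

fy2003 = shortfall_percent(planned=138.7, shortfall=31.0)   # about 22 percent
fy2004 = shortfall_percent(planned=124.2, shortfall=25.4)   # about 20 percent
```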
Most of the shortfalls that occurred in each fiscal year were concentrated in one or two of the initiatives. For example, shortfalls in fiscal year 2003 experienced by the Project SAFECOM initiative—which is to serve as the umbrella program within the federal government to help local, tribal, state, and federal public safety agencies improve public safety response through more effective and efficient interoperable wireless communications—accounted for 57 percent of the total shortfall in that year. According to program officials, these shortfalls resulted from two major causes: (1) the inability of the Departments of Justice and the Interior to obtain congressional approval to reprogram funds from other accounts and (2) the impact of organizational realignments associated with the creation of DHS in fiscal year 2003. SAFECOM officials reported that the fiscal year 2003 shortfalls resulted in delays in the development of standards and architecture efforts related to communications interoperability. For example, the timeline for development of a methodology for assessing communications interoperability nationwide was postponed until sufficient funding could be made available. In fiscal year 2004, shortfalls experienced by the e-Rulemaking and Integrated Acquisition Environment (IAE) initiatives accounted for nearly two-thirds (64 percent) of the total shortfall. The e-Rulemaking initiative, managed by EPA, received only $5,850,208 (51 percent) of its planned fiscal year 2004 budget of $11,505,000 in partner agency contributions. Although the initiative’s funding plan had called for an expanded number of funding partners (from 9 to 35) over the previous fiscal year, OMB did not reflect that plan with passback instructions to the new funding partners. 
According to OMB officials, the disconnect between the initiative’s funding strategy and OMB’s passback instructions represented a “timing problem,” in that the passback instructions were based on the previously defined project scope of 9 partners. However, according to e-Rulemaking’s funding plan for fiscal year 2004, the project’s scope had already been broadened at the time OMB issued its passback instructions. Without passback instructions in fiscal year 2004, planned partner agencies did not make contributions, except in a few instances. E-Rulemaking officials reported that the resulting shortfall in funds, compounded with delays in reaching agreements regarding contributions from other agencies, required them to significantly scale back agency migration to the Federal Docket Management System (FDMS), the centerpiece of the initiative. Specifically, the number of agencies planned to migrate to the system in its first phase of implementation was reduced from 10 to 5, and 2 of those represented only component organizations rather than entire agencies. In IAE’s case, the shortfall in fiscal year 2004 also resulted in part from OMB passback instructions to the Department of Energy not reflecting the amount originally planned by GSA. According to OMB and GSA officials, the passback instructions did not reflect the planned amount due to an administrative error. IAE officials reported that as a result of this shortfall, implementation of several planned systems applications was postponed indefinitely. In addition, IAE received a smaller than anticipated contribution in fiscal year 2004 from the Department of Defense, because Defense provided in-kind resources in lieu of the originally planned funding contribution. Although initiatives generally received funding contributions from federal agencies in the amounts planned, in most cases, funds were contributed in the third and fourth quarters of the fiscal year. 
Specifically, seven of the initiatives reported that they had finalized half or more of their agreements with partner agencies in the third or fourth quarter of the fiscal year. In providing a rationale for contributions made late in the fiscal year, officials from both managing and funding partner agencies reported that the administrative burden associated with drafting, negotiating, and signing interagency agreements, as well as the timing of appropriations bill enactment, contributed to these delays. For illustrative purposes, figure 3 shows the timing of funding obligations for one of the initiatives, IAE. As the figure shows, most funding obligations were finalized in the last quarter of the fiscal year. Both managing and funding partner agencies reported that the extended process of drafting, negotiating, and signing interagency agreements contributed significantly to the timing of funding contributions in fiscal years 2003 and 2004. Officials from 5 of the 10 initiatives that relied on funding contributions from partner agencies specifically cited the administrative burden as a factor in interagency agreements being reached in the third and fourth quarters of the fiscal year. Officials from the Geospatial One-Stop initiative, managed by Interior, reported that potential partner agencies that could have provided modest funding contributions were sometimes not invited to do so because the resource investment required to reach interagency agreements was prohibitively high. In addition to the administrative burden associated with finalizing interagency agreements, managing and funding partner agencies also attributed the timing of contributions to the enactment of appropriations bills relatively late in the fiscal year. For example, in fiscal year 2003, appropriations were not enacted for most agencies until February 20, 2003, almost 5 months into the fiscal year. 
Further, managing partner agencies did not begin the process of establishing memorandums of understanding with partner agencies until after relevant appropriations had been enacted. Although OMB instructed agencies in fiscal year 2004 to make their funding obligations to managing partner agencies within 45 days of enactment of appropriations, agencies reported that this deadline was rarely achieved. According to OMB officials overseeing the initiatives, partner agencies should make every effort to provide promised contributions as early as possible within a funding cycle because of the benefits in facilitating implementation of the initiatives. However, for both fiscal years, agency officials generally did not report that obtaining funds late in the fiscal year caused their initiatives to suffer significant setbacks in executing planned tasks or achieving planned goals. Further, several agency officials noted that the process of drafting and negotiating memorandums of understanding among agencies had improved over time and was becoming more efficient in fiscal year 2005, for example, than in the two fiscal years we examined. These officials attributed the greater efficiency to increased knowledge and experience among officials involved in managing the e-gov initiatives. Most e-gov initiative partner agencies made contributions as planned to the 10 initiatives that relied on such contributions in fiscal years 2003 and 2004, although shortfalls occurred for a variety of reasons. In fiscal year 2004, the e-Rulemaking and IAE initiatives experienced shortfalls when OMB did not reflect the initiatives’ funding plans in budget guidance to partner agencies. Without corresponding budget guidance from OMB, partner agencies generally did not make planned contributions, and as a result, officials had to delay implementation of elements of the planned initiatives. 
Agreements on contributions often were not finalized until late in the fiscal year, in large part because of the administrative burden of obtaining funds through interagency agreements. However, managing partners generally did not report significant disruptions in their planned milestones and objectives, and several commented that the interagency agreement process was becoming more efficient over time. In order to avoid errors and to better assist the managing partner agencies in obtaining funds to execute the OMB-sponsored e-gov initiatives, we recommend that the Director of OMB take steps to ensure that OMB’s budget guidance to partner agencies correctly reflects the funding plans of each of the initiatives that rely on funding contributions. We received oral comments on a draft of this report from representatives of OMB’s Office of E-Government, including the Associate Administrator for E-Government and Information Technology. These representatives generally agreed with the content of our draft report and our recommendation and provided technical comments, which have been incorporated where appropriate. OMB officials stated that, while there had been some problems in administering the funding of the e-government initiatives in fiscal years 2003 and 2004, agencies had made substantial progress in fiscal year 2005 in executing memorandums of understanding as early as possible. Specifically, OMB officials reported that as of April 8, 2005, about 80 percent of fiscal year 2005 funding commitments had been finalized. Although we did not evaluate fiscal year 2005 as part of our review, we noted in the report that the process of drafting and negotiating memorandums of understanding among agencies had reportedly improved over time. As described in the report, agency officials attributed the greater efficiency to increased knowledge and experience among officials involved in managing the e-gov initiatives. 
Unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will provide a copy of this report to the Director of OMB. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions about this report, please contact me at (202) 512-6240 or John de Ferrari, Assistant Director, at (202) 512-6335. We can also be reached by e-mail at koontzl@gao.gov and deferrarij@gao.gov, respectively. Other key contributors to this report included Barbara Collier; Felipe Colón, Jr.; Wilfred Holloway; Sandra Kerr; Frank Maguire; and Jamie Pressman. Our objectives were, for fiscal years 2003 and 2004, to (1) determine whether federal agencies made contributions in the amounts planned to the 10 e-gov initiatives that relied on such contributions, and (2) determine the timing of these contributions and reasons for any contributions made late in each fiscal year. To determine whether federal agencies made monetary contributions to OMB-sponsored e-gov initiatives for fiscal years 2003 and 2004 in the amounts planned, we analyzed detailed funding data and supporting documentation from both managing partner and funding partner agencies. This documentation included the initiative’s agreed-upon funding plans for both fiscal years, as well as signed interagency agreements for each contribution. We also held follow-up discussions with agency officials to clarify the timing and amounts of contributions. For example, to determine shortfalls, we compared planned contributions with amounts obligated by funding partner agencies in their signed agreements and obtained rationale from agency officials regarding any differences. 
We determined the timing of partner agency contributions based on when in the fiscal year funds were obligated—the dates on which formal agreements such as memorandums of understanding and/or interagency agreements were signed by both managing and funding partner agencies. We also obtained from agency officials the major reasons why monetary contributions were made late in the fiscal year. Our work was conducted in the Washington, D.C., metropolitan area, from September 2004 to April 2005, in accordance with generally accepted government auditing standards.

Managing partner agency: Department of Homeland Security (DHS)
Purpose: Provide federal, state, and local emergency managers online access to disaster management–related information, planning, and response tools.
Funding: Disaster Management project officials reported that their fiscal year 2003 and 2004 funding plans were developed by the Office of Management and Budget (OMB) and communicated to partner agencies through passback instructions. In fiscal year 2003, seven of nine partner agencies were to make equal contributions of approximately $1.5 million each, with DHS contributing a larger share than the others. In fiscal year 2004, for most partners the per-partner contribution was decreased to $681,250, again with DHS contributing a larger share. The decrease was due to a rescoping of the initiative that cancelled plans to develop new tool sets and reduced funding for the Disaster Management Web portal. Table 4 details contributions to the initiative for fiscal years 2003 and 2004. Funding shortfalls were related to two funding partner agencies: Interior did not make planned fiscal year 2003 or fiscal year 2004 contributions, and Commerce did not make its planned fiscal year 2004 contribution. Interior officials stated that their request to reprogram funds in 2004 to support Disaster Management was not approved by Congress. 
Commerce officials reported that they did not make their fiscal year 2004 contribution because Commerce’s appropriations bill included a restriction preventing the National Oceanic and Atmospheric Administration (NOAA), the principal Commerce participant for Disaster Management, from contributing fiscal year 2004 funds to any of the e-gov initiatives. DHS officials reported that the late timing of contributions was predominantly the result of agencies having to reformulate internal financial plans to meet the unforeseen e-government requirement. For example, Justice and two agencies transferred from Transportation made their fiscal year 2003 contributions in fiscal year 2004 for a variety of reasons. Justice officials reported that they were not permitted to reprogram the required funds during fiscal year 2003. Instead, they negotiated with OMB to make their fiscal year 2003 contribution in fiscal year 2004. A portion of Transportation’s fiscal year 2003 contribution was delayed by the transfer of key organizations—the Coast Guard and Transportation Security Administration—to DHS. According to Disaster Management officials, interruptions caused by late funding contributions and shortfalls included a delay in adding new responder groups to Disaster Management Interoperability Services (DMIS), delays in holding meetings and workshops with the emergency management community (including first responders) to facilitate development of interoperability standards, and delays in implementing an alternative site to ensure continuity of operations for the DMIS and DisasterHelp.gov servers.

Managing partner agency: General Services Administration (GSA)
Purpose: Minimize the burden on businesses, the public, and government when obtaining services online by providing a secure infrastructure for online transactions, eliminating the need for separate processes for the verification of identity and electronic signatures. 
Funding: In fiscal year 2003, e-Authentication had 14 funding partner agencies, and the funding plan called for $25 million in agency contributions to be divided among these partners based on criteria such as expected transaction volume and agency size. For fiscal year 2004, the total funding requirement was divided equally among the partner agencies ($377,000 per partner), with GSA contributing a larger amount ($600,000). On June 29, 2004, the e-Authentication project manager sent a memorandum to members of the Executive Steering Committee explaining that the new federated identity architecture approach that the initiative had decided to adopt could be completed at a lower cost ($1.86 million less) than the original approach (developing an e-authentication gateway), and therefore, the initiative was reducing expected fiscal year 2004 agency contributions from $377,000 to $244,361. Table 5 details contributions to the initiative for fiscal years 2003 and 2004. GSA officials reported that five agencies did not make their full monetary contributions as planned in fiscal year 2003 and two agencies did not do so in fiscal year 2004. In each of these instances, reductions were negotiated between GSA and the funding partner agency and generally included a provision for in-kind resources (e.g., staff time) in lieu of the full monetary amount. For example, in fiscal year 2003, in lieu of the $2.3 million contribution planned for EPA, GSA agreed to a $350,000 cash transfer; a $125,000 grant to be funded, administered, and managed by EPA; and various in-kind contributions. As another example, the Department of Education agreed to lead a proof-of-concept effort to test the use of its federal student aid personal identification number identity credential through the planned E-Authentication gateway in lieu of providing the full requested monetary amount. 
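For illustration only, the figures in the June 29, 2004, memorandum can be cross-checked with simple arithmetic: the per-partner cut, multiplied across the standard-share partners, should roughly equal the $1.86 million in cited savings. The fiscal year 2004 partner count is not stated in this report, so the sketch below treats it as the unknown to be implied:

```python
# Illustrative cross-check of the fiscal year 2004 e-Authentication reduction.
# All dollar figures come from the text; the partner count is derived, not stated.
original_share = 377_000   # planned per-partner contribution
reduced_share = 244_361    # revised per-partner contribution
savings_cited = 1_860_000  # savings cited in the June 29, 2004, memorandum

per_partner_cut = original_share - reduced_share      # 132,639 per partner
implied_partners = savings_cited / per_partner_cut    # roughly 14 partners

print(f"Per-partner cut: ${per_partner_cut:,}")
print(f"Implied number of standard-share partners: {implied_partners:.1f}")
```

The implied count of about 14 is consistent with the 14 funding partners the initiative had in fiscal year 2003.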
Project officials reported that contributions from NASA, the Treasury, and Housing and Urban Development were reduced because these agencies simply did not have the funds available to contribute the planned amounts. In fiscal year 2004, the Social Security Administration provided in-kind resources in lieu of its planned funding contribution. Finally, in fiscal year 2004, Commerce did not make its full planned contribution because a stipulation in the fiscal year 2004 omnibus appropriations bill prohibited NOAA from spending any fiscal year 2004 appropriations on the OMB-sponsored e-government initiatives. GSA officials reported that the administrative burden associated with the memorandum of understanding process and appropriations enacted late in the fiscal year contributed to the late timing of contributions in fiscal years 2003 and 2004. For example, the officials noted that in some cases proposed memorandums of understanding that had already been signed by GSA officials were lost in the process of traversing partner agency offices, resulting in the need to obtain signatures from high-level officials multiple times.

Managing partner agency: Department of Education
Purpose: Create a single point of access for citizens to locate information on federal loan programs, and improve back-office loan functions.
Funding: Although the funding allocation for each of the e-Loans initiative’s five partners is straightforward ($397,000 per agency for both fiscal years), the initiative is somewhat unusual in that the managing partner does not centrally manage all the funds or activities of the initiative. Rather, the initiative is divided into four work streams with partner agencies taking the lead on specific work streams. The lead agencies used their own funding up to $397,000 per fiscal year, and if planned costs exceeded that amount, they obtained contributions from other funding partner agencies. 
For example, the Department of Housing and Urban Development manages one of the four work streams and used its own fiscal year 2003–2004 funds to support it, while also receiving contributions from Agriculture in 2003 and 2004 and from Veterans Affairs in 2004. Table 6 details monetary contributions to the e-Loans initiative for fiscal years 2003 and 2004. Two shortfalls from planned amounts were associated with the Small Business Administration (SBA); however, both instances represent negotiated reductions. In fiscal year 2003, as a result of e-Loans budget negotiations between SBA and OMB and the expected contract cost of work stream deliverables, SBA’s fiscal year 2003 contribution was reduced. Education officials stated that this decision was supported by the partner agencies. In fiscal year 2004, SBA originally intended to spend $300,000 of its $397,000 commitment for activities that SBA subsequently determined could be supported under an existing contract at no additional cost. Accordingly, SBA reallocated the funds to support other e-gov work. Education officials noted that all partner agencies were affected by the enactment of appropriations late in fiscal years 2003 and 2004, which limited agencies’ ability to transfer or make funds available. Nevertheless, the officials reported that despite the timing of appropriations, partner agencies made their contributions in a timely manner.

Managing partner agency: Environmental Protection Agency (EPA)
Purpose: Allow citizens to easily access and participate in the rulemaking process. Improve access to, and quality of, the rulemaking process for individuals, businesses, and other government entities while streamlining and increasing the efficiency of internal agency processes.
Funding: In fiscal year 2003, the e-Rulemaking project management office requested $100,000 apiece from nine partner agencies to support the initiative’s activities. 
These allocations were reflected in OMB’s passback instructions to the agencies. In addition, the Department of Transportation—the former managing partner of the initiative—was asked to transfer $5 million to EPA. For fiscal year 2004, the initiative’s funding workgroup developed a plan allocating a budget of $11.5 million among 35 anticipated funding partners, based on criteria such as agency budget size and average number of rules issued per year. OMB, however, issued passback instructions only to DHS and eight of the nine agencies that had funded the initiative in fiscal year 2003. Table 7 details contributions to the initiative for fiscal years 2003 and 2004. E-Rulemaking officials reported that the combination of shortfalls and late contributions negatively affected the initiative, specifically in fiscal year 2004. In fiscal year 2003, two agencies, the Nuclear Regulatory Commission (NRC) and Transportation, did not make their full contributions as planned. Although OMB’s passback to NRC for fiscal year 2003 included the $100,000 amount allocated to each partner, NRC asserted that it was not subject to OMB’s budget guidance because it derives most of its budget from user fees. Accordingly, NRC did not make its planned contribution. Transportation, the former managing partner of the initiative, provided monetary funds and in-kind support in lieu of its full planned contribution in fiscal year 2003. In fiscal year 2004, Transportation did not make its full contribution because it believed the amount should be reduced to reflect the transfer of the Transportation Security Administration and the Coast Guard to DHS in fiscal year 2003. However, based on subsequent discussions between E-Rulemaking and Transportation officials, Transportation officials told us that they have agreed to pay the remaining fiscal year 2004 balance in fiscal year 2005. 
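The fiscal year 2004 funding workgroup plan allocated the $11.5 million budget using criteria such as agency budget size and rulemaking volume. The exact formula is not described in this report, so the following is only a hypothetical sketch of a weighted allocation of that general kind; the agency names, weights, and figures are invented for illustration:

```python
# Hypothetical sketch of a weighted cost-allocation formula of the kind the
# e-Rulemaking funding workgroup described: shares of an $11.5 million budget
# driven by relative agency budget size and average rules issued per year.
# All agency names, weights, and input figures below are invented.
TOTAL_BUDGET = 11_500_000

partners = {
    # agency: (relative budget size, average rules issued per year)
    "Agency A": (10.0, 120),
    "Agency B": (4.0, 60),
    "Agency C": (1.0, 15),
}

def allocate(partners, total, w_budget=0.5, w_rules=0.5):
    """Split `total` using a weighted blend of budget size and rule volume."""
    budget_sum = sum(b for b, _ in partners.values())
    rules_sum = sum(r for _, r in partners.values())
    shares = {}
    for name, (budget, rules) in partners.items():
        weight = w_budget * budget / budget_sum + w_rules * rules / rules_sum
        shares[name] = round(total * weight)
    return shares

for agency, share in allocate(partners, TOTAL_BUDGET).items():
    print(f"{agency}: ${share:,}")
```

Under any such scheme, larger, more active rulemaking agencies bear proportionally more of the cost, which matches the criteria the workgroup reportedly used.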
Additionally, the Department of Energy did not make its full contribution as planned because the OMB passback did not reflect the planned amount. This occurred because, in the passback, OMB erroneously assessed the Department of Energy the same total e-gov contribution that it had assessed the Department of Education (the two departments have similar abbreviations). In fiscal year 2004, although e-Rulemaking requested funds from 35 agencies based on the budget workgroup’s funding plan, OMB issued passback instructions to just nine agencies, resulting in a shortfall of $5.6 million, nearly half of the initiative’s planned budget. Of the agencies that did not receive passback instructions, only three contributed monetary resources in fiscal year 2004, and one contributed in-kind resources in lieu of funds. E-Rulemaking officials reported that they were not provided with an explanation as to why OMB did not issue passback instructions to all 35 agencies as had been planned. According to OMB officials, the disconnect between the initiative’s funding strategy and OMB’s passback instructions represented a “timing problem,” in that the passback instructions were based on the previously defined project scope of nine partners. The OMB officials did not state that the planned expansion of e-Rulemaking was inappropriate, noting that fiscal year 2005 passback instructions did reflect the larger number of partners. However, without passback instructions in fiscal year 2004, planned partner agencies did not make contributions, except in a few instances. E-Rulemaking officials reported that the resulting shortfall in funds, compounded with delays in receiving funds from other agencies, required them to scale back agency migration to the Federal Docket Management System (FDMS), the centerpiece of the initiative. 
Specifically, although the initiative planned to migrate 10 agencies to the FDMS in its first phase of implementation, the revised schedule now includes only 5 agencies, 2 of which are component agencies of larger departments.

Managing partner agency: Department of the Interior
Purpose: Provide federal and state agencies with a single point of access to map-related data to enable consolidation of redundant data.
Funding: Planned contributions to the Geospatial One-Stop initiative were initially distributed among the agencies that were major federal geospatial data producers or were members of the Federal Geographic Data Committee. Partner agency concurrence in both fiscal years was obtained at a meeting hosted by Interior. Partners willing to contribute more than the minimum $100,000 agency allocation indicated their intention to do so. Table 8 details funding for the initiative for fiscal years 2003 and 2004. Although the Geospatial One-Stop initiative experienced no shortfalls from its overall planned budget in fiscal year 2003, one agency, Transportation, contributed less than planned because in-kind resources were provided in lieu of the full requested amount. In fiscal year 2004 there were two shortfalls. Geospatial One-Stop officials reported that the shortfall from Interior arose because its requested fiscal year 2004 increase was not funded by Congress and an agreement was made for Interior to provide $200,000 of in-kind resources in lieu of monetary funds. Additionally, there was a shortfall of $200,000 from Commerce because of the prohibition on NOAA contributing funds to e-government projects in fiscal year 2004. Project officials stated that extensive paperwork and staff time were invested in getting agreements drafted, reviewed, finalized, and signed. Some potential partners who could have participated at a lower level of funding were not invited because of the high overhead required to establish interagency agreements. 
The project officials stated their belief that the considerable amount of staff time required for managing the cross-agency approach to funding could be more effectively used carrying out the actual work of the project. They also stated that the administrative overhead of the agreements made it infeasible to allocate costs fairly among partner agencies.

Managing partner agency: Department of Labor
Purpose: Provide a single point of access for citizens to locate and determine potential eligibility for government benefits and services.
Funding: Planned funding partner contributions for the GovBenefits initiative were based on a funding plan developed in October 2002 that placed each of the 10 partner agencies, including Labor, into one of three categories based on the anticipated volume of benefit program information each agency would generate for the GovBenefits Web site. The same approach was used in fiscal years 2003 and 2004. As managing partner, Labor contributed the largest share. Table 9 details GovBenefits funding for fiscal years 2003 and 2004. The GovBenefits initiative received all planned contributions from funding partners in fiscal year 2003. In fiscal year 2004, only one agency, DHS, failed to make its contribution as planned, resulting in a $491,000 shortfall. According to OMB officials, the planned allocation for GovBenefits was erroneously not included in its annual budget guidance to DHS. GovBenefits project officials reported that funding partner agencies transferred funds as soon as memorandums of understanding were agreed upon.

Managing partner agency: Health and Human Services
Purpose: Create a single portal for all federal grant customers to find, apply for, and ultimately manage grants online.
Funding: Planned contributions for Grants.gov’s 11 partner agencies were allocated based on a fiscal year 2002–2004 funding algorithm that classified grant-making agencies by size. 
In addition to the 11 partner agency requests, OMB identified development, modernization, and enhancement funds in specific agencies’ budgets for Grants.gov funding. Health and Human Services also received contributions in fiscal year 2004 from the Department of Energy and GSA. Table 10 details funding for Grants.gov for fiscal years 2003 and 2004. Grants.gov received all of its planned partner contributions from its funding partner agencies in fiscal year 2003. In fiscal year 2004, Grants.gov received almost all of its planned partner contributions from its funding partner agencies; the exception was Commerce, because of the appropriations bill restriction on NOAA contributing funds to e-gov initiatives. Commerce contributed in-kind resources in lieu of the full requested funds in fiscal year 2004. Managing partner agency: General Services Administration Purpose: Create a secure business environment that will facilitate and support the cost-effective acquisition of goods and services by agencies, while eliminating inefficiencies in the current acquisition environment. Funding: In addition to funding from GSA’s General Supply Fund, the IAE initiative relied on monetary contributions from partner agencies in fiscal years 2003 and 2004. Planned contributions were allocated based on each agency’s procurement volume as reported in the Federal Procurement Data System. Table 11 details funding for the IAE initiative for fiscal years 2003 and 2004. IAE project officials reported that for fiscal years 2003 and 2004, the Department of Energy contribution was lower than the planned amount because of an error by OMB that assessed the Department of Energy the same amount as the Department of Education (the two departments have similar abbreviations). This resulted in nearly a $10 million shortfall over fiscal years 2003 and 2004. 
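The IAE allocation approach, in which each partner's share is proportional to its procurement volume as reported in the Federal Procurement Data System, can be sketched as follows. The agency names, volumes, and total budget are hypothetical; only the proportional method reflects the plan described above:

```python
# Illustrative sketch of proportional allocation by procurement volume, the
# method described for the IAE initiative. Agency names, volumes, and the
# total budget are invented for illustration.
def proportional_allocation(volumes, total):
    """Assess each agency a share of `total` proportional to its volume."""
    grand_total = sum(volumes.values())
    return {agency: round(total * v / grand_total)
            for agency, v in volumes.items()}

# Hypothetical annual procurement volumes, in dollars.
procurement_volumes = {
    "Agency A": 200e9,
    "Agency B": 25e9,
    "Agency C": 5e9,
}

shares = proportional_allocation(procurement_volumes, total=20_000_000)
for agency, share in shares.items():
    print(f"{agency}: ${share:,}")
```

A volume-based rule of this kind also explains why requests to the smallest agencies were dropped: their proportional shares were too small to justify the administrative cost of an interagency agreement.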
Officials reported that this indirectly affected the initiative: several applications were postponed indefinitely because funds were not available at the time. In fiscal year 2004, remaining shortfalls from the planned amount represented negotiated reductions. For example, as reported by both IAE and DOD officials, DOD contributed less than the planned amount, instead providing in-kind support (e.g., staff time and existing IT resources) for project activities. Commerce’s fiscal year 2004 shortfall was again attributable to the fiscal year 2004 appropriations bill language that prohibited NOAA from contributing to any of the e-gov initiatives. GSA officials reported that continuing resolutions and “red-tape” issues such as paperwork and lost documents prolonged the transfer of funds. Specifically noted was the administrative burden on both the managing and funding partner agencies in crafting interagency agreements. Although in fiscal year 2003, requests were made of the smaller agencies (including the Broadcasting Board of Governors, the Equal Employment Opportunity Commission, the Executive Office of the President, the Securities and Exchange Commission, the Smithsonian Institution, and the Peace Corps), only the Peace Corps made its requested contribution. The IAE project manager reported that a decision was made that the administrative costs to process the memorandum of understanding and funding requests could not be offset by the funds collected; therefore, fiscal year 2003 contributions were not pursued, and funds were not sought from these agencies in fiscal year 2004.

Managing partner agency: Department of Homeland Security
Purpose: Serve as the umbrella program within the federal government to help local, tribal, state, and federal public safety agencies improve public safety response through more effective and efficient interoperable wireless communications. 
Funding: According to SAFECOM project officials, contributions were determined by OMB and communicated through budget passback instructions. Table 12 summarizes funding for fiscal years 2003 and 2004. SAFECOM officials reported experiencing shortfalls and receiving funds from partner agencies late in the fiscal year. As we previously reported, SAFECOM has been managed by three different agencies since its inception. In fiscal year 2003, SAFECOM received only about $17 million of the $34.9 million OMB had allocated as contributions from funding partners. According to program officials, these shortfalls resulted from two major causes: (1) the inability of the Departments of Justice and the Interior to obtain congressional approval to reprogram funds from other accounts and (2) the impact of organizational realignments associated with the creation of DHS in fiscal year 2003. SAFECOM officials reported that the shortfall experienced in fiscal year 2003 resulted in delays in the development of the standards and architecture efforts related to communications interoperability. For example, the timeline of the National Baseline Methodology and Assessment of communications interoperability was extended until sufficient funding was available. According to agency officials, Justice was not authorized to reprogram funds and negotiated to provide its fiscal year 2003 allocation in fiscal year 2004. The amount ultimately contributed was reduced through negotiation with OMB. Although SAFECOM re-requested Interior’s unpaid fiscal year 2003 contribution in addition to its fiscal year 2004 allocation, Interior officials reported that their reprogramming request was denied. According to DHS officials, fiscal year 2003 unpaid amounts from FEMA and the Transportation Security Administration were not re-requested in fiscal year 2004 at the direction of the DHS Under Secretary for Management. 
SAFECOM officials also reported that in fiscal year 2004, they were unable to collect funding resources in a timely manner because of enactment of the fiscal year 2004 appropriations bill late in the fiscal year. Project officials reported that this affected the initiative’s progress by delaying start dates for certain tasks and creating breaks in project service and performance.

Managing partner agency: Department of the Interior
Purpose: Provide a single-point-of-access, user-friendly, Web-based resource to citizens, offering information and access to government recreational sites.
Funding: The Recreation One-Stop initiative relied on monetary contributions from four partner agencies, including Interior, in fiscal years 2003 and 2004. Additionally, the initiative received $800,000 from the E-Government Fund in fiscal year 2003. According to project officials, Recreation One-Stop partners agreed that agencies receiving major benefits from the initiative would contribute $50,000 annually, and agencies receiving fewer benefits would contribute $25,000 annually, with the managing partner contributing a larger share. Table 13 details contributions for fiscal years 2003 and 2004. Recreation One-Stop officials reported that all fiscal year 2003 and 2004 planned contributions had been received; however, officials noted that the logistics of transferring funds according to agency-specific procedures were time-consuming, and as a result funding requests to “minor partners” were eliminated for fiscal year 2006.
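The tiered agreement described above ($50,000 annually for agencies receiving major benefits, $25,000 for the rest, plus a larger managing-partner share) amounts to a simple lookup. A minimal sketch, with a hypothetical partner list and a hypothetical managing-partner amount:

```python
# Illustrative sketch of the Recreation One-Stop tiered funding agreement.
# The tier amounts come from the text; the partner list and the managing
# partner's larger share are hypothetical.
MAJOR_BENEFIT = 50_000   # agencies receiving major benefits
MINOR_BENEFIT = 25_000   # agencies receiving fewer benefits

planned_contributions = {
    "Interior (managing partner)": 150_000,  # hypothetical larger share
    "Partner A": MAJOR_BENEFIT,
    "Partner B": MAJOR_BENEFIT,
    "Partner C": MINOR_BENEFIT,
}

total = sum(planned_contributions.values())
print(f"Planned annual contributions: ${total:,}")
```

Even under this flat-tier scheme, the report notes the per-transfer administrative cost was high enough that the $25,000-tier requests were dropped for fiscal year 2006.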
In accordance with the President's Management Agenda, the Office of Management and Budget (OMB) has sponsored initiatives to promote electronic government--the use of information technology, such as Web-based Internet applications, to enhance government services. Generally, these "e-gov" initiatives do not have direct appropriations but depend on a variety of funding sources, including monetary contributions from participating agencies. GAO was asked to review the funding of e-gov initiatives that relied on such contributions: specifically, to determine, for fiscal years 2003 and 2004, whether agencies made contributions in the amounts planned and to determine the timing of these contributions. Most federal agencies contributed funds as originally planned by the managing partners of the 10 initiatives that relied on such contributions in fiscal years 2003 and 2004. Nevertheless, 6 of the 10 initiatives experienced shortfalls from their funding plans in fiscal year 2003 and 9 in 2004. The rationale provided by agencies for contributions that were less than planned included: (1) substitution of in-kind resources in lieu of funds, (2) lack of budget guidance from OMB reflecting planned funding amounts, (3) inability to obtain permission to reprogram funds from other accounts, and (4) organizational realignments associated with creation of the Department of Homeland Security in fiscal year 2003. For example, the e-Rulemaking initiative (managed by the Environmental Protection Agency) received only 51 percent of its planned fiscal year 2004 contributions. Although the initiative's funding plan called for adding new funding partners in that year, OMB did not reflect this expansion when it issued its annual budget guidance to agencies. As a result, the newly added agencies generally did not contribute. 
According to E-Rulemaking officials, the resulting shortfall in funds, along with delays in receiving funds from other agencies, required them to significantly scale back their plans. In most cases, fiscal year 2003 and 2004 contributions from partner agencies were made in the third and fourth quarters of the fiscal year. Agency officials identified the administrative burden associated with drafting, negotiating, and signing interagency agreements, as well as the delayed enactment of the fiscal year 2003-2004 appropriations bills, as contributing to this timing of contributions. However, according to officials from several agencies, although the administrative burden is still high, agencies have become more accustomed to funding strategies based on partner agency contributions.
FDA’s responsibilities related to medical devices include premarket and postmarket oversight—spanning, for example, both premarket review of devices and postmarket surveillance (the collection and analysis of data on marketed devices). As part of both premarket and postmarket oversight, FDA is responsible for inspecting certain foreign and domestic establishments to ensure they meet required manufacturing standards. Relative to the PMA process, the 510(k) premarket review process is generally:

Less stringent. For most 510(k) submissions, clinical data are not required and substantial equivalence will normally be determined based on comparative device descriptions, including performance data. In contrast, in order to meet the PMA approval requirement of providing reasonable assurance that a new device is safe and effective, most original PMAs and some PMA supplements require clinical data.

Faster. FDA generally makes decisions on 510(k) submissions faster than it makes decisions on PMA submissions. FDA’s fiscal year 2009 goal is to review and decide on 90 percent of 510(k) submissions within 90 days and 98 percent of them within 150 days. The comparable goal for PMAs is to review and decide upon 60 percent of original PMA submissions in 180 days and 90 percent of them within 295 days.

Less expensive. The estimated cost to FDA for reviewing submissions is substantially lower for 510(k) submissions than for PMA submissions. For fiscal year 2005, for example, according to FDA the estimated average cost for the agency to review a 510(k) submission was about $18,200, while the estimate for a PMA submission was about $870,000. For the applicant, the standard fee provided to FDA at the time of submission is also significantly lower for a 510(k) submission than for a PMA submission. In fiscal year 2009, for example, the standard fee for 510(k) submissions is $3,693, while the standard fee for original PMA submissions is $200,725. 
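The cost gap between the two review processes can be expressed as simple ratios using the figures above; a quick sketch:

```python
# Ratios of PMA to 510(k) costs, using the figures reported in the text.
fda_review_cost = {"510(k)": 18_200, "PMA": 870_000}  # FY2005 FDA estimates
standard_fee = {"510(k)": 3_693, "PMA": 200_725}      # FY2009 standard fees

review_ratio = fda_review_cost["PMA"] / fda_review_cost["510(k)"]
fee_ratio = standard_fee["PMA"] / standard_fee["510(k)"]

print(f"A PMA review costs FDA about {review_ratio:.0f} times a 510(k) review")
print(f"A PMA standard fee is about {fee_ratio:.0f} times a 510(k) fee")
```

That is, by these figures a PMA review costs FDA roughly 48 times as much as a 510(k) review, and the applicant's standard fee is roughly 54 times higher.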
In general, class I and II device types subject to premarket review are required to obtain FDA clearance through the 510(k) process, and class III device types are required to obtain FDA approval through the more stringent PMA process. With the enactment of the Medical Device Amendments of 1976, Congress imposed requirements under which all class III devices would be approved through the PMA process before being marketed in the United States. However, certain types of class III devices that were in commercial distribution in the United States before May 28, 1976 (called preamendment device types) and those determined to be substantially equivalent to them may be cleared through the less stringent 510(k) process until FDA publishes regulations requiring them to go through the PMA process or reclassifies them into a lower class. Prior to 1990, FDA issued regulations requiring some class III device types to go through the PMA process but many class III device types continued to be reviewed through the 510(k) process. The Safe Medical Devices Act of 1990 required FDA (1) to reexamine the preamendment class III device types for which PMAs were not yet required to determine if they should be reclassified to class I or II or remain in class III and (2) to establish a schedule to promulgate regulations requiring those preamendment device types that remain in class III to obtain FDA approval through the PMA process. Accordingly, all class III devices are eventually to be reviewed through the PMA process. In addition to its responsibilities for premarket review of devices, FDA’s postmarket activities to help ensure that devices already on the market remain safe and effective include collecting and analyzing reports of device-related adverse events and reviewing annual reports required from manufacturers. FDA’s reporting framework for device-related adverse events includes both mandatory and voluntary components. 
Under FDA’s Medical Device Reporting regulation, manufacturers are required to report device-related deaths, serious injuries, and certain malfunctions to FDA and user facilities, such as hospitals and nursing homes, are required to report device-related deaths to FDA and to the device manufacturer and to report serious injuries to the manufacturer (or, if the manufacturer is unknown, to FDA). Manufacturers and user facilities, as well as health professionals and consumers, may also voluntarily report less serious device-related events to FDA. FDA maintains databases that include both mandatory and voluntary reports of device-related adverse events, which agency officials can search to conduct research on trends or emerging problems with device safety. FDA scientists review these reports, request follow-up investigations, and determine whether further action is needed to ensure patient safety. Such action may include product recalls, public health advisories to notify health care providers and the public of potential device-related health and safety concerns, or requiring a manufacturer to change the instructions in its device labeling. Finally, as part of both premarket and postmarket oversight of medical devices, FDA is responsible for inspecting certain foreign and domestic establishments to ensure they meet required manufacturing standards. Such inspections are FDA’s primary means of assuring that the safety and effectiveness of devices are not jeopardized by poor manufacturing practices. Requirements governing domestic and foreign inspections differ. Specifically, FDA is required to inspect domestic establishments that manufacture class II or III devices every 2 years. There is no comparable requirement to inspect foreign establishments. 
In 2002, in response to concerns about FDA’s ability to meet its responsibilities for inspecting device manufacturing establishments, Congress included certain provisions in the Medical Device User Fee and Modernization Act of 2002 (MDUFMA). These provisions were designed to (1) increase the number of inspected device manufacturing establishments and (2) help device manufacturers meet the inspection requirements of both the United States and foreign countries in a single inspection. Specifically, MDUFMA required FDA to accredit third-party organizations to conduct inspections of certain foreign and domestic establishments. In response, FDA implemented its Accredited Persons Inspection Program, which permits certain establishments to voluntarily request inspections from third-party organizations to meet inspectional requirements. Additionally, in September 2006, in partnership with Health Canada, FDA established another program for inspection by accredited third parties—the Pilot Multi-purpose Audit Program—that allows accredited organizations to conduct a single inspection to meet the regulatory requirements of both countries. Although Congress envisioned that all class III devices would eventually be approved through the more stringent PMA process, we found that this was not always the case. In January 2009, we reported that in fiscal years 2003 through 2007, FDA reviewed all submissions for class I and II devices through the 510(k) process, and reviewed submissions for some types of class III devices through the 510(k) process and others through the PMA process. FDA reviewed all 13,199 submissions for class I and class II devices through the 510(k) process, clearing 11,935 (90 percent) of these submissions. FDA also reviewed 342 submissions for class III devices through the 510(k) process, clearing 228 (67 percent) of these submissions. 
In addition, the agency reviewed 217 original PMA submissions and 784 supplemental PMA submissions for class III devices and approved 78 percent and 85 percent, respectively, of these submissions. Table 1 summarizes the FDA review decisions, by class of device, in fiscal years 2003 through 2007 for 510(k) and PMA submissions. With respect to class III devices, in fiscal years 2003 through 2007, FDA reviewed submissions for some types of class III devices through the 510(k) process and other types through the PMA process. Specifically, FDA reviewed 342 submissions for new class III devices through the 510(k) process, determining 228 (67 percent) of these submissions to be substantially equivalent to a legally marketed device. (See fig. 1.) The 228 class III device submissions FDA cleared through the 510(k) process in fiscal years 2003 through 2007 were for devices such as artificial hip joints, implantable blood access devices, and automated external defibrillators. Class III 510(k) submissions were more likely than other 510(k) submissions to be for device types that were implantable, were life sustaining, or posed a significant risk to the health, safety, or welfare of a patient. Of the 228 510(k) submissions for class III devices that FDA cleared in fiscal years 2003 through 2007, FDA’s databases flagged 66 percent as being for device types that are implantable, life sustaining, or of significant risk. This compares to no 510(k) submissions for class I devices and 25 percent of 510(k) submissions for class II devices. 
Although the Medical Device Amendments of 1976 imposed requirements under which all class III devices would be approved through the PMA process, and the Safe Medical Devices Act of 1990 required that FDA either reclassify or establish a schedule for requiring PMAs for class III device types, this process remains incomplete. The 228 class III device submissions cleared by FDA through the 510(k) process in fiscal years 2003 through 2007 represented 24 separate types of class III devices. As of October 2008, 4 of these device types had been reclassified to class II, but 20 device types could still be cleared through the 510(k) process. FDA officials said that the agency is committed to issuing regulations either reclassifying or requiring PMAs for the class III devices currently allowed to receive clearance for marketing via the 510(k) process, but did not provide a time frame for doing so. We recommended that the Secretary of Health and Human Services direct the FDA Commissioner to expeditiously take steps to issue regulations for each class III device type currently allowed to enter the market through the 510(k) process. These steps should include issuing regulations to (1) reclassify each device type into class I or class II, or require it to remain in class III, and (2) for those device types remaining in class III, require approval for marketing through the PMA process. In commenting on a draft of our report, HHS agreed with our recommendation, noting that since 1994 (when FDA announced its strategy to implement provisions of the Safe Medical Devices Act of 1990) FDA has called for PMAs or reclassified the majority of class III devices that did not require PMAs at that time. 
The department’s comments, however, did not specify time frames in which FDA will address the remaining class III device types allowed to enter the market via the 510(k) process, stating instead that the agency is considering its legal and procedural options for completing this task as expeditiously as possible, consistent with available resources and competing time frames. Given that more than 3 decades have passed since Congress envisioned that all class III devices would eventually be required to undergo premarket review through the more stringent PMA process, we believe it is imperative that FDA take immediate steps to address the remaining class III device types that may still enter the market through the less stringent 510(k) process by requiring PMAs for or reclassifying them. In April 2009, FDA took what it termed “the first step towards completing the review of Class III device types predating the 1976 law, as was recommended by the U.S. Government Accountability Office (GAO) in a January 2009 report to Congress.” Specifically, FDA announced that it was requiring manufacturers of 25 types of class III medical devices marketed prior to 1976 to submit safety and effectiveness information to the agency by August 7, 2009, so that it may evaluate the risk level for each device type. In the Federal Register notice announcing the requirement, FDA stated that once the safety and effectiveness information was submitted, the agency would be able to determine which device types would be required to undergo the agency’s most stringent premarket review process. FDA’s requirement that manufacturers submit safety and effectiveness information is an essential initial step toward implementing our recommendation and fully implementing the law. However, FDA did not specify a time frame for how quickly it will review the submitted information, determine whether to reclassify the device types, and require PMAs for those that remain in class III. 
It should be noted, however, that while the PMA process is more stringent than the 510(k) process, FDA can approve a device through the PMA process without clinical data demonstrating the safety and effectiveness of the device. For example, in our review of FDA’s approval of PMAs for certain temporomandibular joint (jaw) implants, FDA managers overruled their review staff to approve one of the devices, despite the review staff’s concern over the sufficiency of the clinical data. The review decision stated that either good engineering data or good clinical data—not necessarily both—were acceptable to approve a device and accepted the engineering data as a basis for approving an implanted device for which the review staff had determined that the clinical data were inadequate. In our recent high-risk report, we noted that FDA’s monitoring of postmarket safety of approved products, including medical devices, has been questioned by numerous groups. In 2008, we reported that the number of adverse event reports associated with all devices increased substantially from about 77,000 reports in 2000 to about 320,000 reports in 2006. FDA’s review and analysis of these reports provides information about trends such as infection outbreaks or common user errors caused by inadequate instructions and may result in actions such as device recalls. During fiscal year 2006, FDA initiated 651 recall actions involving 1,550 medical devices. This included 21 recall actions in which FDA determined that it was likely that the use of the medical device would cause serious health problems or death. We and FDA have identified shortcomings in FDA’s postmarket surveillance. 
In 2006, FDA reported that the agency’s Center for Devices and Radiological Health’s ability to understand the risks of adverse events related to the use of medical devices—whether used in the home of a patient, in a hospital, in a laboratory, or in the office of a private practitioner—is limited both by a lack of informative, validated adverse event reports and by a lack of quality epidemiologic information. FDA specifically reported: One major constraint is the lack of objective data about device use and device-related problems. Underreporting of adverse events continues to be a problem. FDA’s medical device reporting system is a passive system—that is, the reports are entered as reported by manufacturers, facilities, practitioners, or patients—and, as a result, some reports are incomplete or difficult to understand. The volume of submitted reports exceeded the center’s ability to consistently enter or review the data in a routine manner. In its 2006 report, FDA identified areas for improvement in postmarket problem assessment for the center. In 2008, FDA officials told us that while they have a number of strategies to prioritize their reviews, they still cannot review all the reports they receive. We have also found shortcomings in FDA’s monitoring of manufacturers’ compliance with requirements following device approval. In 2007, we found that manufacturers do not always submit their required annual reports in a timely manner. For example, FDA was missing five annual reports from the manufacturer of one device we were examining, but it was not until we requested these reports that FDA contacted the manufacturer to obtain the missing information. Without these annual reports, FDA cannot adequately monitor manufacturers’ compliance with postmarket requirements. Our work has also identified challenges faced by FDA in terms of inspecting establishments that manufacture medical devices. 
In January 2008, we testified that FDA has not met a statutory requirement to inspect certain domestic manufacturing establishments every 2 years. FDA officials estimated that the agency has inspected these establishments every 3 years (for establishments manufacturing class III devices) or every 5 years (for establishments manufacturing class II devices). There is no comparable requirement to inspect foreign establishments, and agency officials estimate that these establishments have been inspected every 6 years (for class III devices) or 27 years (for class II devices). We also testified that FDA faces additional challenges in managing its inspections of foreign device establishments. We found that two databases that provide FDA with information about foreign device establishments and the products they manufacture for the U.S. market contain inaccuracies that create disparate estimates of establishments subject to FDA inspection. Although comparing information from these two databases could help FDA determine the number of foreign establishments marketing devices in the United States, these databases cannot exchange information and any comparisons must be done manually. Moreover, inspections of foreign device manufacturing establishments pose unique challenges to FDA, such as difficulties in finding translation services and in extending trips if the inspections uncover problems. FDA has taken some steps to address shortcomings related to inspections of foreign establishments, including changes to its registration database to improve the accuracy of the count of establishments and initiatives to address unique challenges related to inspections of foreign manufacturers, but we have not evaluated whether these changes will improve FDA’s inspection program. In addition, FDA’s accredited third-party inspection programs may be unable to quickly help FDA fulfill its responsibilities. 
In January 2007, we reported on the status of the Accredited Persons Inspection Program, citing, among other things, concerns regarding its implementation and potential incentives and disincentives that may influence manufacturers’ participation. We found that several factors may influence manufacturers’ interest in voluntarily requesting an inspection by an accredited organization. According to FDA and representatives of affected entities, there are potential incentives and disincentives to requesting an inspection, as well as reasons for deferring participation in the program. Potential incentives include the opportunity to reduce the number of inspections conducted to meet FDA and other countries’ requirements and to control the scheduling of the inspection. Potential disincentives include bearing the cost for the inspection and uncertainty about the potential consequences of making a commitment to having an inspection to assess compliance with FDA requirements in the near future. Some manufacturers might be deferring participation. For example, manufacturers that already contract with a specific accredited organization to conduct inspections to meet the requirements of other countries might defer participation until FDA has cleared that organization to conduct independent inspections. In both our January 2008 and May 2008 testimonies, we reported that few inspections of device manufacturing establishments had been conducted through FDA’s two accredited third-party inspection programs. As of June 12, 2009, FDA reported that a total of 21 inspections—8 inspections of domestic establishments and 13 inspections of foreign establishments—had been conducted under these programs. The small number of inspections completed by accredited third-party organizations raises questions about the practicality and effectiveness of these programs to quickly help FDA increase the number of establishments inspected. 
Taken together, these shortcomings in both premarket and postmarket activities raise serious concerns about FDA’s regulation of medical devices. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other members of the subcommittee may have at this time. For further information about this statement, please contact Marcia Crosse, at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Kim Yamane and Geraldine Redican-Bigott, Assistant Directors; Susannah Bloch; Matt Byer; Sean DeBlieck; Helen Desaulniers; and Julian Klazkin made key contributions to this report. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Medical Devices: FDA Should Take Steps to Ensure That High-Risk Device Types Are Approved through the Most Stringent Premarket Review Process. GAO-09-190. Washington, D.C.: January 15, 2009. Health-Care-Associated Infections in Hospitals: Number Associated with Medical Devices Unknown, but Experts Report Provider Practices as a Significant Factor. GAO-08-1091R. Washington, D.C.: September 26, 2008. Medical Devices: FDA Faces Challenges in Conducting Inspections of Foreign Manufacturing Establishments. GAO-08-780T. Washington, D.C.: May 14, 2008. Reprocessed Single-Use Medical Devices: FDA Oversight Has Increased, and Available Information Does Not Indicate That Use Presents an Elevated Health Risk. GAO-08-147. Washington, D.C.: January 31, 2008. Medical Devices: Challenges for FDA in Conducting Manufacturer Inspections. GAO-08-428T. Washington, D.C.: January 29, 2008. Medical Devices: FDA’s Approval of Four Temporomandibular Joint Implants. GAO-07-996. Washington, D.C.: September 17, 2007. 
Food and Drug Administration: Methodologies for Identifying and Allocating Costs of Reviewing Medical Device Applications Are Consistent with Federal Cost Accounting Standards, and Staffing Levels for Reviews Have Generally Increased in Recent Years. GAO-07-882R. Washington, D.C.: June 25, 2007. Medical Devices: Status of FDA’s Program for Inspections by Accredited Organizations. GAO-07-157. Washington, D.C.: January 5, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Americans depend on the Food and Drug Administration (FDA) to provide assurance that medical devices sold in the United States are safe and effective. FDA classifies medical device types into three classes, with class I including those with the lowest risk to patients (such as forceps) and class III including those with the greatest risk (such as pacemakers). FDA's responsibilities include premarket and postmarket oversight--spanning, for example, both premarket review of devices and postmarket surveillance (the collection and analysis of data on marketed devices). These responsibilities apply to all devices marketed in the United States, regardless of whether they are manufactured domestically or overseas. In 2009, GAO added FDA's oversight of medical products, including devices, to its list of high-risk areas warranting attention by Congress and the executive branch. GAO was asked to testify on recent work related to FDA's responsibilities for medical devices, including premarket review, postmarket surveillance, and inspection of manufacturing establishments. This statement is based on a recent GAO report, Medical Devices: FDA Should Take Steps to Ensure That High-Risk Device Types Are Approved through the Most Stringent Premarket Review Process ( GAO-09-190 , January 15, 2009) and on other GAO reports and testimonies related to FDA oversight. GAO found that FDA does not review all class III devices through its most stringent premarket review process. Unless exempt by regulation, new devices must clear FDA premarket review through either the 510(k) premarket notification process, which is used to determine if a new device is substantially equivalent to another legally marketed device, or through the more stringent premarket approval (PMA) process, which requires the manufacturer to supply evidence providing reasonable assurance that the device is safe and effective. 
In 1976, Congress envisioned that FDA would eventually approve all class III devices through the more stringent PMA process, but this process remains incomplete. GAO found that in fiscal years 2003 through 2007, FDA cleared 228 submissions representing 24 types of class III devices through the 510(k) process. GAO recommended in its January 2009 report that FDA expeditiously take steps to issue regulations requiring PMAs for or reclassifying class III device types currently allowed to enter the market via the 510(k) process. In response, in April 2009, FDA required manufacturers to submit information on the safety and effectiveness of these types of devices. However, FDA did not specify a time frame for how quickly it will reclassify them or require PMAs for those device types that remain in class III. FDA also faces challenges in postmarket surveillance of medical devices. In 2008, GAO reported that the number of adverse event reports associated with medical devices increased substantially from 2000 to 2006. Both GAO and FDA, however, have identified shortcomings in FDA's postmarket oversight. For example, in 2006 FDA reported that the agency's ability to understand the risks related to the use of medical devices is limited by the fact that the volume of submitted reports exceeded FDA's ability to consistently enter or review the reports in a routine manner. In 2008, FDA officials told GAO that while they have a number of strategies to prioritize their reviews of adverse event reports, they still cannot review all the reports they receive. Finally, GAO has found that FDA has not conducted required inspections of manufacturing establishments, another key FDA responsibility for medical devices marketed in the United States. In 2008, GAO reported that FDA has not met a statutory requirement to inspect certain domestic manufacturing establishments every 2 years. 
Instead, FDA officials estimated that the agency has inspected domestic establishments every 3 years (for class III devices) or 5 years (for class II devices). There is no comparable requirement to inspect foreign establishments, and FDA officials estimate that they have been inspected every 6 years (for class III devices) or 27 years (for class II devices). GAO reported that FDA has taken some steps to address shortcomings related to inspections of foreign establishments, but GAO has not evaluated whether these changes will improve FDA's inspection program. Taken together, these shortcomings in both premarket and postmarket activities raise serious concerns about FDA's regulation of medical devices.
|
DHS satisfied or partially satisfied each of the applicable legislative conditions specified in the act. In particular, the plan, including related program documentation and program officials’ statements, satisfied or provided for satisfying all key aspects of (1) compliance with the DHS enterprise architecture; (2) federal acquisition rules, requirements, guidelines, and systems acquisition management practices; and (3) review and approval by DHS and the Office of Management and Budget (OMB). Additionally, the plan, including program documentation and program officials’ statements, satisfied or provided for satisfying many, but not all, key aspects of OMB’s capital planning and investment review requirements. For example, DHS fulfilled the OMB requirement that it justify and describe its acquisition strategy. However, DHS does not have current life cycle costs or a current cost/benefit analysis for US-VISIT. DHS has implemented one, and either partially implemented or has initiated action to implement most of the remaining recommendations contained in our reports on the fiscal year 2002 and fiscal year 2003 expenditure plans. Each recommendation, along with its current status, is summarized below: Develop a system security plan and privacy impact assessment. The department has partially implemented this recommendation. As to the first part of this recommendation, the program office does not have a system security plan for US-VISIT. However, the US-VISIT Chief Information Officer (CIO) accredited Increment 1 based upon security certifications for each of Increment 1’s component systems and a review of each component’s security-related documentation. Second, although the program office has conducted a privacy impact assessment for Increment 1, the assessment does not satisfy all aspects of OMB guidance for conducting an assessment. 
For example, the assessment does not discuss alternatives to the methods of information collection, and the system documentation does not address privacy issues. Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements management, program management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with the Software Engineering Institute’s (SEI) guidance. The department plans to implement this recommendation. The US-VISIT program office has assigned responsibility for implementing the recommended controls. However, it has not yet developed explicit plans or time frames for defining and implementing them. Ensure that future expenditure plans are provided to the department’s House and Senate Appropriations Subcommittees in advance of US-VISIT funds being obligated. With respect to the fiscal year 2004 expenditure plan, DHS implemented this recommendation by providing the plan to the Senate and House subcommittees on January 27, 2004. According to the program director, as of February 2004 no funds had been obligated to US-VISIT. Ensure that future expenditure plans fully disclose US-VISIT capabilities, schedule, cost, and benefits. The department has partially implemented this recommendation. Specifically, the plan describes high-level capabilities, high-level schedule estimates, categories of expenditures by increment, and general benefits. However, the plan does not describe planned capabilities by increment and provides only general information on how money will be spent in each increment. Moreover, the plan does not identify all expected benefits in tangible, measurable, and meaningful terms, nor does it associate any benefits with increments. Establish and charter an executive body composed of senior-level representatives from DHS and each US-VISIT stakeholder organization to guide and direct the program. 
The department has implemented this recommendation by establishing a three-entity governance structure. The entities are (1) the Homeland Security Council, (2) the DHS Investment Review Board, and (3) the US-VISIT Federal Stakeholders Advisory Board. The purpose of the Homeland Security Council is to ensure the coordination of all homeland security- related activities among executive departments and agencies, and the Investment Review Board is expected to monitor US-VISIT’s achievement of cost, schedule, and performance goals. The advisory board is chartered to provide recommendations for overseeing program management and performance activities, including providing advice on the overarching US-VISIT vision; recommending changes to the vision and strategic direction; and providing a communications link for aligning strategic direction, priorities, and resources with stakeholder operations. Ensure that human capital and financial resources are provided to establish a fully functional and effective program office. The department is in the process of implementing this recommendation. DHS has determined that US-VISIT will require 115 government personnel and has filled 41 of these, including 12 key management positions. However, 74 positions have yet to be filled, and all filled positions are staffed by detailees from other organizational units within the department. Clarify the operational context in which US-VISIT is to operate. The department is in the process of implementing this recommendation. DHS released Version 1 of its enterprise architecture in October 2003, and it plans to issue Version 2 in September 2004. Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. The department plans to implement this recommendation. The fiscal year 2004 expenditure plan identifies high-level benefits to be delivered, but the benefits are not associated with specific increments. 
Additionally, the plan does not identify the total cost of Increment 2. Program officials expected to finalize a cost-benefit analysis this past March and a US-VISIT life cycle cost estimate this past April. Define program office positions, roles, and responsibilities. The department is in the process of implementing this recommendation. Program officials are currently working with the Office of Personnel Management to define program position descriptions, including roles and responsibilities. The program office has partially completed defining the competencies for all 12 key management areas. These competencies are to be used in defining the position descriptions. Develop and implement a human capital strategy for the program office. The department plans to implement this recommendation in conjunction with DHS’s ongoing workforce planning, but program officials stated that they have yet to develop a human capital strategy. According to these officials, DHS’s departmental workforce plan is scheduled for completion during fiscal year 2004. Develop a risk management plan and report all high-risk areas and their status to the program’s governing body on a regular basis. The department has partially implemented this recommendation. The program has completed a draft risk management plan and is currently defining risk management processes. The program is creating a risk management team to operate in lieu of formal processes until these are completed, and also maintains a risk-tracking database that is used to manage risks. Define performance standards for each program increment that are measurable and reflect the limitations imposed by relying on existing systems. The department is in the process of implementing this recommendation. The program office has defined limited performance standards, but not all standards are being defined in a way that reflects the performance limitations of existing systems. 
Our observations recognize accomplishments to date and address the need for rigorous and disciplined program management practices relating to system testing, independent verification and validation, and system change control. An overview of specific observations follows:

Increment 1 commitments were largely met. An initial operating capability for entry (including biographic and biometric data collection) was deployed to 115 air and 14 sea ports of entry on January 5, 2004, with additional capabilities deployed on February 11, 2004. Exit capability (including biometric capture) was deployed to one air and one sea port of entry.

Increment 1 testing was not managed effectively and was completed after the system became operational. The Increment 1 system acceptance test plan was developed largely during and after test execution. The department developed multiple plans, and only the final plan, which was done after testing was completed, included all required content, such as tests to be performed and test procedures. None of the test plan versions, including the final version, were concurred with by the system owner or approved by the IT project manager, as required. By not having a complete test plan before testing began, the US-VISIT program office unnecessarily increased the risk that the testing performed would not adequately address Increment 1 requirements and failed to have adequate assurance that the system was being fully tested. Further, by not fully testing Increment 1 before the system became operational, the program office assumed the risk of introducing errors into the deployed system. In fact, post-deployment problems surfaced with the Student and Exchange Visitor Information System (SEVIS) interface as a result of this approach, and manual work-arounds had to be implemented.

The independent verification and validation contractor's roles may be in conflict.
The US-VISIT program plans to use its contractor to review some of the processes and products that the contractor may be responsible for defining or executing. Depending on the products and processes in question, this approach potentially impedes the contractor's independence, and thus its effectiveness.

A program-level change control board has not been established. Changes related to Increment 1 were controlled primarily through daily coordination meetings (i.e., oral discussions) among representatives from Increment 1 component systems teams and program officials, and the various boards already in place for the component systems. Without a structured and disciplined approach to change control, program officials do not have adequate assurance that changes made to the component systems for non-US-VISIT purposes do not interfere with US-VISIT functionality.

The fiscal year 2004 expenditure plan does not disclose management reserve funding. Program officials, including the program director, stated that reserve funding is embedded within the expenditure plan's various areas of proposed spending. However, the plan does not specifically disclose these embedded reserve amounts. By not creating, earmarking, and disclosing a specific management reserve fund in the plan, DHS is limiting its flexibility in addressing unexpected problems that could arise in the program's various areas of proposed spending, and it is limiting the ability of the Congress to exercise effective oversight of this funding.

Plans for future US-VISIT increments do not call for additional staff or facilities at land ports of entry. However, these plans are based on various assumptions that potential policy changes could invalidate. These changes could significantly increase the number of foreign nationals who would require processing through US-VISIT.
Additionally, the Data Management Improvement Act Task Force’s 2003 Second Annual Report to Congress has noted that existing land port of entry facilities do not adequately support even the current entry and exit processes. Thus, future US-VISIT staffing and facility needs are uncertain. The fiscal year 2004 US-VISIT expenditure plan (with related program office documentation and representations) at least partially satisfies the legislative conditions imposed by the Congress. Further, steps are planned, under way, or completed to address most of our open recommendations. However, overall progress on all of our recommendations has been slow, and considerable work remains to fully address them. The majority of these recommendations are aimed at correcting fundamental limitations in the program office’s ability to manage US-VISIT in a way that reasonably ensures the delivery of mission value commensurate with costs and provides for the delivery of promised capabilities on time and within budget. Given this background, it is important for DHS to implement the recommendations quickly and completely through active planning and continuous monitoring and reporting. Until this occurs, the program will continue to be at high risk of not meeting expectations. To the US-VISIT program office’s credit, the first phase of the program has been deployed and is operating, and the commitments that DHS made regarding this initial operating capability were largely met. However, this was not accomplished in a manner that warrants repeating. In particular, the program office did not employ the kind of rigorous and disciplined management controls that are typically associated with successful programs, such as effective test management and configuration management practices. Moreover, the second phase of US-VISIT is already under way, and these controls are still not established. 
These controls, while significant for the initial phases of US-VISIT, are even more critical for the later phases, because the size and complexity of the program will only increase, and the later that problems are found, the harder and more costly they are to fix. Also important at this juncture in the program's life are the still open questions surrounding whether the initial phases of US-VISIT will return value to the nation commensurate with their costs. Such questions warrant answers sooner rather than later, because of the program's size, complexity, cost, and mission significance. It is imperative that DHS move swiftly to address the US-VISIT program management weaknesses that we previously identified, by implementing our remaining open recommendations. It is equally essential that the department quickly correct the additional weaknesses that we have identified. Doing less will only increase the risk associated with US-VISIT.

To better ensure that the US-VISIT program is worthy of investment and is managed effectively, we are reiterating our prior recommendations, and we further recommend that the Secretary of Homeland Security direct the Under Secretary for Border and Transportation Security to ensure that the US-VISIT program director takes the following actions:

Develop and approve complete test plans before testing begins. These plans, at a minimum, should (1) specify the test environment, including test equipment, software, material, and necessary training; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing.

Establish processes for ensuring the independence of the IV&V contractor.

Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes.
Identify and disclose to the Appropriations Committees management reserve funding embedded in the fiscal year 2004 expenditure plan.

Ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding.

Assess the full impact of a key future US-VISIT increment on land port of entry workforce levels and facilities, including performing appropriate modeling exercises.

To ensure that our recommendations addressing fundamental program management weaknesses are addressed quickly and completely, we further recommend that the Secretary direct the Under Secretary to have the program director develop a plan, including explicit tasks and milestones, for implementing all of our open recommendations, including those provided in this report. We further recommend that this plan provide for periodic reporting to the Secretary and Under Secretary on progress in implementing this plan. Lastly, we recommend that the Secretary report this progress, including reasons for delays, in all future US-VISIT expenditure plans.

In written comments on a draft of this report signed by the US-VISIT Director (reprinted in app. II, along with our responses), DHS agreed with our recommendations and most of our observations. It also stated that it appreciated the guidance that the report provided and described actions that it is taking or plans to take in response to our recommendations. However, DHS stated that it did not fully agree with all of our findings, specifically offering comments on our characterization of the status of one open recommendation and two observations. First, it did not agree with our position that it had not developed a security plan and completed a privacy impact assessment. According to DHS, it has completed both. We acknowledge DHS's activity on both of these issues, but disagree that completion of an adequate security plan and privacy impact assessment has occurred.
As we state in the report, the department’s security plan for US-VISIT, titled Security and Privacy: Requirements & Guidelines Version 1.0, is a draft document, and it does not include information consistent with relevant guidance for a security plan, such as a risk assessment methodology and specific controls for meeting security requirements. Moreover, much of the document discusses guidelines for developing a security plan, rather than specific contents of a plan. Also, as we state in the report, the Privacy Impact Assessment was published but is not complete because it does not satisfy important parts of OMB guidance governing the content of these assessments, such as discussing alternatives to the designed methods of information collection and handling. Second, DHS stated that it did not fully agree with our observation that the Increment 1 system test plan was developed largely during and after testing, citing several steps that it took as part of Increment 1 requirements definition, test preparation, and test execution. However, none of the steps cited address our observations that DHS did not have a system acceptance test plan developed, approved, and available in time to use as the basis for conducting system acceptance testing and that only the version of the test plan modified on January 16, 2004 (after testing was completed) contained all of the required test plan content. Moreover, DHS’s comments acknowledge that the four versions of its Increment 1 test plan were developed during the course of test execution, and that the test schedule did not permit sufficient time for all stakeholders to review, and thus approve, the plans. Third, DHS commented on the roles and responsibilities of its various support contractors, and stated that we cited the wrong operative documentation governing the role of its independent verification and validation contractor. 
While we do not question the information provided in DHS's comments concerning contractor roles, we would add that its comments omitted certain roles and responsibilities contained in the statement of work for one of its contractors. This omitted information is important because it is the basis for our observation that the program office planned to task the same contractor that was responsible for program management activities with performing independent verification and validation activities. Under these circumstances, the contractor could not be independent. In addition, we disagree with DHS's comment that we cited the wrong operative documentation, and note that the document DHS said we should have used relates to a different support contractor than the one tasked with both performing program activities and performing independent verification and validation activities. The department also provided additional technical comments, which we have incorporated as appropriate into the report.

We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of State and the Director of OMB. Copies of this report will also be available at no charge on our Web site at www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at hiter@gao.gov. Another contact and key contributors to this report are listed in appendix III.

facilitate legitimate trade and travel, contribute to the integrity of the U.S. immigration system,1 and adhere to U.S. privacy laws and policies. US-VISIT capability is planned to be implemented in four increments. Increment 1 began operating on January 5, 2004, at major air and sea ports of entry (POEs). This goal has been added since the last expenditure plan.
established by the Office of Management and Budget (OMB), including OMB Circular A-11, part 3.
Complies with DHS's enterprise architecture.
Complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government.
Is reviewed and approved by DHS and OMB.
Is reviewed by GAO.
OMB Circular A-11 establishes policy for planning, budgeting, acquisition, and management of federal capital assets.

1. determine whether the US-VISIT fiscal year 2004 expenditure plan satisfies the legislative conditions,
2. determine the status of our US-VISIT open recommendations, and
3. provide any other observations about the expenditure plan and DHS's management of US-VISIT.

We conducted our work at DHS's headquarters in Washington, D.C., and at its Atlanta Field Operations Office (Atlanta's William B. Hartsfield International Airport) from October 2003 through February 2004 in accordance with generally accepted government auditing standards. Details of our scope and methodology are given in attachment 1.

Legislative conditions
1. Meets the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7.
2. Complies with the DHS enterprise architecture.
3. Complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government.
4. Is reviewed and approved by DHS and OMB.
5. Is reviewed by GAO.

GAO open recommendations
1. Develop a system security plan and privacy impact assessment.
2. Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with SEI guidance.
3.
Ensure that future expenditure plans are provided to DHS's House and Senate Appropriations Subcommittees in advance of US-VISIT funds being obligated.
4. Ensure that future expenditure plans fully disclose US-VISIT system capabilities, schedule, cost, and benefits to be delivered. Actions have been taken to fully implement the recommendation.

GAO open recommendations
5. Establish and charter an executive body composed of senior-level representatives from DHS and each stakeholder organization to guide and direct the US-VISIT program.
6. Ensure that human capital and financial resources are provided to establish a fully functional and effective US-VISIT program office.
7. Clarify the operational context in which US-VISIT is to operate.
8. Determine whether proposed US-VISIT increments will produce mission value commensurate with costs and risks.
9. Define US-VISIT program office positions, roles, and responsibilities.
10. Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities.
11. Develop a risk management plan and report all high risks and their status to the executive body on a regular basis.
12. Define performance standards for each US-VISIT increment that are measurable and reflect the limitations imposed by relying on existing systems.

Commitments were largely met; the system is deployed and operating.
Testing was not managed effectively; if continued, the current approach to testing would increase risks. The system acceptance test (SAT) plan was developed largely during and after test execution. The SAT plan available during testing was not complete. SAT was not completed before the system became operational.
Key program issues exist that increase risks if not resolved. Independent verification and validation (IV&V) contractor's roles may be conflicting. Program-level change control board has not been established.
Expenditure plan does not disclose management reserve funding. Land POE workforce and facility needs are uncertain.

To assist DHS in managing US-VISIT, we are making eight recommendations to the Secretary of DHS. In their comments on a draft of this briefing, US-VISIT program officials stated that they generally agreed with the briefing and that it was fair and balanced.

collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States;
identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials;
detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and
facilitating information sharing and coordination within the border management community.

Classes of travelers that are not subject to US-VISIT are foreign nationals admitted on A-1, A-2, C-3 (except for attendants, servants, or personal employees of accredited officials), G-1, G-2, G-3, G-4, NATO-1, NATO-2, NATO-3, NATO-4, NATO-5, or NATO-6 visas, unless the Secretary of State and the Secretary of Homeland Security jointly determine that a class of such aliens should be subject to the rule; children under the age of 14; and persons over the age of 79.

The Miami Royal Caribbean seaport and the Baltimore/Washington International Airport.

included the development of policies, procedures, and associated training for implementing US-VISIT at the air and sea POEs;
included outreach efforts, such as brochures, demonstration videos, and signage at air and sea POEs;
did not include additional inspector staff at air and sea POEs; and
did not include the acquisition of additional entry facilities.

For exit, DHS is in the process of assessing facilities space and installing conduit, electrical supply, and signage.
Increment 2 is divided into two increments: 2A and 2B. Increment 2A is to include at all POEs the capability to process machine-readable visas and other travel and entry documents that use biometric identifiers. This increment is to be implemented by October 26, 2004. According to the US-VISIT Deputy Director, each of the 745 entry and exit traffic lanes at these 50 land POEs is to have the infrastructure, such as underground conduit, necessary to install the RF technology.

Secondary inspection is used for more detailed inspections that may include checking more databases, conducting more intensive interviews of the individual, or both. RF technology would require proximity cards and card readers. RF readers read the information contained on the card when the card is passed near the reader, and could be used to verify the identity of the card holder.

of manually completed I-94 forms1 from exiting travelers. Increment 3 is to expand Increment 2B system capability to the remaining 115 land POEs. It is to be implemented by December 31, 2005.

I-94 forms have been used for years to track foreign nationals' arrivals and departures. Each form is divided into two parts: an entry portion and an exit portion. Each form contains a unique number printed on both portions of the form for the purposes of subsequent recording and matching of the arrival and departure records of nonimmigrants.

An indefinite-delivery/indefinite-quantity contract provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period of time. The government schedules deliveries or performance by placing orders with the contractor.

IBIS lookout sources include: DHS's Customs and Border Protection and Immigration and Customs Enforcement; the Federal Bureau of Investigation; legacy Immigration and Naturalization Service and Customs information; the U.S. Secret Service; the U.S.
Coast Guard; the Internal Revenue Service; the Drug Enforcement Administration; the Bureau of Alcohol, Tobacco & Firearms; the U.S. Marshals Service; the U.S. Office of Foreign Asset Control; the National Guard; the Treasury Inspector General; the U.S. Department of Agriculture; the Department of Defense Inspector General; the Royal Canadian Mounted Police; the U.S. State Department; Interpol; the Food and Drug Administration; the Financial Crimes Enforcement Network; the Bureau of Engraving and Printing; and the Department of Justice Office of Special Investigations. This footnote has been modified to include additional information obtained since the briefing's delivery to the Committees.

the Automated Biometric Identification System (IDENT), a system that stores biometric data about foreign visitors;1
Student and Exchange Visitor Information System (SEVIS), a system that contains information on foreign students;
Computer Linked Application Information Management System (CLAIMS 3), a system that contains information on foreign nationals who request benefits, such as change of status or extension of stay; and
Consular Consolidated Database (CCD), a system that includes information on whether a visa applicant has previously applied for a visa or currently has a valid U.S. visa.

Includes data such as: Federal Bureau of Investigation information on all known and suspected terrorists, selected wanted persons (foreign-born, unknown place of birth, previously arrested by DHS), and previous criminal histories for high-risk countries; DHS Immigration and Customs Enforcement information on deported felons and sexual registrants; DHS information on previous criminal histories and previous IDENT enrollments. Information from the bureau includes fingerprints from the Integrated Automated Fingerprint Identification System. This footnote has been modified to include additional information obtained since the briefing's delivery to the Committees.

A CD-ROM is a digital storage device that is capable of being read, but not overwritten.
CLAIMS 3's interface with ADIS was deployed and implemented on February 11, 2004.

U.S. General Accounting Office, Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed, GAO-03-1083 (Washington, D.C.: Sept. 19, 2003).

Operational context is unsettled. Near-term facilities solutions pose challenges. Mission value of first increment is currently unknown.

GAO's Review of Fiscal Year 2002 Expenditure Plan
In our report on the fiscal year 2002 expenditure plan,1 we reported that INS intended to acquire and deploy a system with functional and performance capabilities consistent with the general scope of capabilities under various laws; the plan did not provide sufficient information to allow Congress to oversee the program; INS had not developed a security plan and privacy impact assessment; and INS had not implemented acquisition management controls in the areas of acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, and evaluation consistent with SEI guidance. We made recommendations to address these areas.

U.S. General Accounting Office, Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning, GAO-03-563 (Washington, D.C.: June 9, 2003).

Fiscal Year 2004 Expenditure Plan Summary (see next slides for descriptions): Available appropriations (millions)

The US-VISIT expenditure plan satisfies or partially satisfies each of the legislative conditions.

Condition 1. The plan, including related program documentation and program officials' statements, partially satisfies the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7, which establishes policy for planning, budgeting, acquisition, and management of federal capital assets. The table that follows provides examples of the results of our analysis.
Examples of A-11 conditions and the results of our analysis:

Provide justification and describe acquisition strategy. US-VISIT has completed an Acquisition Plan dated November 28, 2003. The plan provides a high-level justification and description of the acquisition strategy for the system.

Summarize life cycle costs and cost/benefit analysis, including the return on investment. DHS does not have current life cycle costs nor a current cost/benefit analysis for US-VISIT. According to program officials, US-VISIT has a draft life cycle cost estimate and cost/benefit analysis. Both are expected to be completed in March 2004. A security plan for US-VISIT has not been developed. Instead, US-VISIT was certified and accredited based upon the updated security certification for each of Increment 1's component systems. The US-VISIT program published a privacy impact assessment on January 5, 2004.

Provide risk inventory and assessment. US-VISIT has developed a draft risk management plan and a process to implement and manage risks. US-VISIT also maintains a risk and issues tracking database.

Condition 2. The plan, including related program documentation and program officials' statements, satisfies this condition by providing for compliance with DHS's enterprise architecture. DHS released version 1 of the architecture in October 2003.1 It plans to issue version 2 in September 2004. According to the DHS Chief Information Officer (CIO), DHS is developing a process to align its systems modernization efforts, such as US-VISIT, to its enterprise architecture. Alignment of US-VISIT to the enterprise architecture has not yet been addressed, but DHS CIO and US-VISIT officials stated that they plan to do so.

Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy.

Condition 3.
The plan, including related program documentation and program officials' statements, satisfies the condition that it comply with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. These criteria provide a management framework based on the use of rigorous and disciplined processes for planning, managing, and controlling the acquisition of IT resources, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, and evaluation. The table that follows provides examples of the results of our analysis.

The US-VISIT program has developed and documented an acquisition strategy and plan for a prime contractor to perform activities for modernizing US-VISIT business processes and systems, calling for, among other things, these activities to meet all relevant legislative requirements. Activities identified include U.S. border management-related work and support; other DHS-related strategic planning, and any associated systems development and integration, business process reengineering, organizational change management, information technology support, and program management work and support; and other business, technical, and management capabilities to meet the legislative mandates, operational needs, and government business requirements. The strategy defines a set of acquisition objectives, identifies key roles and responsibilities, sets general evaluation criteria, and establishes a high-level acquisition schedule. The plan describes initial tasking, identifies existing systems with which to interoperate/interface, defines a set of high-level risks, and lists applicable legislation.

The RFP for the prime contractor acquisition was issued on November 28, 2003. A selecting official has been assigned responsibility, and a team, including contract specialists, has been formed and has received training related to this acquisition.
A set of high-level evaluation factors has been defined for selecting the prime integrator, and the team plans to define more detailed criteria.

Condition 4 met. The plan, including related program documentation and program officials' statements, satisfies the requirement that it be reviewed and approved by DHS and OMB. DHS and OMB reviewed and approved the US-VISIT fiscal year 2004 expenditure plan. Specifically, the DHS IRB1 approved the plan on December 17, 2003, and OMB approved the plan on January 27, 2004. The IRB is the executive review board that provides acquisition oversight of DHS level 1 investments and conducts portfolio management. Level 1 investment criteria are contract costs exceeding $50 million; importance to DHS strategic and performance plans; high development, operating, or maintenance costs; high risk; high return; significant resource administration; and life cycle costs exceeding $200 million. According to the DHS CIO, US-VISIT is a level 1 investment.

Condition 5 met. The plan satisfies the requirement that it be reviewed by GAO. Our review was completed on March 2, 2004.

Open Recommendation 1: Develop a system security plan and privacy impact assessment.

Security Plan. DHS does not have a security plan for US-VISIT. Although program officials provided us with a draft document entitled Security & Privacy: Requirements & Guidelines Version 1.0,1 this document does not include information consistent with relevant guidance for a security plan.
The OMB and the National Institute of Standards and Technology have issued security planning guidance.2 In general, this guidance requires the development of system security plans that (1) provide an overview of the system security requirements, (2) include a description of the controls in place or planned for meeting the security requirements, (3) delineate roles and responsibilities of all individuals who access the system, (4) discuss a risk assessment methodology, and (5) address security awareness and training.

Security & Privacy: Requirements & Guidelines Version 1.0 Working Draft, US-VISIT Program (May 15, 2003).

Office of Management and Budget Circular Number A-130, Revised (Transmittal Memorandum No. 4), Appendix III, "Security of Federal Automated Information Resources" (Nov. 28, 2000) and National Institute of Standards and Technology, Guide for Developing Security Plans for Information Technology Systems, NIST Special Publication 800-18 (December 1998).

The draft document identifies security requirements for the US-VISIT program and addresses the need for training and awareness. However, the document does not include (1) specific controls for meeting the security requirements, (2) a risk assessment methodology, and (3) roles and responsibilities of individuals with system access. Moreover, with the exception of the US-VISIT security requirements, much of the document discusses guidelines for developing a security plan, rather than the specific contents of a US-VISIT security plan.

Despite the absence of a security plan, the US-VISIT CIO accredited Increment 1 based upon updated security certifications1 for each of Increment 1's component systems (e.g., ADIS, IDENT, and IBIS) and a review of the documentation, including component security plans, associated with these updates.
According to the security evaluation report (SER), the risks associated with each component system were evaluated, component system vulnerabilities were identified, and component system certifications were granted. Certification is the evaluation of the extent to which a system meets a set of security requirements. Accreditation is the authorization and approval granted to a system to process sensitive data in an operational environment; this is made on the basis of a compliance certification by designated technical personnel of the extent to which design and implementation of the system meet defined technical requirements for achieving data security. Based on the SER, the US-VISIT security officer certified Increment 1, and Increment 1 was accredited and granted an interim authority to operate for 6 months. This authority will expire on June 18, 2004. Additionally, this authority would not extend to a modified version of Increment 1. For example, the SER states that US-VISIT exit functionality was not part of the Increment 1 certification and accreditation, and that it was to be certified and accredited separately from Increment 1. The SER also notes that the Increment 1 certification will require updating upon the completion of security documentation for the exit functionality. Privacy Impact Assessment. The US-VISIT program has conducted a privacy impact assessment for Increment 1. According to OMB guidance,1 the depth and content of such an assessment should be appropriate for the nature of the information to be collected and the size and complexity of the system involved. OMB Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002, OMB M-03-22 (Sept. 26, 2003). 
The assessment should also, among other things, (1) identify appropriate measures for mitigating identified risks, (2) discuss the rationale for the final design or business process choice, (3) discuss alternatives to the designed information collection and handling, and (4) address whether privacy is provided for in system development documentation. The OMB guidance also notes that an assessment may need to be updated before deploying a system in order to, among other things, address choices made in designing the system or in information collection and handling. The Increment 1 assessment satisfies some, but not all, of the above four OMB guidance areas. Specifically, it identifies Increment 1 privacy risks, discusses mitigation strategies for each risk, and briefly discusses the rationale for design choices. However, the assessment does not discuss alternatives to the designed methods of information collection and handling. Additionally, the Increment 1 systems documentation does not address privacy issues. According to the Program Director, the assessment will be updated for future increments. Open Recommendation 2: Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with SEI guidance. According to the US-VISIT Program Director, the program office has established a goal of achieving SEI Software Acquisition Capability Maturity Model (SA-CMM®) level 2, and the office’s Acquisition and Program Management Lead has responsibility for achieving this status. To facilitate attaining this goal, the Acquisition and Program Management Lead’s organization includes functions consistent with the management controls defined by the SA-CMM®, such as acquisition planning and requirements development and management. 
According to the Acquisition and Program Management Lead, an approach for achieving level 2 will be defined as part of a strategy that has yet to be developed. However, the lead could not provide a date for when the strategy would be developed. The expenditure plan indicates that the US-VISIT program office will solicit SEI’s participation in achieving level 2. Open Recommendation 3: Ensure that future expenditure plans are provided to the Department’s House and Senate Appropriations Subcommittees on Homeland Security in advance of US-VISIT funds being obligated. The Congress appropriated $330 million in fiscal year 2004 funds for the US-VISIT program.1 On January 27, 2004, DHS provided its fiscal year 2004 expenditure plan to the Senate and House Appropriations Subcommittees on Homeland Security. On January 26, 2004, DHS submitted to the Senate and House Appropriations Subcommittees on Homeland Security a request for the release of $25 million from the fiscal year 2004 appropriations. Department of Homeland Security Appropriations Act, 2004, Pub. L. 108-90 (Oct. 1, 2003). Open Recommendation 4: Ensure that future expenditure plans fully disclose US- VISIT system capabilities, schedule, cost, and benefits to be delivered. The expenditure plan identifies high-level capabilities, such as record arrival of foreign nationals, identify foreign nationals who have stayed beyond the authorized period, and use biometrics to verify identity of foreign nationals. The plan does not associate these capabilities with specific increments. The plan identifies a high-level schedule for implementing the system. For example, Increment 2A is to be implemented by October 26, 2004; Increment 2B by December 31, 2004; and Increment 3 by December 31, 2005. The plan identifies total fiscal year 2004 costs by each increment. For example, DHS plans to obligate $73 million in fiscal year 2004 funds for Increment 2A. 
However, the plan does not break out how the $73 million will be used to support Increment 2A, beyond indicating that the funds will be used to read biometric information in travel documents, including fingerprints and photos, at all ports of entry. Also, the plan does not identify any nongovernmental costs. The plan identifies seven general benefits and planned performance metrics for measuring three of the seven benefits. The plan does not associate the benefits with increments. The following table shows US-VISIT benefits and whether associated metrics have been defined. Open Recommendation 5: Establish and charter an executive body composed of senior-level representatives from DHS and each stakeholder organization to guide and direct the US-VISIT program. DHS has established a three-entity governance structure. The entities are (1) the Homeland Security Council (HSC), (2) the DHS Investment Review Board (IRB), and (3) the US-VISIT Federal Stakeholders Advisory Board. The HSC is tasked with ensuring the coordination of all homeland security-related activities among executive departments and agencies and is composed of senior-level executives from across the federal government. According to the expenditure plan, the HSC helps to set policy boundaries for the US-VISIT program. According to DHS’s investment management guidance, the IRB is the executive review board that provides acquisition oversight of DHS level 1 investments and conducts portfolio management. The primary function of the IRB is to review level 1 investments for formal entry into the budget process and at key decision points. The plan states that the IRB is to monitor the US-VISIT program’s achievement of cost, schedule, and performance goals. DHS Management Directive 1400, Investment Review Process (undated).
According to its charter, the Advisory Board provides recommendations for overseeing US-VISIT management and performance activities, including providing advice on the overarching US-VISIT vision; recommending the overall US-VISIT strategy and its responsiveness to all operational missions, both within DHS and with its participating government agencies; recommending changes to the US-VISIT vision and strategic direction; providing a communication link for aligning strategic direction, priorities, and resources with stakeholder operations; reviewing and assessing US-VISIT programwide institutional processes to ensure that business, fiscal, and technical priorities are integrated and carried out in accordance with established priorities; and reviewing and recommending new US-VISIT program initiatives, including the scope, funding, and programmatic resources required. Open Recommendation 6: Ensure that human capital and financial resources are provided to establish a fully functional and effective program office. DHS established the US-VISIT program office in July 2003 and determined the office’s staffing needs to be 115 government and 117 contractor personnel. As of February 2004, DHS had filled all the program office’s 12 key management and 29 other positions, leaving 74 positions to be filled. All filled positions are currently staffed by detailees from other organizational units within DHS, such as Immigration and Customs Enforcement. The graphic on the next page shows the US-VISIT program office organization structure and functions, the number of positions needed by each office, and the number of positions filled by detailees.
In addition to the 115 government staff anticipated, the program anticipated 117 contractor support staff. As of February 2004, program officials told us they had filled 97.5 of these 117. Open Recommendation 7: Clarify the operational context in which US-VISIT is to operate. DHS is in the process of defining the operational context in which US-VISIT is to operate. In October 2003, DHS released version 1 of its enterprise architecture, and it plans to issue version 2 in September 2004.1 We are currently reviewing DHS’s latest version of its architecture at the request of the House Committee on Government Reform’s Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census. Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy. Open Recommendation 8: Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. The expenditure plan identifies high-level benefits to be provided by the US-VISIT program, such as the ability to prevent the entry of high-threat or inadmissible individuals through improved and/or advanced access to data before the foreign national’s arrival. However, the plan does not associate these benefits with specific increments. Further, the plan does not identify the total estimated cost of Increment 2. Instead, the plan identifies only fiscal year 2004 funds to be obligated for Increments 2A and 2B, which are $73 million and $81 million, respectively. In addition, the plan does not include any nongovernmental costs associated with US- VISIT. The RFP indicates that the total solution for Increment 2 has not been determined and will not be finalized until the prime contractor is on board. Until that time, DHS is not in a position to determine the total cost of Increments 2A and 2B, and thus whether they will produce mission value commensurate with costs. 
According to program officials, they have developed a life cycle cost estimate and cost-benefit analysis that are currently being reviewed and are to be completed in March 2004. According to these officials, the cost-benefit analysis will be for Increment 2B. Open Recommendation 9: Define US-VISIT program office positions, roles, and responsibilities. The US-VISIT program is working with the Office of Personnel Management (OPM) through an interagency agreement to, among other things, assist the program office in defining its position descriptions (including position roles and responsibilities), issuing vacancy announcements, and recruiting persons to fill the positions. The US-VISIT program is also working with OPM to define the competencies that are to be used in defining the position descriptions. As of February 2004, the program office reported that it has partially completed defining the competencies for its 12 offices and has partially completed position descriptions for 4 of the 12 offices. The following slide shows the competencies defined and position descriptions written. Open Recommendation 10: Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. The US-VISIT program office has not yet defined a human capital strategy, although program officials stated that they plan to develop one in concert with the department’s ongoing workforce planning. As part of its effort, DHS is drafting a departmental workforce plan that, according to agency officials, will likely be completed during fiscal year 2004. According to the Program Director, the Director of Administration and Management is responsible for developing the program’s strategic human capital plan.
However, descriptions of the Administration and Management office functions, including those provided by the program office and those in the expenditure plan, do not include strategic human capital planning. Open Recommendation 11: Develop a risk management plan and report all high risks and their status to the executive body on a regular basis. The program office has developed a draft risk management plan, dated June 2003. The draft defines plans to develop, implement, and institutionalize a risk management program. This risk management program’s primary function is to identify and mitigate US-VISIT risks. The expenditure plan states that the program office is currently defining risk management processes. In the interim, the program office is creating a risk management team to assist in proactively identifying and managing risks while formal processes and procedures are being developed. The expenditure plan also states that the US-VISIT program office currently maintains a risk and issue tracking database and conducts weekly risk and schedule meetings. Within the risk database, each risk is assigned a risk impact rating and an owner. The database also gives the date when the risk is considered closed. In addition, the US-VISIT program office has staff dedicated to tracking these items and meeting weekly with the various integrated project teams to mitigate potential risks. Open Recommendation 12: Define performance standards for each US-VISIT increment that are measurable and reflect the limitations imposed by relying on existing systems. US-VISIT has defined limited, measurable performance standards. For example: System availability—the system shall be available 99.5 percent of the time.
Data currency—(1) US-VISIT Increment 1 DocKey data shall be made available to any interfacing US-VISIT system within 24 hours of the event (enrollment, biometric encounter, departure, inspector-modified data); (2) IBIS/APIS arrival manifests, departure manifests, and inspector-modified data shall be made available to ADIS within 24 hours of each stated event; and (3) IDENT shall reconcile a biometric encounter within 24 hours of the event. System availability is defined as the time the system is operating satisfactorily, expressed as a percentage of time that the system is required to be operational. DocKey includes such information as biographical data and the fingerprint identification number, and is used to track a foreign national’s identity as the information is shared between systems. However, not all performance standards are being defined in a way that reflects the performance limitations of existing systems. In particular, US-VISIT documentation states that the system performance standard for Increment 1 is 99.5 percent. However, Increment 1 availability is the product of its component system availabilities. Given that US-VISIT system documentation also states that the system availability performance standard for IDENT and ADIS is 99.5 percent, Increment 1 system availability would have to be something less than 99.5 percent (0.995 × 0.995 × the availabilities of the other component systems, which is at most about 99.0 percent even before the other components are factored in). Observation 1: Increment 1 commitments were largely met; the system is deployed and operating.
According to DHS, Increment 1 was to deliver an initial operating capability to all air and sea POEs by December 31, 2003, that included recording the arrival and departure of foreign nationals using passenger and crew manifest data, verifying foreign nationals’ identity upon entry into the United States through the use of biometrics and checks against watchlists at air POEs and 13 of 42 sea POEs, interfacing with seven existing systems that contain data about foreign nationals, identifying foreign nationals who have overstayed their visits or changed their visitor status, and potentially including an exit capability beyond the capture of the manifest data. Generally, an initial operating capability was delivered to air and sea POEs on January 5, 2004. In particular, Increment 1 entry capability (including biographic and biometric data collection) was deployed to 115 airports and 14 seaports on January 5, 2004. Further, while the expenditure plan states that an Increment 1 exit capability was deployed to 80 air and 14 sea POEs on January 5, 2004, exit capability (including biometric capture) was deployed to only one air POE (Baltimore/Washington International Airport) and one sea POE (Miami Royal Caribbean seaport). DHS’s specific satisfaction of each commitment is described on the following slides. INS Data Management Improvement Act of 2000, Pub. L. 106-215 (June 15, 2000). Recording the arrival and departure of foreign nationals using passenger and crew manifest data: Satisfied: Carriers submit electronic arrival and departure manifest data to IBIS/APIS. Verifying foreign nationals’ identity upon entry into the United States through the use of biometrics and checks against watchlists at air POEs and 13 sea POEs: Satisfied: After carriers submit electronic manifest data to IBIS/APIS, IBIS/APIS is queried to determine whether there is any biographic lookout or visa information for the foreign national. 
Once the foreign national arrives at a primary POE inspection booth, the inspector, using a document reader, scans the machine-readable travel documents. IBIS/APIS returns any existing records on the foreign national, including manifest data matches and biographic lookout hits. When a match is found in the manifest data, the foreign national’s name is highlighted and outlined on the manifest data portion of the screen. Biographic information, such as name and date of birth, is displayed on the bottom half of the screen, as well as the picture from the scanned visa. IBIS also returns information about whether there are, within IDENT, existing fingerprints for the foreign national. The inspector switches to the IDENT screen and scans the foreign national’s fingerprints (left and right index fingers) and photograph. The system accepts the best fingerprints available within the 5-second scanning period. This information is forwarded to the IDENT database, where it is checked against stored fingerprints in the IDENT lookout database. If no prints are currently in the IDENT database, the foreign national is enrolled in US-VISIT (i.e., biographic and biometric data are entered). If the foreign national’s fingerprints are already in IDENT, the system performs a 1:1 match (a comparison of the fingerprint taken during the primary inspection to the one on file) to confirm that the person submitting the fingerprints is the person on file. If the system finds a mismatch of fingerprints or a watchlist hit, the foreign national is sent to secondary inspection for further screening or processing. Interfacing with seven existing systems that contain data about foreign nationals: Largely satisfied: As of January 5, 2004, US-VISIT interfaced with six of the seven existing systems. The CLAIMS 3 to ADIS interface was not operational on January 5, 2004, but program officials told us that it was subsequently placed into production on February 11, 2004.
Identifying foreign nationals who have overstayed their visits or changed their visitor status: Largely satisfied: ADIS matches entry and exit manifest data provided by air and sea carriers. The exit process includes the carriers’ submission of electronic manifest data to IBIS/APIS. This biographic information is passed to ADIS, where it is matched against entry information. US-VISIT was to rely on interfaces with CLAIMS 3 and SEVIS to obtain information regarding changes in visitor status. However, as of January 5, 2004, the CLAIMS 3 interface was not operational; it was subsequently placed into production on February 11, 2004. Further, although the SEVIS to ADIS interface was implemented on January 5, 2004, after January 5, problems surfaced, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Potentially include an exit capability beyond the capture of the manifest data: Not satisfied: Biometric exit capability was not deployed to the 80 air and 14 sea POEs that received Increment 1 capability. Instead, biometric exit capability was provided to two POEs for pilot testing. Under this testing, foreign nationals use a self-serve kiosk where they are prompted to scan their travel documentation and provide their fingerprints (right and left index fingers). On a daily basis, the information collected on departed passengers is downloaded to a CD-ROM. The CD is then express mailed to a DHS contractor facility to be uploaded into IDENT, where a 1:1 match is performed (i.e., the fingerprint captured during entry is compared with the one captured at exit). According to program officials, biometric capture for exit was deployed at two POEs on January 5, 2004, as a pilot. According to these officials, this exit capability was deployed to only two POEs because US-VISIT decided to evaluate other exit alternatives.
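The 1:1 verification decisions described above for entry and exit can be sketched in short Python form. This is purely an illustrative sketch: the similarity measure, threshold, and routing labels below are assumptions for exposition, not IDENT's actual matching logic.

```python
from typing import Optional

def similarity(print_a: bytes, print_b: bytes) -> float:
    """Toy stand-in for a fingerprint matcher: fraction of matching bytes."""
    if not print_a or not print_b:
        return 0.0
    matches = sum(a == b for a, b in zip(print_a, print_b))
    return matches / max(len(print_a), len(print_b))

def primary_inspection(captured: bytes,
                       enrolled: Optional[bytes],
                       watchlist_hit: bool,
                       threshold: float = 0.9) -> str:
    """Return the routing decision for a foreign national at primary inspection."""
    if watchlist_hit:
        return "secondary inspection"   # lookout hit
    if enrolled is None:
        return "enroll in US-VISIT"     # no prints on file yet
    if similarity(captured, enrolled) >= threshold:
        return "admit"                  # 1:1 match confirms identity
    return "secondary inspection"       # fingerprint mismatch

print(primary_inspection(b"ridge-pattern-123", None, False))                   # -> enroll in US-VISIT
print(primary_inspection(b"ridge-pattern-123", b"ridge-pattern-123", False))   # -> admit
print(primary_inspection(b"ridge-pattern-123", b"other-pattern-999", False))   # -> secondary inspection
```

The same 1:1 comparison applies at exit, except that the "enrolled" print is the one captured at entry and the comparison is run after the exit data are uploaded into IDENT.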
Only 80 of the 115 air POEs are departure airports for international flights. A CD-ROM is a digital storage device that is capable of being read, but not overwritten. Observation 2: The system acceptance test (SAT) plan was developed largely during and after test execution. The purpose of SAT is to identify and correct system defects (i.e., unmet system functional, performance, and interface requirements) and thereby obtain reasonable assurance that the system performs as specified before it is deployed and operationally used. To be effective, testing activities should be planned and implemented in a structured and disciplined fashion. Among other things, this includes developing effective test plans to guide the testing activities. According to relevant systems development guidance,1 SAT plans are to be developed before test execution. However, this was not the case for Increment 1. Specifically, the US-VISIT program provided us with four versions of a test plan, each containing more information than the previous version. While the initial version was dated September 18, 2003, which is before testing began, the three subsequent versions (all dated November 17, 2003) were modified on November 25, 2003, December 18, 2003, and January 16, 2004, respectively. According to US-VISIT officials, in the absence of a DHS Systems Development Life Cycle (SDLC), they followed the former Immigration and Naturalization Service’s SDLC, version 6.0, to manage US-VISIT development. According to the program office, the version modified on January 16, 2004, is the final plan. According to the SAT Test Analysis Report (dated January 23, 2004), testing began on September 29, 2003, and was completed on January 7, 2004, meaning that the plans governing the execution of testing were not sufficiently developed before test execution.1 The following timeline compares test plan development and execution. 
According to an IT management program official, although the Test Analysis Report was marked “Final,” it is still being reviewed. According to US-VISIT officials, SAT test plans were not completed before testing began because of the compressed schedule for testing. According to these officials, a draft test plan was developed and periodically updated to reflect documentation provided by the component contractors. In the absence of a complete test plan before testing began, the US-VISIT program office unnecessarily increased the risk that the testing performed would not adequately address Increment 1 requirements, which increased the chances of either having to redo already executed tests or deploying a system that would not perform as intended. In fact, postdeployment problems surfaced with the SEVIS interface, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Observation 3: SAT plan available during testing was not complete. To be effective, testing activities should be planned and implemented in a structured and disciplined fashion. Among other things, this includes developing effective test plans to guide the testing activities. According to relevant systems development guidance, a complete test plan (1) specifies the test environment, including test equipment, software, material, and necessary training; (2) describes each test to be performed, including test controls, inputs, and expected outputs; (3) defines the test procedures to be followed in conducting the tests; and (4) provides traceability between test cases and the requirements to be verified by the testing. This guidance also requires that the system owner concur with, and the IT project manager approve, the test plan before SAT testing.
As previously noted, the US-VISIT program office provided us with four versions of the SAT test plan. The first three versions of the plan were not complete. The final plan largely satisfied the above criteria. The September 18, 2003, test plan included a description of the test environment and a brief description of tests to be performed, but the description of the tests did not include controls, inputs, and expected outputs. Further, the plan did not include specific test procedures for implementing the test cases or provide traceability between the test cases and the requirements that they were designed to test. Similarly, the November 25, 2003, test plan included a description of the test environment and a brief description of tests to be performed, but the description of the tests did not include controls, inputs, and expected outputs. Further, the plan did not include specific test procedures for implementing the test cases or provide traceability between the test cases and the requirements they were designed to test. The December 18, 2003, test plan included a description of the test environment and a brief description of 55 tests to be performed. The plan also described actual test procedures and controls, inputs, and expected outputs for 24 of the 55 test cases. The plan included traceability between the test cases and requirements. The January 16, 2004, test plan included a description of the test environment; the tests to be performed, including inputs, controls, and expected outputs; the actual test procedures for each test case; and traceability between the test cases and requirements. None of the test plan versions, including the final version, indicated concurrence by the system owner or approval by the IT project manager.
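The traceability criterion described above can be illustrated with a minimal sketch: map each requirement to the test cases intended to verify it, then flag any requirement with no executed test case. The requirement and test-case identifiers below are hypothetical, not drawn from the US-VISIT test plans.

```python
# Hypothetical requirements-to-test-case traceability matrix.
traceability = {
    "REQ-001 record arrivals from manifest data": ["TC-01", "TC-02"],
    "REQ-002 1:1 biometric verification at entry": ["TC-03"],
    "REQ-003 CLAIMS 3 to ADIS interface": [],   # no test case traced
    "REQ-004 SEVIS to ADIS interface": [],      # no test case traced
}

# Test cases actually executed during SAT (illustrative).
executed = {"TC-01", "TC-02", "TC-03"}

def untested_requirements(trace, done):
    """Return requirements with no executed test case tracing to them."""
    return [req for req, cases in trace.items()
            if not any(tc in done for tc in cases)]

for req in untested_requirements(traceability, executed):
    print("NOT VERIFIED:", req)
```

A check like this makes gaps of the kind the report later notes, such as interface requirements with no executed test cases, visible before a system is deployed.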
The following graphic shows the SAT plans’ satisfaction of relevant criteria. According to US-VISIT officials, SAT test plans were not completed before testing began because the compressed schedule necessitated continuously updating the plan as documentation was provided by the component contractors. According to an IT management official, test cases were nevertheless available for ADIS and IDENT in these systems’ regression test plans or in a test case repository. Without a complete test plan for Increment 1, DHS did not have adequate assurance that the system was being fully tested, and it unnecessarily assumed the risk that errors detected would not be addressed before the system was deployed, and that the system would not perform as intended when deployed. In fact, postdeployment problems surfaced with the SEVIS interface, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Observation 4: SAT was not completed before the system became operational. The purpose of SAT is to identify and correct system defects (i.e., unmet system functional, performance, and interface requirements) and thereby obtain reasonable assurance that the system performs as specified before it is deployed and operationally used. SAT is accomplished in part by (1) executing a predefined set of test cases, each traceable to one or more system requirements, (2) determining if test case outcomes produce expected results, and (3) correcting identified problems. To the extent that test cases are not executed, the scope of system testing can be impaired, and thus the level of assurance that the system will perform satisfactorily is reduced. Increment 1 began operating on January 5, 2004. However, according to the SAT Test Analysis Report, testing was completed 2 days after Increment 1 began operating (January 7, 2004). Moreover, the Test Analysis Report shows that important test cases were not executed. 
For example, none of the test cases designed to test the CLAIMS 3 and SEVIS interfaces were executed. According to agency officials, the CLAIMS 3 to ADIS interface was not ready for acceptance testing before January 5, 2004. Accordingly, deployment of this capability and the associated testing were deferred; they were completed on February 11, 2004. Similarly, the SEVIS to ADIS interface was not ready for testing before January 5, 2004. However, this interface was implemented on January 5, 2004, without acceptance testing. According to program officials, the program owner and technical project managers were aware of the risks associated with this approach. By not fully testing Increment 1 before the system became operational, the program office assumed the risk of introducing errors into the deployed system and potentially jeopardizing its ability to effectively perform its core functions. In fact, postdeployment problems surfaced with the SEVIS interface as a result of this approach, and manual workarounds had to be implemented. According to the program officials, the problems are still being addressed. Observation 5: Independent verification and validation (IV&V) contractor’s roles may be conflicting. As we have previously reported,1 the purpose of independent verification and validation (IV&V) is to provide an independent review of system processes and products. The use of IV&V is a recognized best practice for large and complex system development and acquisition projects like US-VISIT. To be effective, the IV&V function must be performed by an entity that is independent of the processes and products that are being reviewed. The US-VISIT program plans to use its IV&V contractor to review some of the processes and products that the contractor may be responsible for. 
For example, the contractor statement of work, dated July 18, 2003, states that the contractor shall provide program and project management support, including providing guidance and direction and creating some of the strategic program and project level products. At the same time, the statement of work states that the contractor will assess contractor and agency performance and technical documents. U.S. General Accounting Office, Customs Service Modernization: Results of Review of First Automated Commercial Environment Expenditure Plan, GAO-01-696 (Washington, D.C.: June 5, 2001). Depending on the products and processes in question, this approach potentially does not satisfy the independence requirements of effective IV&V, because the reviews conducted could lack independence from program cost and schedule pressures. Without effective IV&V, DHS is unnecessarily exposing itself to the risk that US-VISIT increments will not perform as intended or be delivered on time and within budget. Observation 6: Program-level change control board has not been established. The purpose of configuration management is to establish and maintain the integrity of work products (e.g., hardware, software, and documentation). According to relevant guidance, system configuration management includes four management tasks: (1) identification of hardware and software parts (items/components/subcomponents) to be formally managed, (2) control of changes to the parts, (3) periodic reporting on configuration status, and (4) periodic auditing of configuration status. A key ingredient to effectively controlling configuration change is the functioning of a change control board (CCB); using such a board is a structured and disciplined approach for evaluating and approving proposed configuration changes. SEI’s Capability Maturity Model® Integration (CMMI℠) for Systems Engineering, Software Engineering, Integrated Product and Process Development, and Supplier Sourcing, Version 1.1 (Pittsburgh: March 2002).
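A program-level change control point of the kind this guidance describes can be sketched as a single board that evaluates each proposed change for US-VISIT-wide impact and records the decision in one authoritative configuration store. The component names come from the report; the data fields and the approval rule below are illustrative assumptions, not the program's actual process.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    item: str                        # configuration item, e.g. "IDENT", "ADIS", "IBIS"
    description: str
    program_impact_assessed: bool    # was US-VISIT-wide impact evaluated?

@dataclass
class ChangeControlBoard:
    # Single authoritative configuration store for decisions.
    configuration_log: list = field(default_factory=list)

    def decide(self, cr: ChangeRequest) -> str:
        # A change is approved only after its program-level impact is assessed,
        # not solely on the basis of component-system needs.
        status = "approved" if cr.program_impact_assessed else "deferred"
        self.configuration_log.append((cr.item, cr.description, status))
        return status

ccb = ChangeControlBoard()
print(ccb.decide(ChangeRequest("IDENT", "raise fingerprint scan timeout", True)))    # -> approved
print(ccb.decide(ChangeRequest("ADIS", "schema change for exit matching", False)))   # -> deferred
```

With a hub of this kind, each component system has one interface to the control point and one shared record of configuration status, rather than the many person-to-person coordination paths of the interim approach.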
According to the US-VISIT CIO, the program does not yet have a change control board. In the absence of one, program officials told us that changes related to Increment 1 were controlled primarily through daily coordination meetings (i.e., oral discussions) among representatives from Increment 1 component systems (e.g., IDENT, ADIS, and IBIS) teams and program officials, and the CCBs already in place for the component systems. The following graphic depicts the US-VISIT program’s interim change control board approach compared to a structured and disciplined program-level change control approach. In particular, the interim approach requires individuals from each system component to interface with as many as six other stakeholders on system changes. Moreover, these interactions are via human-to-human communication. In contrast, the alternative approach reduces the number of interfaces to one for each component system and relies on electronic interactions with a single control point and an authoritative configuration data store. Without a structured and disciplined approach to change control, the US-VISIT program does not have adequate assurance that approved system changes are actually made; that approved changes are based, in part, on US-VISIT impact and value rather than solely on system component needs; and most importantly, that changes made to the component systems for non-US-VISIT purposes do not interfere with US-VISIT functionality. Observation 7: Expenditure plan does not disclose management reserve funding. The creation and use of a management reserve fund to earmark resources for addressing the many uncertainties that are inherent in large-scale systems acquisition programs is an established practice and a prudent management approach. 
The appropriations committees have historically supported an explicitly designated management reserve fund in expenditure plans submitted for such programs as the Internal Revenue Service’s Business Systems Modernization and DHS’s Automated Commercial Environment. Such explicit designation provides the agency with a flexible resource source for addressing unexpected contingencies that can inevitably arise in any area of proposed spending on the program, and it provides the Congress with sufficient understanding about management reserve funding needs and plans to exercise oversight over the amount of funding and its use. The fiscal year 2004 US-VISIT expenditure plan does not contain an explicitly designated management reserve fund. According to US-VISIT officials, including the program director, reserve funding is instead embedded within the expenditure plan’s various areas of proposed spending. However, the plan does not specifically disclose these embedded reserve amounts. We requested but have yet to receive information on the location and amounts of reserve funding embedded in the plan.1 By not creating, earmarking, and disclosing a specific management reserve fund in its fiscal year 2004 US-VISIT expenditure plan, DHS is limiting its flexibility in addressing unexpected problems that could arise in the program’s various areas of proposed spending, and it is limiting the ability of the Congress to exercise effective oversight of this funding. In agency comments on a draft of this report, US-VISIT stated that it supported establishing a management reserve and would be revising its fiscal year 2004 expenditure plan to identify a discrete management reserve amount. Observation 8: Land POE workforce and facility needs are uncertain. 
Effectively planning for program resource needs, such as staffing levels and facility additions or improvements, depends on a number of factors, including the assumptions being made about the scope of the program and the sufficiency of existing staffing levels and facilities. Without reliable assumptions, the resulting projections of resource needs are at best uncertain. For entry at land POEs, DHS plans for Increment 2B do not call for additional staff or facilities. The plans do not call for acquiring and deploying any additional staff to collect biometrics while processing foreign nationals through secondary inspection areas. Similarly, these plans provide for using existing facilities, augmented only by such infrastructure improvements as conduits, electrical supply, and signage. For exit at land POEs, DHS's plans for Increment 2B also do not call for additional staff or facilities, although they do provide for installation of RF technology at yet-to-be-defined locations in the facility area to record exit information. US-VISIT Increment 2B workforce and facility plans are based on various assumptions, including (1) no additional foreign nationals will need to go to secondary inspection, and (2) the average time needed to capture the biometric information will be 15 seconds, based on the Increment 1 experience at air POEs. However, these assumptions raise questions for several reasons. According to DHS program officials, including the Acting Increment 2B Program Manager, the Director of Facilities and Engineering, and the Program Director, any policy changes that could significantly increase the number of foreign nationals who would require processing through US-VISIT could affect these assumptions and thus staffing and facilities needs. According to the Increment 1 pilot test results, the average time needed to capture biometric information is 19 seconds. 
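The sensitivity of the workforce assumptions to the capture-time figure can be illustrated with simple arithmetic. The 15- and 19-second figures come from the report; the daily secondary-inspection volume and the helper function below are hypothetical placeholders, not actual POE data or a DHS model.

```python
def added_officer_hours(travelers_per_day: int, seconds_per_capture: float) -> float:
    """Additional daily inspection workload, in officer-hours,
    attributable to biometric capture alone."""
    return travelers_per_day * seconds_per_capture / 3600.0

volume = 5_000  # hypothetical daily secondary-inspection volume at one land POE
planned = added_officer_hours(volume, 15)   # Increment 2B planning assumption
observed = added_officer_hours(volume, 19)  # Increment 1 pilot test result
print(round(planned, 1), round(observed, 1))  # 20.8 26.4
```

At this hypothetical volume, the 4-second gap between the planning assumption and the pilot result adds roughly 5.6 officer-hours of workload per day at a single POE, which is why modeling the impact before committing to staffing levels matters.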
Moreover, DHS facilities officials told us that they have yet to model the impact of even the additional 15 seconds for secondary inspections. Further, according to a report from the Data Management Improvement Act Task Force,1 existing land POE facilities do not adequately support even the current entry and exit processes. In particular, more than 100 land POEs have less than 50 percent of the required capacity (workforce and facilities) to support current inspection processes and traffic workloads. To assist in its planning, the US-VISIT program office has begun facility feasibility assessments and space utilization studies at each land POE. Until such analysis is completed, the assumptions being used to support Increment 2B workforce and facility planning will be questionable, and the projected workforce and facility resource needs will be uncertain. Data Management Improvement Act Task Force, Second Annual Report to the Congress (Washington, D.C., December 2003). To address these weaknesses, we recommend that DHS ensure that system test plans, at a minimum, (1) specify the test environment, including test equipment, software, material, and necessary training; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. We also recommend that DHS establish processes for ensuring the independence of the IV&V contractor; implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes; ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding; and assess the full impact of Increment 2B on land POE workforce levels and facilities, including performing appropriate modeling exercises. 
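The test-case-to-requirements traceability called for above can be sketched in a few lines: given a map from test cases to the requirements each one verifies, any requirement covered by no test case is a gap. This is an illustrative sketch only; the requirement and test-case identifiers are hypothetical, not drawn from US-VISIT documentation.

```python
def untraced_requirements(requirements: set[str],
                          trace: dict[str, set[str]]) -> set[str]:
    """Return requirements not verified by any test case in the
    traceability map (test case ID -> requirements it verifies)."""
    covered = set().union(*trace.values()) if trace else set()
    return requirements - covered

reqs = {"REQ-1", "REQ-2", "REQ-3"}                         # hypothetical requirement IDs
trace = {"TC-01": {"REQ-1"}, "TC-02": {"REQ-1", "REQ-3"}}  # hypothetical test cases
print(sorted(untraced_requirements(reqs, trace)))  # ['REQ-2']
```

A gap such as REQ-2 here is exactly the kind of omission that a complete test plan, finished and reviewed before testing begins, is meant to surface.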
To ensure that our recommendations addressing fundamental program management weaknesses are implemented quickly and completely, we further recommend that the Secretary direct the Under Secretary to have the program director develop a plan, including explicit tasks and milestones, for implementing all our open recommendations, including those provided in this report. We also recommend that this plan provide for periodic reporting to the Secretary and Under Secretary on progress in implementing this plan. Last, we recommend that the Secretary report this progress, including reasons for delays, in all future US-VISIT expenditure plans. To conduct our review, we assessed DHS's plans and ongoing and completed actions to establish and implement the US-VISIT program (including acquiring the US-VISIT system, expanding and modifying existing port of entry facilities, and developing and implementing policies and procedures) and compared them to existing guidance to assess risks. For DHS-provided data that we did not substantiate, we have made appropriate attribution indicating the data's source. We conducted our work at DHS's headquarters in Washington, D.C., and at its Atlanta Field Operations Office (Atlanta's William B. Hartsfield International Airport) from October 2003 through February 2004 in accordance with generally accepted government auditing standards. The following are GAO's comments on the Department of Homeland Security's letter dated April 27, 2004. 1. We do not agree that the US-VISIT program has a security plan. In response to our request for the US-VISIT security plan, DHS provided a draft document entitled Security and Privacy: Requirements & Guidelines Version 1.0. However, as we state in the report, this document does not include information consistent with relevant guidance for a security plan. 
For example, this guidance states that a system security plan should (1) provide an overview of the system security requirements, (2) include a description of the controls in place or planned for meeting the requirements, (3) delineate roles and responsibilities of all individuals who have access to the system, (4) describe the risk assessment methodology to be used, and (5) address security awareness and training. The document provided by DHS addressed two of these requirements—security requirements and training and awareness. As we state in the report, the document does not (1) describe specific controls to satisfy the security requirements, (2) describe the risk assessment methodology, and (3) identify roles and responsibilities of individuals with system access. Further, much of the document discusses guidelines for developing a security plan, rather than providing the specific content expected of a plan. 2. Although DHS has completed a Privacy Impact Assessment for Increment 1, the assessment is not consistent with the Office of Management and Budget guidance. This guidance says that a Privacy Impact Assessment should, among other things, (1) identify appropriate measures for mitigating identified risks, (2) discuss the rationale for the final design or business process choice, (3) discuss alternatives to the designed information collection and handling, and (4) address whether privacy is provided for in system development and documentation. While the Privacy Impact Assessment for US-VISIT Increment 1 discusses mitigation strategies for identified risks and briefly discusses the rationale for design choices, it does not discuss alternatives to the designed information collection and handling. Further, Increment 1 system documentation does not address privacy. 3. 
DHS’s comments did not include a copy of its revised fiscal year 2004 expenditure plan because, according to an agency official, OMB has not yet approved the revised plan for release, and thus we cannot substantiate its comments concerning either the amount or the disclosure of management reserve funding. Further, we are not aware of any unduly burdensome restrictions and/or approval processes for using such a reserve. We have modified our report to reflect DHS’s statement that it supports establishing a management reserve and the status of revisions to its expenditure plan. 4. We have modified the report as appropriate to reflect these comments and subsequent oral comments concerning the membership of the US-VISIT Advisory Board. 5. We do not believe that DHS's comments provide any evidence to counter our observation that the system acceptance test plan was developed largely during and after testing. In general, these comments concern the Increment 1 test strategy, test contractor and component system development team coordination, Increment 1 use cases, and pre-existing component system test cases, none of which are related to our point about the completeness of the four versions of the test plan. More specifically, our observation does not address whether or not an Increment 1 test strategy was developed and approved, although we would note that the version of the strategy that the program office provided to us was incomplete, was undated, and did not indicate any level of approval. 
Further, our observation does not address whether some unspecified level of coordination occurred between the test contractor and the component system development teams; it does not concern the development, modification, and use of Increment 1 “overarching” use cases, although we acknowledge that such use cases are important in developing test cases; and it does not address the preexistence of component system test cases and their residence in a test case repository, although we note that when we previously asked for additional information on this repository, none was provided. Rather, our observation concerns whether a sufficiently defined US-VISIT Increment 1 system acceptance test plan was developed, approved, and available in time to be used as the basis for conducting system acceptance testing. As we state in the report, to be sufficient such a plan should, among other things, define the full complement of test cases, including inputs and outputs, and the procedures for executing these test cases. Moreover, these test cases should be traceable to system requirements. However, as we state in our report, this content was added to the Increment 1 test plan during the course of testing, and only the version of the test plan modified January 16, 2004, contained all of this content. Further, DHS's comments recognize that these test plan versions were developed during the course of test execution and that the test schedule did not permit sufficient time for all stakeholders to review the versions. 6. We do not disagree with DHS’s comments describing the roles and responsibilities of its program office support contractor and its Federally Funded Research and Development Center (FFRDC) contractor. However, DHS’s description of the FFRDC contractor’s roles and responsibilities does not cover all of the taskings envisioned for this contractor. 
Specifically, DHS’s comments state that the FFRDC contractor is to execute such program and project management activities as strategic planning, contractor source selection, acquisition management, risk management, and performance management. These roles and responsibilities are consistent with the FFRDC contractor’s statement of work that was provided by DHS. However, DHS’s comments omit other roles and responsibilities specified in this statement of work. In particular, the comments do not cite that this contractor is also to conduct audits and evaluations in the form of independent verification and validation activities. It is this audit and evaluation role, particularly the independence element, which is the basis for our concern and observation. As we note above and state in the report, US-VISIT program plans and the contractor’s statement of work provide for using the same contractor both to perform program and project management activities, including creation of related products, and to assess those activities and products. Under these circumstances, the contractor could not be sufficiently independent to effectively discharge the audit and evaluation tasks. 7. We do not agree with DHS’s comment that we cited the wrong operative documentation pertaining to US-VISIT independent verification and validation plans. As discussed in our comment No. 6, the statement of work that we cite in the report relates to DHS plans to use the FFRDC contractor to both perform program and project management activities and develop related products and to audit and evaluate those activities and products. The testing contractor and testing activities discussed in DHS comments are separate and distinct from our observation about DHS plans for using the FFRDC contractor. Accordingly, our report does not make any observation regarding the independence of the testing contractor. 8. 
We agree that US-VISIT lacks a change control board and support DHS’s stated commitment to establish a structured and disciplined change control process that would include such a board. In addition to the individual named above, Barbara Collier, Gary Delaney, Neil Doherty, Tamra Goldstein, David Hinchman, Thomas Keightley, John Mortin, Debra Picozzi, Karl Seifert, and Jessica Waselkow made key contributions to this report.
The Department of Homeland Security (DHS) has established a program--the United States Visitor and Immigrant Status Indicator Technology (US-VISIT)--to collect, maintain, and share information, including biometric identifiers, on selected foreign nationals who travel to the United States. By congressional mandate, DHS is to develop and submit for approval an expenditure plan for US-VISIT that satisfies certain conditions, including being reviewed by GAO. Among other things, GAO was asked to determine whether the plan satisfied these conditions, and to provide observations on the plan and DHS's program management. DHS's fiscal year 2004 US-VISIT expenditure plan and related documentation at least partially satisfy all conditions imposed by the Congress, including meeting the capital planning and investment control review requirements of the Office of Management and Budget (OMB). DHS developed a draft risk management plan and a process to implement and manage risks. However, DHS does not have a current life cycle cost estimate or a cost/benefit analysis for US-VISIT. The US-VISIT program merges four components into one integrated whole to carry out its mission. GAO also developed a number of observations about the expenditure plan and DHS's management of the program. These generally recognize accomplishments to date and address the need for rigorous and disciplined program practices. US-VISIT largely met its commitments for implementing an initial operating capability, known as Increment 1, in early January 2004, including the deployment of entry capability to 115 air and 14 sea ports of entry. However, DHS has not employed rigorous, disciplined management controls typically associated with successful programs, such as test management, and its plans for implementing other controls, such as independent verification and validation, may not prove effective. 
More specifically, testing of the initial phase of the implemented system was not well managed and was completed after the system became operational. In addition, multiple test plans were developed during testing, and only the final test plan, completed after testing, included all required content, such as describing tests to be performed. Such controls, while significant for the initial phases of US-VISIT, are even more critical for the later phases, as the size and complexity of the program will only increase. Finally, DHS's plans for future US-VISIT resource needs at the land ports of entry, such as staff and facilities, are based on questionable assumptions, making future resource needs uncertain.
In 1975, DOD implemented the Reserve Components Common Personnel Data System (RCCPDS) to collect information on current and past members of the six reserve components—Army National Guard, Air National Guard, Army Reserve, Navy Reserve, Marine Corps Reserve, and Air Force Reserve. This information included data on reservists’ personal characteristics, such as name, Social Security number, date of birth, gender, home address, and education, as well as data on their military characteristics, such as service, reserve component, prior service status, and date of initial entry into the reserve forces. According to the director of DMDC, the services send daily, weekly, and monthly updated data submissions to DMDC in accordance with applicable guidance. After the first Gulf War, in a May 15, 1991, memorandum, DOD identified 16 recommendations requiring action by many offices within DOD regarding Desert Storm personnel data issues. For example, the memorandum said that DOD should consistently report on who participated in the operations and cites examples of key terms, such as in theater, that were being interpreted differently by DMDC, the Office of the Secretary of Defense, and the services. In December 1991, DOD reported on how DMDC provided information about operations Desert Shield and Desert Storm. This report cited areas for improvement. For example, the report indicated that DMDC created makeshift procedures to establish and maintain the new data sources and to accommodate varied data requests. The report cited that these procedures sometimes resulted in inconsistent or incomplete data being provided in response to a request. On May 2, 2001, DOD updated guidance to the military services, among others, to maintain a centralized database of active duty personnel. In this guidance, DOD requires the services to report personnel information about all active duty military servicemembers as well as reservists who are ordered to active duty. 
While this instruction called for the services to report information about servicemembers on active duty in support of a contingency, the requirements for reporting contingency data were not specific. On October 4, 2001, the Under Secretary of Defense, Personnel and Readiness (USD (P&R)), issued a memorandum that required the services to report personnel information to DMDC on all active and reserve component personnel mobilized or deployed in support of GWOT, in accordance with DOD guidance. The purpose of GWOT data was, among other things, to establish eligibility for benefits and entitlements as a result of participation in the named contingencies. The information is critical because it provides a historical database with which to assess the impact of policies and processes, events, and exposures on the health of deployed reserve component servicemembers. DMDC was tasked with providing reporting guidance to the services for these data submissions. DMDC sent this guidance to the services on October 12, 2001. DMDC is a civilian-led agency with a mission to deliver timely and quality support to its customers, and to ensure that data received from different sources are consistent, accurate, and appropriate when used to respond to inquiries. DMDC reports to the Deputy Under Secretary of Defense for Program Integration, who is in the Office of the USD (P&R) (see fig. 1). In February 2002, USD (P&R) reminded the services in another memorandum of its earlier requirement for reporting personnel data to DMDC and informed the services that they had 2 weeks to provide plans to DMDC on how they were going to correct any personnel data reporting problems. On August 6, 2004, DOD updated prior guidance regarding RCCPDS to include an enclosure that set out specific requirements for the services to report personnel information for all reserve component servicemembers supporting a named contingency, unlike previous guidance. 
The purpose of the new enclosure was to ensure more accurate reporting on a named contingency, such as GWOT missions, as well as to establish eligibility for benefits and entitlements, and to develop a registry of participants for tracking in support of research and evaluation of DOD programs and policies. According to DOD officials, the services, in general, were still reporting data according to previous guidance for a few years after the new guidance was issued. In August 2004, DMDC began operation of its CTS database to address DOD’s reporting requirements, including those in the new enclosure (that is, enclosure 11). The CTS database is DOD’s repository for collecting activation, mobilization, and deployment data for reservists who have served and continue to serve in support of GWOT. The CTS database contains both an activation file, which contains mobilization data, and a deployment file. Both files are updated monthly by service submissions and cover GWOT from September 11, 2001, to the present. The purpose of the activation file is to account for and provide medical and educational benefits for all reservists called to active duty in support of GWOT contingencies, and it allows DOD to provide data on the number of reservists who have been mobilized in support of GWOT. The purpose of the CTS deployment file is to account for a deployed servicemember’s deployment date and location during each deployment event in support of deployment health surveillance and DOD guidance. The database is also used to track and report the number of reservists who have been deployed in support of GWOT since September 11, 2001. Our analysis of DOD data indicates that more than 531,000 reservists have been mobilized in support of GWOT and more than 378,000 reservists, or about 71 percent of the number mobilized, have been deployed in support of GWOT through June 30, 2006 (see fig. 2). 
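The 71 percent figure reported above follows directly from the two counts. A minimal sketch of the calculation (the counts are from the report; the function name is illustrative):

```python
def deployment_rate(mobilized: int, deployed: int) -> float:
    """Share of mobilized reservists who were also deployed."""
    return deployed / mobilized

# Counts reported through June 30, 2006
rate = deployment_rate(531_000, 378_000)
print(f"{rate:.0%}")  # 71%
```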
The Army National Guard deployed the greatest number of reservists in support of GWOT from September 2001 through June 30, 2006, and, of those, the majority were deployed once. The data also indicate that the vast majority of reservists who deployed in support of GWOT were U.S. citizens, White, and male. Further, the data indicate that most of the reservists spent 1 year or less deployed. DOD guidance requires the services to report timely, accurate, and complete activation, mobilization, and deployment data. DOD guidance also requires DMDC to collect and maintain mobilization and deployment data from the services about the reservists. DOD is required by policy to report personnel data about reservists, such as service, service component, reserve component category, race, ethnicity, gender, citizenship status, occupation, unit, and volunteer status regarding a current mobilization. In addition, DOD is required by policy to capture deployment information, such as the location a reservist is deployed to and the dates the reservist was deployed to that location. Our analysis of DOD data indicates that more than 531,000 reservists have been mobilized in support of GWOT and more than 378,000 reservists, or 71 percent of the number mobilized, have been deployed in support of GWOT through June 30, 2006 (see fig. 2). The number of mobilizations and deployments peaked in fiscal year 2003, with about 206,000 reservists mobilized and about 127,000 reservists deployed (see figs. 3 and 4). Since fiscal year 2003, the total number of mobilizations has declined, while the number of deployments remained stable through fiscal year 2005. The Army National Guard has mobilized and deployed the greatest number of reservists—more than 230,000 mobilized and more than 163,000 deployed. The Navy Reserve had the fewest reservists mobilized—about 29,000—while the Marine Corps Reserve had the fewest deployed, with about 19,000 reservists (see fig. 2). 
The percentage of the total reservists mobilized or deployed varies across the fiscal years (see figs. 3 and 4). For example, looking at the percentage of mobilizations by component each year, Navy Reserve, Air Force Reserve, and Air National Guard mobilizations occurred early in GWOT and have generally declined over time. Conversely, the percentage of Army National Guard and Army Reserve mobilizations has generally increased over time. The greatest number of Army National Guard deployments—more than 60,000—occurred in fiscal year 2005 (see table 5 totals in app. II), and in that year the Army National Guard was also the largest deploying component, accounting for 52 percent of all deployments (see fig. 4). Although reservists usually deployed only once, some experienced multiple deployments (see fig. 5). For example, compared to the other reserve components, the Air National Guard and the Air Force Reserve had nearly half of their reservists deploying two and three or more times, but they tend to have shorter deployment cycles according to the Air Expeditionary Force cycle. Under this cycle, reservists deploy for about 120 days in a 20-month cycle. However, servicemembers assigned to stressed specialties deploy for longer periods and with greater frequency. At the unit level, some deployment rules have been modified to increase volunteerism or to add stability to key missions. The Army National Guard and the Marine Corps Reserve had the lowest percentage of reservists deploying two and three or more times, but they tend to have longer deployment cycles. In general, DOD policy stipulates that Army units spend 1 year “boots on the ground” in theater. This policy also states that Marine Corps units below the regimental or group level deploy for 7 months, while regimental and group headquarters units and above deploy for 12 months. 
This policy also states that the Chief of Naval Operations’ goal is for servicemembers to have a 6-month deployment with 12 months in a nondeployed status. Our analysis of DOD data indicates that across the services, the majority of reservists have been deployed once, and of those deployed in support of GWOT, most—about 307,000 reservists, or 81 percent—have spent a year or less deployed. By contrast, more than 65,000 reservists, or 17 percent, have spent more than 1 year but less than 2 years deployed, and about 6,000 reservists, or fewer than 2 percent, have spent more than 2 years deployed. The data also indicate that the Marine Corps Reserve had the highest percentage of reservists serving more than 2 years. In addition, very few—less than 1 percent—of Air National Guard reservists served more than 2 years (see fig. 6). Our analysis of DOD data indicates that most reservists who have deployed in support of GWOT through June 30, 2006, were members of the Selected Reserve (see fig. 7 and table 5 in app. II). The majority of units and individuals in each reserve component are part of the Selected Reserve. These units and individuals have been designated as so essential to the initial wartime mission that they have priority for training, equipment, and personnel over all categories of reservists. Congress authorizes end strength for Selected Reserve personnel each year. The authorized end strength for the Army National Guard has been about 350,000 for the past several years. For fiscal year 2005, data provided by the services to DMDC indicate that the Army National Guard deployed more than 60,000 Selected Reserve servicemembers, which represents the highest number of Selected Reserve servicemembers deployed in a single fiscal year by a single reserve component since GWOT began. 
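The deployment-cycle lengths cited above imply very different shares of time spent deployed across components. A sketch of that arithmetic, using the cycle figures from the report (the helper function and the 30-day-month conversion are illustrative assumptions, not DOD's method):

```python
def deployed_fraction(deployed_months: float, cycle_months: float) -> float:
    """Fraction of a full deployment cycle spent deployed."""
    return deployed_months / cycle_months

# Air Expeditionary Force cycle: about 120 days deployed in a 20-month cycle
air = deployed_fraction(120 / 30, 20)  # assumes 30-day months
# Navy goal: 6 months deployed, 12 months nondeployed (18-month cycle)
navy = deployed_fraction(6, 18)
print(round(air, 2), round(navy, 2))  # 0.2 0.33
```

Shorter, more frequent cycles of this kind are consistent with the Air components' higher share of reservists deploying two or more times.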
Although the services are authorized a maximum number of selected reservists, the actual number of reservists fluctuates as additional reservists are recruited or others leave the reserve component. In addition, reservists outside the Selected Reserve, such as those in the Individual Ready Reserve, are also available for deployment. In general, reservists are trained in specific skills and specialties and may not be suited to deploy for a particular mission until additional training is provided. In addition, some reservists may not be available for deployment because they are in training, on medical leave, or awaiting training. Our analysis of DOD data indicates that almost 98 percent of reservists who have deployed in support of GWOT through June 30, 2006, were U.S. citizens at the time of their most current deployment (see fig. 8). The data indicate that about 1 percent of reservists were non-U.S. citizens or non-nationals at the time of their most current deployment. The citizenship status of more than 1,400 reservists was unknown. DOD data also indicate that 168 reservists’ citizenship status changed. Table 1 shows the citizenship status of reservists by reserve component by fiscal year. Our analysis of DOD data indicates that about 78 percent of those deployed for GWOT were White; about 14 percent were Black or African American; about 2 percent were Asian, Native Hawaiian, or Other Pacific Islander; and about 1 percent were American Indian or Alaskan Native (see table 2). Overall, about 5 percent of the deployed reservists declined to indicate their race. The Army National Guard, the Air National Guard, and the Air Force Reserve had the highest percentages of reservists who identified themselves as White. Further, about 90 percent of those who responded identified themselves as non-Hispanic and 8 percent as Hispanic (see table 3). Our analysis of DOD data indicates that about 338,000 reservists, or about 89 percent of the number deployed, were male (see table 4).
About 11 percent of those deployed in support of GWOT were female. Of the approximately 163,500 Army National Guard servicemembers who have been deployed through June 30, 2006, more than 92 percent were male. Almost 98 percent of Marine Corps Reserve servicemembers deployed in support of GWOT through June 30, 2006, were male, the highest percentage of males compared with females among all of the reserve components. Our analysis of DOD data indicates that California, Texas, Pennsylvania, and Florida had the highest numbers of reservists who have deployed in support of GWOT through June 2006 (see table 6 in app. II for the number of reservists deployed by state of residence by reserve component by fiscal year). These 4 states combined had more than 76,000 reservists in residence at the time of their deployments. Eleven states deployed more than 10,000 reservists each, accounting for more than 160,000 reservist deployments. Of those deployed, about 39 percent came from states in the South, about 23 percent from the Midwest, about 18 percent from the West, and about 15 percent from the Northeast. California and Texas each had more than 20,000 reservists indicating it as their state of residence at the time they were deployed (see fig. 9). Nineteen states and 5 territories had fewer than 5,000 reservists in residence at the time of their deployment, and 20 states and 1 territory had from 5,000 to 9,999 reservists in residence at the time of their deployment. Our analysis of DOD data indicates that since GWOT began, the occupational areas of enlisted reservists deployed in support of GWOT have remained generally consistent across all services. For example, the Army National Guard, the Air Force Reserve, and the Marine Corps Reserve have deployed reservists mostly in infantry occupational areas, including such groups as infantry, air crew, and combat engineering.
All six reserve components have deployed electrical and mechanical equipment repairers, such as automotive, aircraft, and armament and munitions repairers. Three of the six reserve components—the Army National Guard, the Army Reserve, and the Marine Corps Reserve—have deployed reservists who are service and supply handlers, such as law enforcement and motor transport personnel. Since GWOT began, the occupational areas most deployed for reserve component officers have varied, but all reserve components primarily deployed tactical operations officers, including the ground and naval arms, helicopter pilot, and operations staff subgroups. The Army National Guard, the Air National Guard, and the Navy Reserve have deployed engineering and maintenance officers, such as those in the communications and radar and aviation maintenance occupational subgroups. The Air National Guard, the Army Reserve, and the Air Force Reserve have deployed reservists in the health care officer occupational areas, including physicians and nurses. The Army Reserve and the Marine Corps Reserve have deployed reservists in supply and procurement occupational areas that include the transportation, general logistics, and supply occupational subgroups. The Air Force Reserve has also deployed intelligence officers in occupational subgroups such as general intelligence and counterintelligence. We were unable to analyze the volunteer status variable because the data do not exist for all of the reserve components. Similarly, we were unable to analyze the deployment location and deployment unit variables because we determined, in agreement with DMDC officials, that the data in these fields were not reliable. This issue is discussed further below. While we found selected deployment and mobilization data to be sufficiently reliable for our purposes (that is, providing descriptive data), some of the data were not reliable enough for us to report, even for descriptive purposes.
DMDC and the services, as required by DOD policy, have taken steps to improve the reliability of the mobilization data; however, more action is needed to improve the reliability of CTS data and DMDC’s analyses of those data. For example: (1) the rebaselining effort resulted in substantial changes to the mobilization data, and the Army—which has mobilized and deployed the largest number of reservists for GWOT—has not completed this rebaselining effort, which the Joint Staff tasked DMDC and the services to undertake in November 2005; (2) we identified data issues that DOD has not addressed and that, if resolved, could further improve the reliability of the data, such as standardizing the use of key terms like deployment; and (3) DMDC does not have effective controls for ensuring the accuracy of the data analyses it uses to produce reports, as required by federal government internal control standards. Although DMDC and DOD have undertaken a major data cleaning—or rebaselining—effort to improve the reliability of mobilization data, the effort does not address some fundamental data quality issues. We recognize that such a large-scale effort, although replete with challenges, is a positive step toward better quality data. However, if data reporting requirements and definitions are not uniform, and if there are no quality reviews of DMDC’s analyses, some data elements and DMDC’s analyses of those data may continue to be unreliable. A senior DMDC official stated that DMDC emphasizes getting data to customers in a timely manner rather than documenting the internal control procedures needed to improve the reliability of the data and the data analyses produced. However, with proper internal controls, DMDC could potentially achieve both timeliness and accuracy. Without reliable data and analyses, DOD cannot make sound data-driven decisions about reserve force availability. Moreover, DOD may not be able to link reservists’ locations with exposure to medical hazards.
We have found the deployment and mobilization data we used to be sufficiently reliable for our purposes (that is, providing descriptive data), and DMDC and the services have recently taken steps to improve the reliability of mobilization data. However, additional steps are needed to make mobilization data more reliable. As previously noted, DOD guidance requires the services to report timely, accurate, and complete activation, mobilization, and deployment data. DMDC officials responsible for overseeing the CTS database stated that a rebaseline of the deployment data was not necessary because the deployment data matched the data in the Defense Finance and Accounting Service’s (DFAS) systems by more than 98 percent. Although DMDC’s and the services’ rebaselining of the mobilization data in CTS has resulted in improvements, the Army, which has mobilized the greatest number of reservists for GWOT, has not completed its rebaselining effort. A senior-level DMDC official responsible for overseeing the CTS database said that the mobilization data in the CTS database prior to the rebaselining effort were less than 80 percent accurate for the Army, the Navy, and the Air Force, but that the Marine Corps’ data were generally considered to be accurate prior to the rebaselining effort. The official also stated that DMDC expects that the mobilization data within the CTS database will be 90 percent accurate because of this rebaselining effort, which was still ongoing through August 2006. While we recognize that this is a considerable undertaking, to date, only the Navy and the Air Force have validated or certified their mobilization data files. Navy officials said that the Navy has validated its personnel records and established a common baseline of data with DMDC. Air Force Reserve officials said that their data within CTS are now 99 to 100 percent accurate.
The Chief of the Personnel Data Systems Division for the Air National Guard certified that although file discrepancies are still being reconciled, the data that were processed by DMDC on June 11, 2006, were the most accurate activation data and that data accuracy will improve with each future file sent to DMDC. The DMDC official said that the Marine Corps had only partially completed its rebaselining effort and would not be finished until the Marine Corps provided its August 2006 data file in September 2006. The Army National Guard and the Army Reserve are still working to rebaseline their mobilization data, and the Army has not provided a time frame for completing the effort. However, we still have concerns regarding the reliability of the mobilization data because the scope of the rebaselining effort changed and the data changed substantially as a result of the rebaselining. At the beginning of our review, DMDC and the services referred to the rebaselining effort as a “reconciliation,” which, according to a DMDC official and a Reserve Affairs official, would have resulted in all data (current and past) being reviewed and corrected as needed. We acknowledge that some degree of change is expected in any data cleaning effort, especially with large-scale, multisource collection methods such as DMDC’s data collection process. However, our experience has shown that cleaning efforts that result in a large degree of change suggest systematic error. Such error raises concerns about the reliability of both the original data and the “cleaned” data. If both the source data and the cleaned data are populated with the same assumptions and information, any reconciliation of data points should result in relatively small changes that correct simply for random error, such as keypunch or data source errors. However, for some variables, the data changed substantially as a result of DMDC’s and the services’ rebaselining, or data cleaning, effort.
Our analysis shows that data from the period of September 2001 through December 2005 have changed by about 4 percent to as much as 20 percent. For example:

• The number of reservists mobilized for GWOT through December 2005 went from about 478,000 to about 506,000—an increase of more than 27,000 reservists, or a change of more than 5 percent.

• The Army Reserve data sustained the greatest change during this time, with a more than 19 percent increase in the number of reservists mobilized.

• The number of mobilized Army National Guard reservists increased more than 7 percent. According to a senior DMDC official, the Army data are expected to continue to change, perhaps substantially enough to require the rebaselining of the data again in the future.

• The number of Air National Guard reservists mobilized decreased by more than 13 percent.

• The Navy Reserve, the Marine Corps Reserve, and the Air Force Reserve data all changed about 5 percent.

DOD officials stated that the rebaselining effort occurred because the Joint Staff tasked DMDC and the services with ensuring that the data the Joint Staff’s Manpower and Personnel office was using in CTS were the same data as the services were using to determine reserve force availability. According to a senior-level DMDC official responsible for overseeing CTS, the rebaselining effort’s scope changed because all of the services agreed that starting over and replacing all of the data would make more sense than trying to correct transactions already in CTS, because the services found errors in the CTS files initially used for the reconciliation. Service officials said that some of the data discrepancies developed because of a DMDC quality check procedure that sometimes resulted in DMDC replacing the service-submitted data with data from other sources. DMDC officials said that they did this because the services were unable to report some of the required CTS data.
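The change-magnitude reasoning described above can be sketched in a few lines: compare each component's pre- and post-rebaselining counts and flag changes too large to attribute to random error. The sketch below is illustrative only; the overall counts are the approximate figures cited in this report, while the second component's figures and the 5 percent threshold are assumptions chosen for illustration, not DOD or DMDC values.

```python
# Compare pre- and post-rebaselining mobilization counts and flag changes large
# enough to suggest systematic (rather than random) error.
before_after = {
    "All reserve components": (478_000, 506_000),  # approximate totals cited in this report
    "Hypothetical component": (90_000, 78_000),    # illustrative figures only
}

# Threshold is an assumption for illustration; it is not a DOD or DMDC standard.
THRESHOLD_PERCENT = 5.0

def percent_change(before, after):
    """Percent change from the pre-cleaning count to the post-cleaning count."""
    return (after - before) / before * 100

for component, (before, after) in before_after.items():
    change = percent_change(before, after)
    status = ("large change; review for systematic error"
              if abs(change) > THRESHOLD_PERCENT
              else "within expected random variation")
    print(f"{component}: {change:+.1f} percent ({status})")
```

Applied to the overall figures above, this check reports a change of roughly 5.9 percent, just past the illustrative threshold, which is consistent with the report's concern that the rebaselining produced more than routine corrections.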
According to DMDC officials, service submissions have become more complete over time, resulting in DMDC now using the quality check procedures only to check the data rather than to populate the CTS database. A senior DMDC official stated that DMDC expected the data to change substantially based on the issues identified with service data during the initial reconciliation effort and the subsequent rebaselining effort. Because the rebaselining effort is not complete and the Army—which has mobilized and deployed the largest number of reservists for GWOT—has not finished the rebaselining, we do not know how much the data will continue to change as DMDC and the services work to finish this effort. DOD data on reservists’ mobilizations and deployments are important because decision makers at DOD and in Congress need the data to make sound decisions about personnel issues and for planning and budgeting purposes. Prior to the rebaselining effort, some services recognized that there were data issues that needed to be addressed and took steps to do so, as DOD guidance requires the services to report accurate and complete mobilization and deployment data. However, DOD has not fully addressed some data issues whose resolution would ensure more accurate, complete, and consistent mobilization and deployment data across the services in the future. Some examples of data issues being addressed include the following:

• The Air Force and the Navy were having difficulty tracking mobilizations based on reservists’ mobilization orders; as a result, both services are independently working to develop and implement systems that write reservists’ orders.

• The Army Reserve recently began to modify its mobilization systems, which Army officials expect will improve the collection of reservists’ mobilization data.

• The Air Force identified problems with the way in which the Defense Enrollment Eligibility Reporting System (DEERS) processed end dates for reservists’ mobilizations, which resulted in some reservists not receiving appropriate benefits (for example, dental benefits). Air Force officials worked with officials from the Office of the Secretary of Defense and DMDC to identify and address the data processing logic issues.

Despite these positive steps, service process improvements are not all complete, and further, there has been no comprehensive review across DOD to identify data issues that, if addressed, could result in more complete, accurate, and consistent mobilization and deployment data across and within the services. Reserve Affairs officials in the office of Reserve Systems Integration said that a more sustainable fix to the processes of collecting data is needed to ensure that data captured in the future are accurate and more efficiently collected. We agree and have identified some issues that may continue to affect data reliability, such as the following:

• The use of terms, such as activated, mobilized, and deployed, has not been standardized across the services. Although the department has defined these terms in the Department of Defense Dictionary of Military and Associated Terms, the terms are used differently by the individual services. In the Air Force, “activation” can refer to the time when a reservist either volunteers or is involuntarily mobilized; however, the term “mobilized” refers only to someone who is not a volunteer. Even within a single service, these words can have different meanings. For example, an Army National Guard official who participated in the rebaselining effort said that Army National Guard servicemembers who backfill active duty servicemembers are not considered deployed since they have not left the United States. However, according to this official, some staff in the Army National Guard use “deployed” to include reservists who are mobilized within the United States.

• There is no single data entry process that would minimize the potential for contradictory data about reservists in multiple systems. Currently, data about reservists are entered separately into multiple systems.

• There is no mechanism for DMDC to ensure that the services are addressing the data inconsistencies DMDC identifies during its ongoing, monthly validation process, such as Social Security numbers that are duplicated in two reserve components.

• DOD has taken an ad hoc, episodic approach to identifying data reporting requirements and to addressing data issues. DOD has periodically issued policies regarding its need to collect and report specific data, such as volunteer status and location deployed, about active duty servicemembers and reservists. As a result of changing requirements, many of these policies have addendums with additional data requirements that are not immediately supported by the services’ existing data collection systems. Over time, this has led to disjointed, overlapping policies that require the services to modify their existing systems and processes, which can take months to complete.

• There are incomplete data submissions across the services. Specifically, data on volunteer status were not available in CTS for all service components, and the location deployed and deploying unit data were not reliable enough for the purposes of this report. Only three of the six reserve components—the Air National Guard, the Marine Corps Reserve, and the Air Force Reserve—provide information on a reservist’s volunteer status, which neither we nor DMDC report because it is not available for all six components. Further, DMDC officials said that they consider CTS location data incomplete, although the data are improving with each fiscal year.
DMDC officials said that most unit information is based on the unit a reservist is assigned to and may not represent the unit the reservist is currently deployed with in theater. For this reason, we did not consider these data reliable enough to report. A DMDC official stated that DMDC does not have the authority to direct the services to correct data errors or inconsistencies or to address data issues. DMDC does, however, work with the services and tries to identify and address data challenges. Some service officials said that the department plans to implement a new, integrated payroll and personnel system—the Defense Integrated Military Human Resource System (DIMHRS)—and that the services have been diverting resources needed to modify their existing systems and relevant processes to support DIMHRS. However, our past work has shown that DOD has encountered a number of challenges with DIMHRS, which is behind schedule; the current schedule has it available no sooner than April 2008, when the Army is scheduled to begin implementing the system. In general, service officials said that they are working to collect data on volunteer status, location deployed, and deploying unit; however, Air Force officials stated that they do collect data on location deployed and deploying unit and that these data are accurate and are being provided to DMDC. Army Reserve officials stated that they currently do not have plans to collect data on volunteer status. DMDC has not documented (1) its procedures for verifying that the data analyses it performs are correct or (2) its procedures for the monthly validation of service data and for performing analyses of the data. Documenting either set of procedures as part of DMDC’s verification process would address some of our concerns about internal controls. DMDC is required by policy to develop and produce reports about mobilization data and respond to requests for information about deployed personnel.
DOD policy requires DMDC and the reserve components to ensure the accuracy of files and the resulting reports. Federal government internal control standards require that data control activities, such as edit checks, verifications, and reconciliations, be conducted and documented to help provide reasonable assurance that agency objectives are being met. DMDC officials said that they have internal verification procedures that require supervisors to review all data analyses used to generate reports, although these procedures are not documented. Specifically, the supervisors are to review (1) the statistical programming code used to generate the data analyses to ensure that the code includes the customer’s data analyses parameters (that is, the assumptions used to produce the analyses) and (2) the “totals” generated to ensure that these totals match the control totals that show the number of reservists currently or ever mobilized or deployed in support of GWOT. DMDC officials acknowledge the importance of verifying the accuracy of the data analyses prior to providing the reports to customers, and they stated that they had verified the accuracy of the analyses provided to us. However, we found numerous errors in the initial and subsequent analyses we received of the GWOT data through May 2006, causing us to question whether DMDC verified the data analyses it provided to us and, if it did, whether the current process is adequate. For example, we found that DMDC had done the following:

• Counted reservists with more than one deployment during GWOT also among those who deployed only once during GWOT, which resulted in overcounting the number of reservists’ deployments.

• Used ethnicity responses to identify race despite having told us that the internal policy was changed in 2006 and that this was no longer an acceptable practice.

• Counted reservists whose ethnicity was “unknown” as “non-Hispanic,” although “unknown” does not necessarily mean someone’s ethnicity is “non-Hispanic” and there was a category for unknowns.

• Repeatedly categorized data based on a reservist’s first deployment (when there was more than one) despite agreeing to modify this analytical assumption so that we could present data by the reservist’s most current deployment.

• Reported thousands of reservists as having changed citizenship status during GWOT although, in our analyses, we found that only 168 reservists had changed status.

• Analyzed data by reserve component categories (for example, Selected Reserve and Individual Ready Reserve) rather than by reserve component as we had asked. By analyzing the number of days a reservist was deployed by reserve component category, a reservist could be counted multiple times within one component if he or she changed category. This error affected the way in which the total number of days a reservist was deployed was calculated. For example, if the same reservist served 350 days as an Army National Guard Selected Reserve member and an additional 350 days as an Army National Guard Individual Ready Reserve member, he or she would be counted as two reservists who were each deployed for less than a year. However, our intent was to report that the same individual had been deployed for a total of 700 days. In our analysis, all of a reservist’s days deployed were totaled and counted once for each reserve component, regardless of which category he or she belonged to when deployed.

• Miscoded the end date for the analysis of how many days reservists were deployed for GWOT. This resulted in up to an additional 90 days of deployment being counted for reservists who were still deployed at the time the data were submitted to DMDC.
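The category-versus-component counting error described above can be avoided by totaling days deployed per reservist within a component, regardless of reserve category. A minimal sketch of that aggregation, using the 350-day example from this report; the record layout, field names, and identifiers are hypothetical, not actual CTS fields:

```python
from collections import defaultdict

# Hypothetical deployment records: (reservist_id, component, category, days_deployed).
records = [
    ("A123", "Army National Guard", "Selected Reserve", 350),
    ("A123", "Army National Guard", "Individual Ready Reserve", 350),
    ("B456", "Army National Guard", "Selected Reserve", 200),
]

# Total days per (reservist, component), ignoring reserve category, so a reservist
# who changes category is counted once rather than once per category.
totals = defaultdict(int)
for reservist, component, _category, days in records:
    totals[(reservist, component)] += days

# Reservist A123 is one person deployed 700 days, not two people each under a year.
print(totals[("A123", "Army National Guard")])  # 700
```

Keying the totals on (reservist, component) rather than (reservist, component, category) is what distinguishes the intended analysis from the erroneous one: the same records grouped by category would yield two sub-year deployments instead of one 700-day total.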
In our discussions with DMDC officials, they readily acknowledged that errors had been made, although they stated that the analyses had undergone supervisory review prior to our receiving them. During these discussions, we also discovered that many of these errors occurred because DMDC had not used all of our data analyses parameters, although these officials had stated that this was one of the verification process steps followed. Although we were able to work with DMDC officials and identify the analytical assumptions they were going to use to complete our analyses, without documented analytical procedures, it is unclear to what degree the analyses DMDC provides to other users of the data also contain errors since many may not similarly verify the analyses provided to them by DMDC. In addition, DMDC officials have not documented additional processes that would further support a verification process, such as (1) the ongoing, monthly validation process of service-provided data and (2) the procedures to perform analyses and generate reports, including the assumptions DMDC uses when producing periodic and special reports for customers. In the past, according to the services, the ongoing, monthly validation process DMDC used resulted in two sets of data—one set of service data and one set of DMDC data—that may not have been the same. For example, we were told by the Air Force that, in some cases, service data were replaced with default values because of a business rule that DMDC applied to the data and that this change resulted in errors to the service-provided data. These inconsistent data caused the Joint Staff to request that the services and DMDC reconcile the data. As stated above, there were errors in the analyses performed to generate the reports DMDC provided to us, including DMDC’s not using many of the assumptions we agreed to for the analyses. DMDC also made errors that contradicted its own undocumented policy. 
A senior DMDC official said DMDC has not documented these procedures because the organization emphasizes getting data and reports to its customers in a timely manner rather than preparing this documentation. This official said that documentation is not a top priority because situations change rapidly, and it would be hard to keep these documents up-to-date. The official also said that the errors made in the analyses provided to us were caused by human error and the need to provide data quickly. Further, the DMDC official said that while there are standard data requests that are generated frequently, GAO’s request was an ad hoc request, and the procedures for addressing such requests, in practice, are not as well defined. While we agree that our requests met DMDC’s definition of an ad hoc request, we disagree that sufficient time was not allowed for DMDC to prepare the analyses. For the initial request, we worked with DMDC over the course of about 5 business days to define the analytical assumptions that would be used during the analysis. DMDC then took about 8 business days to complete the analysis and provide it to us. DOD data analyses are important because decision makers at DOD and in Congress need the data to make sound decisions about reserve force availability, medical surveillance, and planning and budgeting. In the absence of documented procedures and the necessary controls to ensure that they are implemented, it is difficult for an organization to ensure that it has established a robust process that is being consistently applied and that accurate results are being achieved. Joint Staff and Reserve Affairs officials are emphasizing the need to use one data source for most analyses to further reduce the inconsistencies in data analyses because service-produced analyses and DMDC-produced analyses could differ if both are not using the same set of data and assumptions. 
Otherwise, it is possible that the data analyses provided to decision makers at DOD or in Congress will be incomplete and inconsistent. If the data analyses are incorrect, users could draw erroneous conclusions based on the data, which could lead to policies that affect reservists in unanticipated ways. DOD recognizes the need for accurate, complete, and consistent data and data analyses, and it has taken some preliminary, ad hoc steps to improve its data, including undertaking a considerable effort to rebaseline its mobilization data. It has not, however, addressed some of the inconsistencies in data and data analyses departmentwide, such as when terms are used differently from one service to the next. Further, service officials stated that they anticipate that many of these problems will be addressed when DIMHRS is implemented. However, the schedule for DIMHRS continues to slip, so it is unclear when this solution will be available. We recognize that accurate, complete, and consistent data and data analyses about reservist mobilization and deployment are always important, and even more so during higher levels of mobilization and deployment, such as is the case now with GWOT. This is especially true since, in general, there are restrictions on the maximum length of time a reservist can be involuntarily activated. Thus, having accurate and complete data on a reservist’s status is critical for determining availability for future deployments. This is particularly the case for the CTS data, since the Manpower and Personnel office in the Joint Staff and the Office of the Assistant Secretary of Defense for Reserve Affairs rely mostly on the data found in CTS. These data also help DOD and Congress to understand the potential impacts of policy decisions as they relate to reservists who are eligible for TRICARE Reserve Select and educational benefits based on the number of days a reservist is deployed.
DOD has not provided guidance to the services to better define and standardize the use of key terms. DOD also has not collected and maintained all essential data, nor has it established a process for ensuring that data inconsistencies are resolved. Further, DOD has not documented key procedures and processes for verifying the data analyses it provides to its customers, thus compromising its ability to ensure the accuracy, completeness, and consistency of these analyses. Until decision makers in DOD and Congress have accurate, complete, and consistent data and analyses, they will not be in the best position to make informed decisions about the myriad reserve deployment matters. We recommend that the Secretary of Defense take the following four actions:

• Direct the Under Secretary of Defense, Personnel and Readiness, to provide guidance to the services to better define and standardize the use of key terms, like activation, mobilization, and deployment, to promote the completeness, accuracy, and consistency of the data within CTS.

• Direct the service secretaries to (1) take the steps necessary to provide all required data to DMDC, such as volunteer status and location deployed, and (2) have the services address data inconsistencies identified by DMDC.

• Direct the service secretaries to establish the needed protocols to have the services report data consistent with the guidance above.

• Direct the Under Secretary of Defense, Personnel and Readiness, to require DMDC to document its internal procedures and processes, including the assumptions it uses in data analyses. In doing this, the Under Secretary of Defense, Personnel and Readiness, should collaborate with the Office of the Assistant Secretary of Defense for Reserve Affairs and the Joint Staff on the reasonableness of the assumptions established and used by DMDC in its data analyses.
The Under Secretary of Defense, Personnel and Readiness, provided written comments on a draft of this report and stated that we changed one of our original audit objectives and did not inform the department of this change. We disagree. While the scope of our audit did change after our initial notification letter of June 17, 2005, was sent to DOD, we notified the proper officials of this change in a December 2, 2005, email to the agency-designated liaison within the DOD Inspector General’s office. In this email, we specifically said that we would be contacting DMDC and that we would be focusing on data for reserve component activation, mobilization, and deployment for GWOT. In accordance with generally accepted government auditing standards (GAGAS), GAO analysts are expected, as appropriate, to review an agency’s internal controls as they relate to the scope of the performance audit. Specifically, we are required by GAGAS to review the reliability of the data and the data analyses provided to us. To assess the reliability of data and data analyses, we often review an agency’s internal controls that are put in place to ensure the accuracy of the data and analyses. As we discuss in our report, we found the data to be sufficiently reliable for our purposes. However, over the course of the work, the analyses of the data DMDC provided to us continued to have errors. This raised concerns about the adequacy of DMDC’s internal controls for preparing and verifying these analyses, which DMDC stated were not documented. In accordance with GAGAS, when reporting on the results of their work, auditors are responsible for disclosing all material or significant facts known to them which, if not disclosed, could mislead knowledgeable users or misrepresent the results. Consistent errors in DMDC’s analyses led us to include an audit objective on the reliability of the data and the data analyses. 
In its written comments, DOD generally concurred with three of our recommendations and did not concur with one of our recommendations. DOD also provided technical comments, which we have incorporated in the report, as appropriate. Regarding our recommendation that DOD provide guidance to the services to better define and standardize the use of key terms, DOD stated that this requirement has already been addressed because these terms are defined. We acknowledged in our draft report that these key terms are defined in the Department of Defense Dictionary of Military and Associated Terms. However, as we state in our report, our audit work indicates that the services are not operationalizing the use of the terms in a consistent manner. The intent of our recommendation is to have DOD standardize the use of the key terms across the services. DOD generally concurred with our recommendation that the services provide all required data to DMDC and address data inconsistencies, and stated that the services have been directed to provide all necessary data and are working to address data inconsistencies. While we agree that the services are working with DMDC to address data inconsistencies with regard to the rebaselining of mobilization data, we also identified other data inconsistencies that DOD has not addressed, such as Social Security numbers that are duplicated in more than one reserve component. We agree with DOD that some requirements cannot be immediately supported by service data systems and modifications to them can take time to complete. However, as our report notes, some service officials stated that resources are being diverted from these efforts to the DIMHRS program, which we reported is behind schedule. 
We continue to observe the need for the services to provide all necessary data, to address these data inconsistencies, and to establish needed protocols to have the services report data consistent with DOD guidance, especially since the data are used to determine reserve force availability and for medical surveillance. DOD also generally concurred with our recommendation that DMDC document its internal procedures and processes, including the assumptions it uses in data analyses. In its written comments, DOD stated that DMDC is in the process of developing documentation on its internal procedures and processes and has a draft that addresses the processes used from receipt of the data from the service components to the final quality control of the consolidated file. DOD also stated that DMDC has a draft product regarding many of the data analyses procedures used. During this engagement, we asked if these procedures and processes were documented. As we say in the report, DMDC stated that they were undocumented and that documenting them was not a priority. Although DOD stated that it is in the process of drafting these procedures and processes, we were never provided a draft of these documents. DOD also stated that while DMDC attempts to document the assumptions made in resulting report titles and footnotes, the disclosure of assumptions used in data analyses remains the responsibility of the requester of the data analyses. Although we agree that the requesters of the data bear responsibility to disclose the analytical assumptions used in the data analyses, our audit work indicates that there are basic assumptions that DMDC establishes and uses that, if documented and discussed with those who request data analyses, would allow the users to understand how the information can be used, as well as the limitations of the data analyses. 
For example, during a discussion with a Reserve Affairs official, who uses the data analyses provided by DMDC to provide information to senior DOD officials, we stated that DMDC defaults to using a servicemember’s first deployment rather than the most current deployment when preparing data analyses. This official was unaware that DMDC used this assumption and stated that the expectation was that DMDC was using the most current deployment to generate the analyses. This official planned to discuss this issue with DMDC in the future. In its written comments, DOD did not concur with what it characterized as our fourth recommendation. Specifically, DOD separated a single recommendation into two recommendations. In the draft report we sent to DOD, the recommendation read: “We recommend that the Secretary of Defense direct the Under Secretary of Defense, Personnel and Readiness, to require DMDC to document its internal procedures and processes, including the assumptions it uses in data analyses. In doing this, the Under Secretary of Defense, Personnel and Readiness, should collaborate on the reasonableness of the assumptions used by DMDC in its data analyses with the Office of the Assistant Secretary of Defense for Reserve Affairs and the Joint Staff.” DOD stated that DMDC is a support organization that generates reports for a multitude of organizations and that each organization that requests reports provides the assumptions that DMDC uses to develop the reports. However, our audit work showed that DMDC has established and uses some basic assumptions in analyzing data and that DMDC may not always discuss these assumptions with other DOD offices, such as Reserve Affairs. As a result, we continue to emphasize the need for DMDC to document these assumptions and to collaborate with these offices to ensure a common understanding of these assumptions. 
Although DOD organizations can request data analyses using multiple assumptions, without written documentation other organizations may not be fully aware of the analytical assumptions used by DMDC, and this may lead to miscommunication and, ultimately, to data analyses that are not valid in that they do not report what the user intended. We continue to believe that the assumptions used need to be documented and discussed with other DOD offices as we recommended. Based on DOD’s comments, we modified this recommendation to clarify our intent. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Under Secretary of Defense, Personnel and Readiness; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-5559 or stewartd@gao. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to determine (1) what Department of Defense (DOD) data indicate are the number of reservists mobilized and deployed in support of the Global War on Terrorism (GWOT), and the selected demographic and deployment characteristics of those deployed and (2) whether DOD’s reserve deployment and mobilization data and analyses are reliable. We identified, based on congressional interest and our knowledge of DOD issues, selected demographic and deployment variables to review. We then worked with the Defense Manpower Data Center (DMDC) to identify the data fields within DMDC’s Contingency Tracking System (CTS) that best provided information about the selected demographic and deployment variables we wanted to analyze. 
Although we wanted to analyze the locations to which reservists were deployed and the units with which reservists were deployed, DMDC officials said, and we agreed based on our review of the data, that the data were not reliable enough for those purposes. Our selected variables included the number of deployed reservists who volunteered for at least one deployment; the number of deployed reservists who have served one, two, or three or more deployments; the race and ethnicity of the deployed reservists; the gender of the deployed reservists; the state of residence of the deployed reservists; the number of deployed reservists who were Selected Reserve, Individual Ready Reserve, Standby Reserve, or Retired Reserve; the number of deployed reservists who were citizens at the time of their deployment; the number of days the reservists were deployed; and the top occupational areas for reservists deployed in support of GWOT. To address objective 1, we obtained and analyzed data for September 2001 through June 2006 from DMDC’s CTS. CTS consists of two files—the activation file, which tracks activations and mobilizations, and the deployment file, which tracks deployments. Using CTS data from both files, we analyzed the number of National Guard and Reserve servicemembers mobilized and deployed in support of GWOT, as well as selected demographic and deployment variables, using statistical analysis software. To address objective 2, we performed a data reliability assessment on the data provided by DMDC from CTS’ activation and deployment files. We requested DMDC reports that replicated our analyses and then compared those report results to our analyses, and we reviewed the programming code DMDC used to generate those reports. To assess the reliability of CTS data, we obtained an understanding of the data, the file structure, the sources of the data, and relevant DOD guidance. 
Specifically, we (1) performed electronic testing of the data files for completeness (that is, missing data), out-of-range values, and dates outside of valid time frames; (2) assessed the relationships among data elements (for example, determining whether deployment dates were overlapping since each record in the deployment file is intended to represent one deployment); (3) reviewed existing information about the data and the systems that produced them; (4) interviewed department officials to identify known problems or limitations in the data, as well as to understand the relationship between the two files and how data are received from the services, cleaned (“rebaselined”), and processed by DMDC; and (5) compared “prerebaselined” mobilization data to “postrebaselined” mobilization data to determine the extent to which the data changed as a result of the cleaning effort. When we found discrepancies (for example, overlapping deployment dates), we worked with DMDC to understand the discrepancies. In our interviews with DMDC officials, we discussed the purpose and uses of CTS, the service data rebaselining effort and the internal controls for verifying data analyses, monthly validation of data, and performing data analyses. Similarly, we discussed data collection, processing, and reliability issues as well as service-specific data issues and the rebaselining effort with officials from the Office of the Assistant Secretary of Defense for Reserve Affairs and from each of the reserve components, including the U.S. Army National Guard, the U.S. Air National Guard, the U.S. Army Reserve, the U.S. Navy Reserve, the U.S. Marine Corps Reserve, and the U.S. Air Force Reserve. We also discussed the reliability of the services’ data, the rebaselining effort, and the results of a previous Joint Staff review of the quality of service data within CTS with officials in the Joint Chiefs of Staff Manpower and Personnel office. 
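The first two electronic tests described above can be illustrated with a brief sketch. This is not DMDC's or GAO's actual code; the record layout, field names, and dates are invented for illustration:

```python
from datetime import date

# Illustrative deployment records; the real CTS schema differs.
records = [
    {"ssn": "A1", "start": date(2003, 1, 10), "end": date(2003, 7, 1)},
    {"ssn": "A1", "start": date(2003, 6, 15), "end": date(2004, 1, 5)},  # overlaps the prior span
    {"ssn": "B2", "start": date(2002, 3, 1), "end": None},               # missing end date
]

GWOT_START = date(2001, 9, 11)  # earliest valid date for GWOT-related records

def completeness_errors(recs):
    """Test (1): flag records with missing required dates."""
    return [r for r in recs if r["start"] is None or r["end"] is None]

def range_errors(recs, earliest=GWOT_START):
    """Test (1): flag dates outside the valid time frame."""
    return [r for r in recs if r["start"] and r["start"] < earliest]

def overlap_errors(recs):
    """Test (2): flag overlapping deployments for the same member,
    since each record is intended to represent one deployment."""
    errors = []
    by_member = {}
    for r in recs:
        by_member.setdefault(r["ssn"], []).append(r)
    for spans in by_member.values():
        complete = sorted((s for s in spans if s["start"] and s["end"]),
                          key=lambda s: s["start"])
        for prev, cur in zip(complete, complete[1:]):
            if cur["start"] <= prev["end"]:
                errors.append((prev, cur))
    return errors

print(len(completeness_errors(records)))  # 1
print(len(overlap_errors(records)))       # 1
```

Discrepancies flagged this way would then be referred back to the data provider, as the report describes GAO doing with DMDC.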
Finally, we interviewed officials from the Deployment Health Surveillance Directorate and the Army Medical Surveillance Activity about the quality of the deployment data and how they use the data. In the course of our review, we determined that some data fields were highly unreliable. For example, electronic testing indicated that data on location and reservist unit information were missing in many cases. Based on our conversations with DMDC and our understanding of the data system, we decided not to conduct lower level analyses (for example, analyses of reservists’ assigned units) because the results would be less reliable than aggregate level analyses. Although we are reasonably confident in the reliability of most CTS data fields at the aggregate level, because we could not compare source documentation from each of the services to a sample of DMDC data, we could not estimate precise margins of error. Consequently, we used the data for descriptive purposes, and we did not base any recommendations on the results of our analyses. In addition, we presented only higher level, aggregate data from fields that we determined were sufficiently reliable for our reporting purposes. For these purposes, and presented in this way, the CTS data we use are sufficiently reliable with the following caveat: The Army had not completed its rebaselining effort for mobilization data before the completion of our review, and we could not, therefore, assess the reliability of Army mobilization data to the same extent as those of the other services. However, based on our electronic testing, data comparisons, and interviews with officials, we believe that the data are sufficiently reliable to present as descriptive information. 
To assess the reliability of DMDC’s reports (that is, its own analyses) of CTS data, we compared our independent analyses of National Guard and Reserve servicemembers’ mobilization and deployment statistics with results that DMDC provided from its own analyses of the same data. To pinpoint differences in analytical assumptions, we reviewed the statistical code DMDC used to produce its reports and compared it with our programming code. Through an iterative process, we noted errors in DMDC’s programs and requested changes and reruns of the data. We worked with DMDC to ensure that discrepancies were not caused by differences in our analytical assumptions. Where there were discrepancies, we reached the following consensus on how to address them: Removed the Coast Guard entries from our analyses of the CTS database since, as we state in this report, the Coast Guard Reserve is under the day-to-day control of the Department of Homeland Security rather than DOD. Combined a reservist’s Social Security number with his or her reserve component to create a unique identifier. DMDC officials said they do this because they are unsure where the source of the error is when they find that a Social Security number corresponds with two reserve components for a deployment during approximately the same time period. DOD’s policy, when there is a duplicate Social Security number for more than one reserve component, is to count both transactions. However, the use of duplicate Social Security numbers results in overcounting. Specifically, the June 2006 file had 38 reservists with overlapping mobilizations, 20 reservists with overlapping deployments, and more than 800 deployed reservists who appeared to have legitimately changed components. To compensate for the 58 “errors” where DMDC did not know which mobilization or deployment to count, it double-counted all 58 reservists. Likewise, the 800 deployed reservists who changed reserve components during GWOT were also double-counted. 
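The composite-identifier convention just described can be sketched as follows. The records and field values here are invented for illustration and are not actual CTS data:

```python
from collections import defaultdict

# Each record: (ssn, reserve_component). Values are invented for illustration.
deployments = [
    ("111-22-3333", "Army National Guard"),
    ("111-22-3333", "Army Reserve"),      # same SSN under two components
    ("444-55-6666", "Air Force Reserve"),
]

# Combining SSN with component yields a unique identifier, so a SSN
# appearing under two components produces two distinct identifiers --
# mirroring DOD's policy of counting both transactions, which is also
# why this convention overcounts when the duplicate is an error.
unique_ids = {(ssn, comp) for ssn, comp in deployments}
print(len(unique_ids))  # 3: the duplicated SSN is counted once per component

# Flag SSNs appearing in more than one component for manual review, since
# some are errors (overlapping records) and some are legitimate transfers.
components_by_ssn = defaultdict(set)
for ssn, comp in deployments:
    components_by_ssn[ssn].add(comp)
multi_component = {s for s, comps in components_by_ssn.items() if len(comps) > 1}
print(multi_component)  # {'111-22-3333'}
```

The review step is the part the report emphasizes: without it, the composite key alone cannot distinguish a duplicate-entry error from a reservist who actually changed components.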
Removed reservists from all analyses when their reserve component category is unknown, so that the numeric totals across analyses would be consistent. DMDC officials said that this is an undocumented standard operating procedure. Utilized the reservists’ information for most recent deployment to provide the most current information possible in cases where a reservist deployed more than once. Calculated the length of a reservist’s deployment by including both the day the deployment began and the day on which the deployment ended. Thus, the number of days deployed is inclusive of the beginning and end dates. Combined the race categories for Asian, Native Hawaiian, and Other Pacific Islander because, prior to 2003, the distinction between these two groups was not captured in the data. After clarifying and agreeing on the analytical assumptions, we again reviewed DMDC’s code and compared its results with our own to determine whether and why there were remaining discrepancies. We also requested written documentation of DMDC’s internal control procedures for the CTS data and, when no documentation was available, interviewed knowledgeable officials about existing internal control procedures. Using the framework of standards for internal control for the federal government, we compared the information from those documents and interviews with our numerous, iterative reviews of DMDC’s statistical programs used to generate comparative reports to assess the reliability of DMDC-generated reports from CTS. We determined that the reports DMDC generated for our review were not sufficiently reliable for our reporting purpose. Thus, we completed our own data analyses. We performed our work from December 2005 through August 2006 in accordance with generally accepted government auditing standards. Our analysis of DOD data indicates that most reservists who deployed in support of GWOT through June 30, 2006, were part of the Selected Reserve (see table 5). 
In addition, California, Texas, Pennsylvania, and Florida had the highest numbers of reservists who have deployed in support of GWOT through June 30, 2006 (see table 6). In addition to the contact named above, Cynthia Jackson, Assistant Director; Crystal Bernard; Tina Kirschbaum; Marie A. Mak; Ricardo Marquez; Julie Matta; Lynn Milan; Rebecca Shea; and Cheryl Weissman made key contributions to this report.
|
GAO has previously reported on the Department of Defense's (DOD) ability to track reservists deployed to the theater of operations and made recommendations. Reliable mobilization and deployment data are critical for making decisions about reserve force availability and medical surveillance. Because of broad congressional interest, GAO initiated a review under the Comptroller General's authority to conduct evaluations on his own initiative to determine (1) what DOD data indicate are the number of reservists mobilized and deployed in support of the Global War on Terrorism (GWOT) and the selected demographic and deployment characteristics of those deployed and (2) whether DOD's reserve deployment and mobilization data and analyses are reliable. GAO analyzed data and data analyses from DOD's Contingency Tracking System (CTS) and interviewed agency officials. GAO's analysis of DOD data indicates that more than 531,000 reservists have been mobilized in support of GWOT as of June 30, 2006, and more than 378,000 reservists, or 71 percent of the number mobilized, have been deployed. The number of reservists deployed increased through fiscal year 2003 and remained stable through fiscal year 2005. The majority of reservists have been deployed once. GAO's analysis further indicates that of the more than 378,000 reservists who have deployed in support of GWOT, 81 percent have spent a year or less deployed and 17 percent of reservists have spent more than 1 year but less than 2 years deployed. Of those who deployed, almost 98 percent were U.S. citizens. Since GWOT began, about 78 percent of reservists who were deployed were White, about 14 percent were Black or African American, and almost 90 percent identified themselves as non-Hispanic and 8 percent as Hispanic. Of those who were deployed, 89 percent were male and 11 percent were female. 
There were three variables--volunteer status, location deployed, and unit deployed--required by DOD policy for which the Defense Manpower Data Center (DMDC) could not provide data because the data either did not exist or were not reliable enough for the purposes of GAO's report. GAO found the deployment and mobilization data to be sufficiently reliable for providing descriptive information. However, the mobilization data, some deployment data fields, and DMDC's processes for data analyses need improvement. DMDC and the services have recently taken steps to improve the reliability of mobilization data; however, additional steps are needed to make mobilization data more reliable. DMDC and the services have undertaken a large-scale, challenging effort to replace all previous service-provided mobilization data in DMDC's CTS database with new data from the services, referred to as "rebaselining." To date, the Air Force has certified that it has rebaselined its data and Navy officials say they have validated their personnel files and established a common baseline of data with DMDC. The Army, which has mobilized the largest number of reservists, has not completed its rebaselining effort and has not set a deadline for completion. Also, DOD has not fully addressed other data issues that could affect the accuracy and completeness of the data, such as standardizing the use of key terms and ensuring that the services address data issues identified by DMDC as well as provide data for all required data fields, such as location, to DMDC. Also, because the data analyses DMDC provided had numerous errors, GAO questions the effectiveness of DMDC's verification procedures and other supporting procedures, none of which DMDC has documented. 
Until DOD addresses data issues and DMDC documents the internal control procedures it uses to analyze data and verify its analyses of the data, the information provided to decision makers within Congress and DOD may be unreliable and decision makers will not be in the best position to make informed decisions about reserve force availability and reservists' exposure to health hazards.
|
The Medicare home health care benefit covers skilled nursing, therapy, and related services provided in beneficiaries’ homes. To qualify, a beneficiary must be confined to his or her residence (that is, must be “homebound”); require intermittent skilled nursing, physical therapy, or speech therapy; be under the care of a physician; and be furnished services under a plan of care prescribed and periodically reviewed by a physician. If these coverage criteria are met, Medicare will pay for part-time or intermittent skilled nursing; physical, occupational, and speech therapy; medical social service; and home health aide visits. Only HHAs that have been certified are allowed to bill Medicare. Beneficiaries do not pay any coinsurance or deductibles for these services, and there are no limits on the number of home health care visits they receive as long as they meet the coverage criteria. HCFA, the agency within the Department of Health and Human Services responsible for administering Medicare, uses five regional contractors (which are insurance companies), called regional home health intermediaries (RHHI), to process and pay claims submitted by HHAs and to review or audit their annual cost reports. HHAs are paid their actual costs for delivering services up to statutorily defined limits. During each fiscal year, HHAs receive interim payments based on the projected per visit cost and, in some instances, the projected volume of services for Medicare beneficiaries. At the end of the year, each HHA submits a report on its costs and the services it has provided and the RHHI determines how much Medicare reimbursement the HHA has earned for the year. If the interim payments that the agency received exceed this amount, the HHA must return the overpayment to Medicare. Otherwise, Medicare makes a supplementary payment of the difference between the earned reimbursement and the interim payments. 
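The year-end settlement described above reduces to a simple comparison of interim payments against earned reimbursement. The sketch below illustrates the logic with hypothetical dollar amounts; real settlements also apply the statutory per visit cost limits:

```python
def settle_cost_report(interim_payments, earned_reimbursement):
    """Return (overpayment the HHA must return, supplementary payment from Medicare).

    If interim payments exceeded the reimbursement the HHA earned for the
    year, the excess is an overpayment owed back to Medicare; if they fell
    short, Medicare pays the difference. Figures here are hypothetical.
    """
    if interim_payments > earned_reimbursement:
        return interim_payments - earned_reimbursement, 0.0
    return 0.0, earned_reimbursement - interim_payments

# An HHA received $1.2 million in interim payments but earned only $1.1 million:
owed, supplement = settle_cost_report(1_200_000, 1_100_000)
print(owed, supplement)  # 100000 0.0 -- the HHA owes Medicare $100,000

# Another HHA received $900,000 but earned $950,000:
owed, supplement = settle_cost_report(900_000, 950_000)
print(owed, supplement)  # 0.0 50000 -- Medicare pays a $50,000 supplement
```

Because this comparison happens only at settlement, roughly two years after the fiscal year ends, the report's point about HHAs notifying RHHIs of changed projections during the year is what keeps these differences small.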
HHAs are expected to minimize overpayments and underpayments by notifying RHHIs of changes in projected costs or volume during the year so that their interim payments can be adjusted. Final cost report settlements generally do not occur until 2 years after an HHA’s fiscal year ends. The home health care benefit has been one of the fastest growing components of the Medicare program, increasing from 3.2 percent of total Medicare spending in 1990 to 9 percent in 1997. Medicare’s home health care expenditures rose from $3.7 billion in 1990 to $17.8 billion in 1997. The rapid growth in home health care was primarily driven by legislative and policy changes in coverage. These changes essentially transformed the home health care benefit from one focused on patients needing short-term posthospital care to one that also serves chronic, long-term care patients. The growth in spending has slowed markedly in recent years. Several factors probably contributed to the deceleration, including HCFA’s recent antifraud measures. While spending grew, HCFA’s oversight of HHAs declined. The proportion of home health care claims that HCFA reviewed dropped sharply, from about 12 percent in 1989 to 2 percent in 1995, while the volume of claims about tripled. Yet the need for such review to ensure that Medicare pays only for services that meet its coverage rules has not diminished. In a study of a sample of high-dollar claims that were paid without review, we found that a large proportion of the services did not meet Medicare’s coverage criteria. Operation Restore Trust (ORT), a joint effort by federal and several state agencies to uncover program integrity violations, also found high rates of noncompliance with Medicare’s coverage criteria among the problematic HHAs they investigated. In addition, RHHIs audited cost reports for only about 8 percent of HHAs each year from 1992 through 1996. 
Until recently, the number of Medicare-certified HHAs increased along with the rise in home health care spending—from 5,700 in 1989 to 10,600 at the end of 1997. Last year, we reported that HHAs were granted Medicare certification without adequate assurance that they provided quality care or met Medicare’s conditions of participation. Moreover, once certified, there was little likelihood that a provider would be terminated from the program. Beginning in mid-1997, HHAs that request Medicare certification or change ownership have had to go through an enrollment process designed to screen out some problem providers. The process requires HHAs to identify their principals—that is, anyone with a 5-percent or greater ownership interest—and to indicate whether any of them have ever been excluded from participating in Medicare. HCFA also has proposed requiring all HHAs to reenroll every 3 years, which would entail an independent audit of providers’ records and practices. In the BBA, the Congress strengthened HCFA’s ability to keep potentially problematic providers out of the Medicare program by codifying a $50,000 surety bond requirement and establishing other participation requirements. The law also required HHAs participating in Medicaid to obtain a $50,000 surety bond. The law expanded the enrollment process by requiring HHA owners to furnish HCFA with their Social Security numbers and information regarding the subcontractors of which they have direct or indirect ownership of 5 percent or more. The BBA also provides that an HHA may be excluded if its owner transfers ownership or a controlling interest in the HHA to an immediate family member (or household member) in anticipation of, or following, a conviction, assessment, or exclusion against the owner. Subsequently, HCFA implemented additional changes to further strengthen requirements for HHAs entering the Medicare program and to prevent fraud and abuse. 
For example, the surety bond regulation imposes a capitalization requirement for home health care providers enrolling on or after January 1, 1998. New HHAs are required to have enough operating capital for their first 3 months in business, of which no more than half can be borrowed funds. In another regulation, HCFA requires that an HHA serve at least ten private-pay patients before seeking Medicare certification. This contrasts with the previous requirement that only a single patient had to have been served. The BBA mandated surety bonds for Medicare suppliers of DME, CORFs, and rehabilitation agencies as well. As with HHAs, Medicare spending for these providers has grown rapidly in the past few years. Further, there is general concern that providers are given inadequate oversight and that their bills are insufficiently reviewed. DME suppliers sell or rent covered DME (such as wheelchairs), prosthetics, orthotics, and supplies to Medicare beneficiaries for use in their home. In 1996, there were more than 68,000 Medicare-participating DME suppliers. From 1992 to 1996, spending for DME increased from $3.7 billion to $5.7 billion, an average annual increase of 11 percent. Medicare bases its payment of DME suppliers on a fee schedule. Consequently, how much Medicare should pay for each item is known when it is delivered. Overpayments that should be returned to Medicare arise almost entirely from claims submitted and paid inappropriately. CORFs and rehabilitation agencies both provide rehabilitation services to outpatients. CORFs offer a broad array of services under physician supervision—such as skilled nursing, psychological services, drugs, and medical devices—and must have a physician on staff. In contrast, rehabilitation agencies provide physical therapy and speech pathology services, primarily in nursing facilities, to individuals who are referred by physicians. In 1996, there were 336 CORFs and 2,207 rehabilitation agencies. 
From 1990 to 1996, Medicare’s spending for CORFs grew from $19 million to $122 million, an average annual increase of 36 percent. Spending for rehabilitation agencies increased from $151 million to $457 million, an average growth of 25 percent per year during the same period. Like HHAs, CORFs and rehabilitation agencies are paid on the basis of their costs. Therefore, the actual payment for a service is not known until the cost report is settled, after the end of the fiscal year. This increases the likelihood that overpayments will be made because of unallowable costs or cost-estimation problems during the year. A surety bond is a three-party agreement. It is a written promise made by the bond issuer, usually an insurance company, called a surety, to back up the promise of the purchasing firm to a third party named in the bond. For example, the surety may agree to compensate the third party if the bondholder fails to deliver a product on time or without significant defects. In issuing a bond, the surety signals its confidence that the bondholder will be able to fulfill the promised obligations. In purchasing a bond, the bondholder acknowledges its duty to indemnify—that is, compensate—the surety when a bond is redeemed. Surety bonds entail many different types of guarantees, depending on what the third party requiring the bond wants to accomplish. Three types of surety bonds have been seen as potentially appropriate for HHAs—a financial guarantee bond, an antifraud bond, and a compliance bond. However, while labels are often applied to different types of bonds, the types have no strict definitions. The specific language in each bond describes the guarantee, what constitutes default, how a default is demonstrated, and what penalty or compensation ensues. The bond types can be described in general terms. 
A financial guarantee bond promises that the surety will pay the third party requiring the bond financial obligations not paid by the bondholder up to the face value of the bond. Antifraud bonds generally provide the third party protection in the event that it incurs losses from the bondholder’s fraudulent or abusive actions. The third party delineates what constitutes fraud or abuse for purposes of bond default. A compliance bond generally guarantees that the bondholder will conform to the terms of the contract with the third party requiring the bond and that the surety will pay the third party if the bondholder does not meet the contract’s terms. This bond also can be designed to guarantee that the bondholder complies with specific standards, such as having a required license or conforming to a set of regulations. Just as surety bonds vary depending on what is being guaranteed, so do the criteria that sureties use in assessing or underwriting a prospective bond purchaser. There are, however, general underwriting rules. Sureties traditionally examine what they call a firm’s “3 Cs”—character, capacity (or proven ability to perform), and capital. The emphasis that the surety places on each of these three elements varies with the guarantee incorporated into the bond. For a bond that guarantees that the surety will pay financial obligations, for example, sureties emphasize the financial resources of the bond purchaser and its principals. Sureties emphasize a bond purchaser’s character or capacity in determining whether to provide a bond that guarantees that the surety will pay when the bonded firm acts dishonestly, although they are still interested in the purchaser’s financial situation. 
To underwrite or assess whether to provide a bond, sureties examine, at a minimum, information about the firm—such as an organizational plan, length of time in business, financial statements for the current year and the previous 2 years, resumes of key individuals, current sales and revenues, and a business plan. Sureties generally review the credit history of a firm and, if it is privately held, of its principals. This scrutiny is one of the benefits of requiring a surety bond. The cost of obtaining a bond is the fee or premium charged plus the cost of providing any required collateral—cash or assets that can be turned into cash. The greater the surety’s risk of having to compensate the third party without being able to recover that compensation from the bondholder, the higher the cost. The fee is usually 1 percent to 2 percent of the face value for most commercial bonds. Collateral, which ensures the compensation of the surety, is typically required when there is a greater risk of a loss because of the type of guarantee provided by the bond or the capacity of the firm to repay the surety. For example, sureties may require collateral of firms that do not have assets worth considerably more than the face value of the bond. The collateral that sureties accept may be a deed of trust on real property, a Treasury bond, or an irrevocable letter of credit from a bank. The costs to the bondholder of providing collateral vary inversely with the costs to the surety for liquidating it. A property deed may be the least costly for the bondholder to provide, followed by a Treasury bond, and then an irrevocable letter of credit, which generally requires the payment of a fee to the financial institution. The latter two options require obligating cash from operating capital. These are the most secure options for the surety, however, since they can be liquidated with minimal costs. 
For certain firms seeking a bond, a firm’s principals may have to personally guarantee that they will repay the bond issuer for any losses. Such personal indemnity is generally required for smaller, privately held firms but not for most nonprofit firms or those that are publicly held. Surety industry representatives indicate that personal indemnity provides a measure of the willingness of the principals who benefit from the firm to stand behind their company. The ability to obtain a surety bond varies with the bond’s terms and the characteristics of the firm. The surety industry maintains that the narrower the definition of what a bond guarantees, the easier it is for a firm to obtain the bond. For example, obtaining an antifraud bond would be easier if it required payment of the penalty only in cases involving criminal fraud rather than any type of abuse. Large, financially healthy firms that have been in business a long time generally have the least difficulty obtaining any surety bond. Smaller privately held firms, new or inexperienced businesses, firms that have filed for bankruptcy or have credit problems, and those with little credit have more difficulty obtaining a bond. Those that have defaulted on a prior bond are likely to face the most severe scrutiny. One surety industry representative indicated that firms that default on one bond are unlikely to be able to obtain another. Anyone required by federal statute or regulation to furnish a surety bond may substitute a U.S. bond, Treasury note, or other federal public debt obligation of equal value. According to the surety industry, however, this option is taken only by large, well-financed entities. The Florida Medicaid surety bond requirement has often been cited as an important precursor to Medicare’s surety bond requirement. However, the effect of Florida Medicaid’s program integrity measures has few implications for Medicare’s imposition of a surety bond requirement. 
At the same time it required the surety bond, Florida instituted program integrity measures more stringent than Medicare’s, such as a criminal background check. The state also targets the surety bond requirement to new and problem providers. Florida introduced several new measures to combat fraud and abuse in its Medicaid program in December 1995. As a result of these measures, HHAs and other noninstitutional providers in Florida are now subject to closer scrutiny before they can participate in Medicaid, and home health care coverage criteria have been tightened. One of Florida’s new program participation requirements is that certain agencies purchase a 1-year surety bond. The surety bond requirement applies to new HHAs, those in the program for less than a year when they reenroll, and problem agencies. Each must obtain a 1-year, $50,000 surety bond. Florida officials indicate that their primary reason for the surety bond requirement is that, in underwriting a bond, surety companies check the capacity and financial ability of providers to operate their business. They consider such review to be an effective and administratively efficient screening tool to keep unqualified providers from participating in the Medicaid program. Florida officials told us that the screening associated with obtaining a surety bond is so important that they no longer allow providers to substitute a $50,000 letter of credit for the surety bond. The required surety bond is a guarantee that the bondholder’s principals, agents, and employees will comply with Florida’s Medicaid statutes, regulations, and bulletins and will perform all obligations faithfully and honestly. Since the surety bond requirement was implemented, no HHAs have had claims made against their bonds. In addition to the surety bond requirement, Florida’s Medicaid reforms included several policy changes. A new agreement was implemented for all noninstitutional providers. 
They are required to pay for a criminal background check for each principal (owners of 5 percent or more, officers, and directors) by the state Department of Law Enforcement. Providers also must allow state auditors immediate access to their premises and records. Other measures the Florida Medicaid program implemented in its campaign against fraud and abuse include new computerized claims edits and other types of claims review to identify inappropriate billings. In addition, new constraints on the coverage of home health care services were imposed, including prior approval requirements for extended periods of service. More than one-fourth of HHAs that participated in Florida’s Medicaid program when the program integrity measures were implemented are reported to have left Medicaid. This estimate is based on an analysis of Medicaid provider numbers. Agencies with no provider number after the bond requirement effective date were counted as leaving the program. However, HHAs may stop using their provider numbers for reasons other than leaving the program or going out of business. Some, for example, use a new number obtained because they merged with another agency or were sold. We were able to locate seven of the nine largest HHAs that the state reported as having left the Medicaid program in 1996. All seven agencies were still providing Medicaid-covered home health care services. We did not assess what proportion of smaller agencies reported as leaving actually did so. The departure of HHAs from Florida’s Medicaid program cannot be attributed solely to the surety bond requirement. Bonds are required only for new or problem providers, so the requirement does not apply to most agencies that had been billing Medicaid. Most of the HHAs that left Medicaid would not have needed to obtain a bond. Florida’s governor maintains that the reduction in the number of Medicaid-participating HHAs has not affected patients’ access to care. 
Closures, in fact, are not a good measure of access because it is possible for one agency to quickly absorb the staff and patients of a closing HHA. We did not identify any systematic evaluations of the effect of the closures, however. HCFA, concerned about increases in overpayments to HHAs, structured the bond as a financial guarantee that agencies’ Medicare overpayments would be repaid, and it raised the required amount of the bond for larger agencies above the $50,000 specified by the BBA. Larger HHAs participating in both Medicare and Medicaid were required to obtain two separate surety bonds: one bond for Medicare valued at 15 percent of Medicare revenues and one for Medicaid valued at 15 percent of Medicaid revenues. The specification of the bond requirements and the anticipated costs raised industry concern about their affordability and availability. HCFA’s requirements, however, do increase the likelihood that bonds will be redeemed and, consequently, may increase the scrutiny of HHAs by surety companies and the proportion of agencies having to provide collateral. The requirements also provide an incentive for HHAs to repay any overpayments so they can continue to purchase bonds. HCFA specified a financial guarantee bond in its regulation and raised the value of the bond above the legislated minimum for larger agencies because of its concern about the recovery of overpayments to HHAs. HCFA believes that this type of bond will reduce Medicare’s risk of unrecovered overpayments. This risk, however, is currently small. Uncollected overpayments represented less than 1 percent of Medicare’s 1996 spending for home health care services. Although overpayments are expected to rise in the near term, longer-term changes in Medicare policies will probably reduce their likelihood in the future. 
The higher bond amount for larger agencies may correspond to the level of payments they receive, but the data HCFA used to establish the higher bond requirement were unrelated to HHA size. HHAs accounted for about one-fourth of all Medicare overpayments in 1996, and overpayments as a percentage of total HHA payments have been rising. In 1993, HHA overpayments were 4 percent of total program payments; by 1996, this had grown to 6 percent. HCFA estimates that about 60 percent of HHAs had overpayments in 1996. Most overpayments are recovered, however. HCFA data indicate that unrecovered overpayments in 1996 were less than 1 percent of Medicare’s HHA payments, although even this lower percentage overstates the problem. HCFA counts as unrecovered overpayments some money that is not actually an overpayment. Further, some of the actual overpayments may be collected in the future. HCFA and RHHI officials with whom we spoke expect to find that overpayments in 1998 were higher than in previous years because of the new limits on payments introduced by the BBA. They estimate that as many as 70 to 80 percent of HHAs may have overpayments for 1998. They also expect a larger proportion of overpayments to be uncollectible because more HHAs will leave Medicare still owing overpayments. Overpayments are problematic when HHAs terminate because there is no readily available way to collect them. HCFA reported that from October 1997 through September 1998, 1,155 HHAs quit serving Medicare beneficiaries and terminated from the program. These HHAs represent a larger proportion of terminated Medicare-certified agencies than in previous years. The recent spate of HHA closures and predicted increase in overpayments stem primarily from changes in Medicare’s payment, participation, and coverage policies. Once these policies are fully implemented, RHHIs and agencies should be better able to estimate allowable costs, thus minimizing overpayments. 
HCFA also has a BBA mandate to implement a prospective payment system (PPS) for HHAs. Under PPS, HHAs will know the payment at the time of service because they will receive a fixed, predetermined amount per unit of service, further reducing the potential for overpayment. Once PPS is in place, overpayments should occur only when bills are submitted and paid for individuals who are not eligible for Medicare’s benefits or for noncovered services. HCFA’s requirement that large agencies provide a bond equal to 15 percent of their Medicare revenues increases the cost of a bond considerably for some HHAs. HCFA officials argue that when large agencies fail to return overpayments, the potential loss to Medicare is greater than when smaller agencies do so. HCFA has not undertaken any analysis to determine the relationship between unrecovered overpayments and HHA size. In fact, larger agencies might be more likely to return them because they have more resources to manage the repayments and a greater incentive to remain in the Medicare program. Some information on the cost of and access to surety bonds is available. Many HHAs shopped for and obtained bonds before the regulation was postponed to February 15, 1999, but many others did not. In addition, HCFA made a change to the required terms for the bonds on June 1, 1998, that affected surety companies’ potential liability and their willingness to provide bonds to HHAs. The cost of obtaining a surety bond is the fee or premium charged plus any collateral that must be supplied. For large HHAs, the major cost of a surety bond is the fee, because they are required to have a bond equaling 15 percent of program revenues. They are less likely to have to provide collateral and the fee they pay may be a lower percentage of the bond’s face value than the fee for smaller HHAs. Small HHAs are required to obtain the minimum $50,000 bond but are more likely to have to put up collateral. 
The range of fees for HHA bonds is comparable to other commercial surety bonds. The surety underwriting association reports that fees for HHA financial guarantee bonds generally range between 1 and 2 percent of their face value. Rates may be higher or lower depending on the HHA’s financial situation. Some nonprofit and privately held for-profit HHAs that we interviewed had been quoted fees higher than those cited by the surety industry—up to 6 percent. Some of these quotes, however, were made before the changes in the regulation that reduced sureties’ risk. One surety that underwrote about 13 percent of the HHA surety bonds sold before the postponement of the requirement told us that its fees ranged from 0.5 to 3 percent of the face value of the bonds. It indicated that having a written business plan describing how the agency would respond to the new payment rates created by the BBA, audited financial statements, positive cash management history, and rigorous record keeping policies and practices reduced the HHAs’ fees. This surety charged its lowest fees to nonprofit HHAs supported directly or indirectly by public or private foundations. Its highest fees were for providers new to the home health care business. We found in looking at HCFA’s 1996 data that between 6,000 and 7,000 HHAs would be required to obtain a bond of more than $50,000 to participate in Medicare (see table 1). Assuming that sureties charge fees of 2 percent of a bond’s face value, fees would begin at $1,000, and about 2,400 HHAs would have to pay fees between $3,000 and $7,500 to obtain a bond. At that rate, fees could exceed $60,000 for large agencies, although sureties might charge them a lower rate. Larger agencies choosing to participate in Medicaid would pay additional fees to obtain a second bond. HCFA specifically exempted smaller agencies from this requirement, but the majority of HHAs will have to obtain bonds for both programs. 
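The fee arithmetic above can be sketched directly. The bond-amount rule is the one HCFA specified (the greater of $50,000 or 15 percent of Medicare revenues); the 2 percent fee rate is an assumption at the top of the range sureties reported:

```python
def required_bond(medicare_revenue):
    """Face value of the Medicare bond under HCFA's rule: the greater of
    $50,000 or 15 percent of Medicare revenues (integer dollars)."""
    return max(50_000, medicare_revenue * 15 // 100)

def bond_fee(face_value, fee_rate_pct=2):
    """Surety's premium; a 2 percent rate is assumed here, the top of the
    1-to-2 percent range cited for most commercial bonds."""
    return face_value * fee_rate_pct // 100

# An HHA with $200,000 in Medicare revenues needs only the minimum bond:
print(bond_fee(required_bond(200_000)))    # → 1000
# An HHA with $2.5 million in revenues needs a $375,000 bond:
print(bond_fee(required_bond(2_500_000)))  # → 7500
```

At a 2 percent rate, the $3,000-to-$7,500 fee band cited above corresponds to agencies with roughly $1 million to $2.5 million in Medicare revenues.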
The surety bond fee is not the only cost of obtaining a bond. Having to provide collateral raises the cost to HHAs. Sureties report requiring collateral because HCFA’s requirement that bonds be a financial guarantee increases the likelihood of claims. They want collateral from HHAs that pose greater risks of not being able to repay a surety if the bond is redeemed. Requirements for collateral vary among sureties, but generally they require collateral of privately held HHAs, particularly small and medium-sized agencies. The surety cited above required collateral of new HHAs unless they were financially strong, and it required personal indemnity of the principals of all privately held firms. HCFA reported that about 40 percent of HHAs obtained Medicare surety bonds before the June 1998 delay in the implementation of the requirement (see table 2). Provider-based HHAs (those that are part of a hospital or skilled nursing facility) of any size were more likely to secure a surety bond than other types of HHAs; freestanding HHAs not part of a chain were least likely. It is impossible, however, to determine the proportion of HHAs that could have secured bonds. Surety and home health care industry representatives told us that some providers postponed the purchase of surety bonds, waiting for actual implementation of the requirement. They also said that some owners did not provide the collateral and personal indemnity that would have made it possible for them to obtain a bond. The timing of the surety bond requirement may have affected HHA proprietors who were particularly reluctant to use their personal assets as collateral and to provide personal indemnity because of the uncertainty created by the substantial changes in Medicare’s payment policy. It is not possible to determine whether they could have purchased bonds or ultimately will. 
A more narrowly defined antifraud bond, one triggered only by the failure to return overpayments received fraudulently, would probably be easier and less costly to obtain than a financial guarantee bond. Because the specified acts of fraud would not occur frequently, the risk of a claim against a bond would be low and sureties would provide bonds more readily. HHAs would be unlikely to have to pledge collateral if they could demonstrate good character and ties to the community, although personal indemnity from principals would possibly still be required of privately held HHAs. However, HCFA’s use of a financial guarantee bond for the return of overpayments, regardless of their source, will ensure more scrutiny and greater benefits to Medicare than other types of bonds. In underwriting this type of bond, a surety will be likely to pay particular attention to financial statements, business practices, and overpayment history. This scrutiny will provide the Medicare program with several benefits. Proprietors who do not have relevant business experience will be deterred from entering the program. Existing Medicare-certified HHAs will be examined as to business soundness. HHAs with overpayments that do not make an effort to repay them will be unlikely to obtain a subsequent surety bond and will be out of the Medicare business. And, generally, all providers will be deterred from incurring overpayments and will have an incentive to repay any that are discovered. Screening by a surety appears to be most useful for new agencies. The rapid increase in the number of HHAs entering the Medicare program with little scrutiny also makes requiring surety bonds a useful mechanism for screening HHAs already in the program. However, the value of this scrutiny would probably diminish with an HHA’s continued participation in Medicare. Little may be gained from repetitive scrutiny of established, mature HHAs. 
The option to substitute a Treasury note or other federal public debt obligation for a surety bond will allow well-financed firms to avoid scrutiny. Whether this option is problematic depends on the purpose of the surety bond. If its primary purpose is to guarantee payment, then this causes no concern. If, however, the primary purpose is to increase scrutiny, then the ability to substitute may undermine that objective. Identifying the potential effect and cost of a compliance bond would be difficult because the terms of such a bond can vary widely. A bond like that required by Florida Medicaid, which requires compliance with all program rules and regulations, could effectively create a monetary penalty for violating Medicare’s conditions of participation. Currently, HHAs are required to comply but have the opportunity to address and correct deficiencies before losing their right to participate in the program. However, in underwriting a compliance bond, sureties might choose to avoid agencies that have been noted for violations in the past. Even if a bond were restricted to more serious deficiencies, sureties might be more reluctant to provide one. Sureties are less experienced in assessing compliance with Medicare rules and regulations than financial capacity, so it would be more difficult for them to predict which HHAs represent greater risk. Representatives of the surety industry acknowledge that no surety bond can screen out all people who want to take unfair advantage of the Medicare program. Some individuals who want to deceive the surety will still be able to obtain surety bonds. In addition, individuals who have no history of criminal action but who intend to defraud or abuse the program once enrolled could obtain bonds. Further, the substitution of a Treasury note, U.S. bond, or other federal public debt obligation, as allowed by Treasury regulation, eliminates any review of an HHA’s suitability or its history of performance. 
HCFA intends to propose surety bond requirements for DME suppliers, CORFs, and rehabilitation agencies that will parallel those for HHAs—a financial guarantee bond with a face value equal to the greater of $50,000 or 15 percent of Medicare payments. As with HHAs, Medicare will benefit from sureties’ review of these providers and the incentive created to return overpayments. There are numerous small DME providers, making it difficult for Medicare and other payers to monitor them. Historically, there has been general concern about DME suppliers’ business and billing practices. ORT found that a substantial number of suppliers billed Medicare for DME either not furnished or not provided as billed. The scrutiny provided by sureties will offer a review of their business practices and financial qualifications. For CORFs and rehabilitation agencies, the likelihood of overpayments is higher than for HHAs. HCFA estimates that in 1996, uncollected overpayments equaled 10.7 percent of the $122 million total Medicare spending for CORFs and 6.2 percent of the $457 million for rehabilitation agencies—significantly greater than the less than 1 percent estimated for HHAs. These providers’ access to and costs for surety bonds will also be comparable to those of HHAs. Larger firms or firms with more assets and other financial resources will probably have little difficulty obtaining a bond. Small firms and privately held ones with few resources will be more likely to have to provide collateral and personal indemnity to obtain one. Most of the smaller providers have limited revenues from Medicare; we estimate from HCFA data that between 74 and 97 percent would require a $50,000 surety bond (see table 3). Firms that own buildings or equipment will probably not have to pledge additional collateral if they have sufficient equity. 
Since many DME suppliers receive very limited Medicare revenue, they may be more likely to cease participation in the program if they view the surety bond requirement as too costly. The average DME supplier receives about one-twentieth of the Medicare revenue that the average HHA does. The effect on beneficiaries’ access may not be significant, however, given that DME suppliers currently number more than 68,000. While CORFs and rehabilitation agencies receive more Medicare revenue on average than DME suppliers, some may find the costs of obtaining a surety bond a barrier to Medicare participation. Access for beneficiaries may not be compromised significantly since other providers offer alternative sources of therapy. The Congress mandated surety bonds for HHAs because of concern about the growth in the home health care benefit and the lack of adequate oversight. HCFA implemented the requirement to ensure that it could recover Medicare overpayments made to HHAs. In underwriting the bonds, sureties will evaluate HHAs entering or continuing in the program by examining financial stability and business practices, which may raise the standard for Medicare participation. This scrutiny can help address the congressional concern. Specifying the terms of a bond as HCFA did will provide incentives for HHAs to return overpayments. Thus, the surety bond requirement can also achieve HCFA’s objectives. We believe that HCFA made a prudent choice in specifying the surety bond as a financial guarantee. A financial guarantee surety bond will raise the standard for HHAs entering the Medicare program and will help ensure that Medicare is protected against unrecovered overpayments. However, we believe that HCFA’s decision to require that larger agencies obtain bonds equal to 15 percent of their Medicare revenues may be unnecessarily burdensome for several reasons. 
First, this standard imposes a greater burden on large HHAs without a demonstrated commensurately greater benefit. Second, the home health care industry’s history of unrecovered overpayments does not warrant this requirement, even in the face of growing overpayments. Instead, we believe that a bond in the amount of $50,000 balances the benefit to Medicare of increased scrutiny and recovery of overpayments with the burden on participating agencies. Requiring a surety bond may effectively screen HHAs to determine whether they are reasonably organized entities, follow sound business practices, and have some financial stability. Such screening is most useful for new agencies. Given the considerable increase in recent years in the number of HHAs and the lack of scrutiny as these organizations entered the program, screening all existing HHAs is also useful. However, little may be gained from continued screening of established mature agencies. For such HHAs, the underwriting process is likely to be sensitive only to significant changes in financial stability. We also believe that requiring HHAs to obtain separate surety bonds for Medicare and Medicaid may be excessive. Even though HCFA exempted small agencies from obtaining two bonds, the majority of HHAs are required to purchase two bonds. This entails two fees and, in many cases, pledging collateral for two bonds. However, the level of scrutiny by the surety will be similar regardless of whether one or two bonds are needed. Requiring one bond for the two programs diminishes the financial protection but not HHAs’ incentives to repay overpayments. This is because an HHA that defaults on its bond to either Medicare or Medicaid is unlikely to obtain a bond in the future. Allowing HHAs to substitute a Treasury note for the surety bond makes sense when the primary objective of the requirement is to increase HCFA’s ability to recoup some unrecovered overpayments. 
However, this substitution undermines the objectives of requiring Medicare providers to submit to outside scrutiny and giving them strong incentives to return all overpayments. If congressional intent is to screen HHAs, the option of substituting a Treasury note does not afford that scrutiny. We recommend that to implement BBA’s surety bond requirement for HHAs, the HCFA Administrator revise the present regulation so that all HHAs obtain one financial guarantee surety bond in the amount of $50,000 for the guaranteed return of overpayments for both Medicare and Medicaid. With respect to the surety bond requirements that we are recommending, the Congress may wish to consider exempting from a surety bond requirement HHAs that have demonstrated fiscal responsibility—for example, those that have maintained a bond for a specified period of time and have returned any overpayments—and eliminating the option for HHAs of substituting a Treasury note, U.S. bond, or other federal public debt obligation for a surety bond. In written comments on a draft of this report, HCFA agreed with our findings, conclusions, and recommendations. The agency also agreed that the Congress should consider eventually exempting from the surety bond requirement HHAs that have demonstrated fiscal responsibility and eliminating the option for HHAs to submit federal public debt obligations in lieu of a surety bond. HCFA provided technical comments that we incorporated into the final report. We also obtained written comments from Florida Medicaid officials on the section of the report pertaining to the state’s program integrity efforts. They concurred with our findings and conclusions, and their technical comment was incorporated into the final report. Surety and home health care industry representatives reviewed a draft of this report. Their technical comments are included in the final report. 
The National Association of Surety Bond Producers and the Surety Association of America represented the surety industry. They expressed concern about the risk to the surety industry of writing $50,000 financial guarantee bonds for HHAs and asserted that a fraud and abuse bond would be more appropriate because it would provide the desired level of scrutiny of HHAs and be available to more of them. They also thought that a $50,000 bond would be too high for some HHAs. They suggested basing the amount of bonds on a percentage of Medicare revenues with a dollar upper limit. The surety industry representatives also believed that limiting the time during which HHAs must have surety bonds after demonstrating fiscal responsibility is not appropriate for several reasons. First, they maintained that the screening process remains important over time because sureties monitor changes in management and business practices, as well as in financial status, that may indicate problems. Second, they believed that if the requirement is limited, the cost of issuing bonds will go up as the group of HHAs purchasing bonds gets smaller. They expressed general concern about the attractiveness of this line of business to the surety industry if our recommendations are adopted. We believe that a $50,000 financial guarantee bond appropriately balances the costs to HHAs in obtaining a bond with protection for the Medicare program in the form of scrutiny and incentives to repay overpayments. We also believe that a financial guarantee bond will ensure more scrutiny and greater benefit to Medicare than other types of bonds. Further, after an HHA has demonstrated its commitment to repay or avoid overpayments, we believe that the value of the bond to the Medicare program diminishes substantially. 
The home health industry representatives who reviewed the report were from the American Association of Homes and Services for the Aging, the American Federation of HHAs, the American Hospital Association, the Home Care Association of America, the Home Health Services and Staffing Association, the National Association of Home Care, and the Visiting Nurses Association of America. Most of these organizations supported limiting the requirement to one $50,000 surety bond for both Medicare and Medicaid. They were concerned, however, that small HHAs might find the bond requirement burdensome and, given the payment changes implemented in 1998, might have to leave the Medicare program. The home health care industry representatives agreed that HHAs with “good track records” should be exempt from any surety bond requirement but thought that this exemption should be immediate. One representative thought that the Florida Medicaid program’s experience with surety bonds may be more relevant to Medicare than we believe it to be. It was also suggested that other mechanisms within the Medicare program could accomplish the screening function of a surety bond and that these options should be explored. The home health care industry representatives asserted that compliance bonds and antifraud bonds are more appropriate for the home health care industry than a financial guarantee bond. As noted earlier, we believe that a flat bond amount of $50,000 balances the concern of the industry with needed additional protections for the Medicare program. We believe that it is appropriate to require all HHAs to obtain a bond initially because this would ensure a level of scrutiny across all HHAs and because developing criteria to determine who should be exempt would be challenging. While other options could be pursued to screen HHAs, we believe that a financial guarantee bond will ensure more scrutiny and greater benefit to Medicare than other types of bonds. 
The surety industry and one home health care representative expressed concern about the timing of the surety bond requirement. Since the regulation was suspended, it is not clear when HHAs will have to obtain a surety bond or the amount of time it will need to cover. We agree that these details could affect future bond terms and the availability and cost of bonds. As agreed with your offices, unless you release the report’s contents earlier, we plan no further distribution for 30 days. We will then make copies available to other congressional committees and Members of the Congress with an interest in these matters, the Secretary of Health and Human Services, the Administrator of HCFA, and others upon request. If you or your staff have any questions, please call me on (202) 512-6806 or William J. Scanlon, Director, Health Financing and Systems Issues, at (202) 512-7114. Major contributors to this report are Sally Kaplan and Shari Sitron. To examine the surety bond issue, we reviewed our earlier extensive work on home health care and studied the regulation implementing the surety bond requirement in the Balanced Budget Act of 1997 (BBA) and related revisions and program memoranda, Department of Health and Human Services Office of Inspector General reports, and congressional hearing testimony. We conducted interviews with Health Care Financing Administration (HCFA) staff to determine the history and decision making process that resulted in the surety bond regulation. We also interviewed staff from the Florida Medicaid program and from three regional home health intermediaries (RHHI) who have responsibility for claims processing and medical review and cost report review and audit for almost 80 percent of the Medicare home health agencies (HHA). 
We interviewed officials with the Small Business Administration (SBA) and representatives of both the home health care and surety bond industries, including representatives of the trade associations for surety underwriters and surety producers, 4 sureties, 4 national home health care trade associations, 5 state home health care trade associations, and owners or operators of 44 HHAs from 13 states. HCFA provided us with data on comprehensive outpatient rehabilitation facilities (CORF) and rehabilitation agencies. These data came from systems HCFA uses to manage the Medicare program. Florida Medicaid provided us with a list of HHAs that had been in the program for 18 months or longer and dropped out of the program in 1996, taken from data systems used to manage the program. We conducted our work from April 1998 to November 1998 in accordance with generally accepted government auditing standards. SBA can guarantee surety bonds for construction contracts worth up to $1.25 million for small and emerging contractors who cannot obtain surety bonds through regular commercial channels. For surety bonds issued under two separate programs, SBA assumes a predetermined percentage of loss and reimburses the surety up to that amount if a contractor defaults. To be eligible for the SBA programs, a contractor must qualify as a small business (for example, have annual receipts for the previous 3 fiscal years of no more than $5 million) and meet the surety’s bonding qualification criteria. The information generally required by sureties includes an organization chart, current financial statements prepared by an accountant, financial statements for the previous 2 years, resumes of key people, a record of contract performance, the status of work in progress, and a business plan. The contractor pays the surety company’s fee for the bond, which cannot exceed the level approved by the appropriate state regulatory body. 
Both the contractor and the surety pay SBA a fee for each bond: the contractor pays $6 per $1,000 of the contract amount, and the surety pays 20 percent of the amount paid for the bond. These fees go into a fund used to pay claims on defaulted bonds. In the Prior Approval program, SBA evaluates each bond application package to determine that the applicant is qualified and that the risk the agency will assume is reasonable before issuing a guarantee to the surety. Under this program, SBA guarantees sureties 90 percent of losses on bonds up to $100,000 and on bonds to socially and economically disadvantaged contractors, and guarantees 80 percent of losses on all other bonds. Generally, contractors bonded under the SBA Prior Approval program are less experienced than contractors bonded under the Preferred Surety Bond (PSB) program. The PSB program is currently restricted to 14 sureties that are not permitted to participate in the Prior Approval program. The PSB program does not require SBA’s individual approval of bond applications but guarantees that SBA will pay 70 percent of surety losses if the contractor defaults. This program is for more experienced contractors that demonstrate growth potential and that are expected to be able to obtain surety bonds without an SBA guarantee in about 3 years. The firms in this program are usually larger than those in the Prior Approval program. A representative of SBA told us that HHAs would not be able to participate in its surety bond guarantee programs unless the definition of eligible entities were changed by law.

Medicare Home Health Benefit: Impact of Interim Payment System and Agency Closures on Access to Services (GAO/HEHS-98-238, Sept. 9, 1998).
Medicare: Interim Payment System for Home Health Agencies (GAO/T-HEHS-98-234, Aug. 6, 1998).
Medicare Home Health Benefit: Congressional and HCFA Actions Begin to Address Chronic Oversight Weaknesses (GAO/T-HEHS-98-117, Mar. 19, 1998).
Medicare: Improper Activities by Med-Delta Home Health (GAO/T-OSI-98-6, Mar. 19, 1998, and GAO/OSI-98-5, Mar. 12, 1998).
Medicare Home Health: Success of Balanced Budget Act Cost Controls Depends on Effective and Timely Implementation (GAO/T-HEHS-98-41, Oct. 29, 1997).
Medicare Home Health Agencies: Certification Process Ineffective in Excluding Problem Agencies (GAO/HEHS-98-29, Dec. 16, 1997, and GAO/T-HEHS-97-180, July 28, 1997).
Medicare: Need to Hold Home Health Agencies More Accountable for Inappropriate Billings (GAO/HEHS-97-108, June 13, 1997).
Medicare: Home Health Cost Growth and Administration’s Proposal for Prospective Payment (GAO/T-HEHS-97-92, Mar. 5, 1997).
Medicare Post Acute Care: Home Health and Skilled Nursing Facility Cost Growth and Proposals for Prospective Payment (GAO/T-HEHS-97-90, Mar. 4, 1997).
Medicare: Home Health Utilization Expands While Program Controls Deteriorate (GAO/HEHS-96-16, Mar. 27, 1996).
Medicare: Excessive Payments for Medical Supplies Continue Despite Improvements (GAO/HEHS-95-171, Aug. 8, 1995).
Medicare: Allegations Against ABC Home Health Care (GAO/OSI-95-17, July 19, 1995).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. 
A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO evaluated the surety bond requirements for home health agencies (HHA) participating in Medicare, focusing on: (1) analyzing the key features of surety bonds that affect their costs and effects; (2) examining the Florida Medicaid program's experience with a surety bond requirement for HHAs and its relevance to the Medicare surety bond requirement; (3) reviewing the rationale for the surety bond requirements the Health Care Financing Administration (HCFA) selected, the cost and availability of bonds, the benefits for Medicare, and the implications of substituting a government note for a surety bond as set forth in a Department of the Treasury regulation; and (4) drawing implications from the implementation of the HHA surety bond requirement for implementing a similar surety bond provision for durable medical equipment (DME) suppliers, comprehensive outpatient rehabilitation facilities (CORF), and rehabilitation agencies. GAO noted that: (1) a surety bond is a three-party agreement in which a company, known as a surety, agrees to compensate the bondholder if the bond purchaser fails to keep a specified promise; (2) the terms of the bond determine the bond's cost and the amount of scrutiny the purchaser faces from the surety company; (3) when the terms of bonds increase the risk of default, more firms have difficulty purchasing them; (4) the likelihood that a firm will be unable to repay a surety increases the fees charged and collateral required, or leads to the surety's unwillingness to sell it a bond; (5) Florida Medicaid's experience offers few insights into the potential effect of Medicare's surety bonds because the state implemented its surety bond requirement selectively, for new and problem HHAs, in combination with several other program integrity measures; (6) after implementation, Florida officials reported that about one-quarter of the state's Medicaid-participating HHAs had left the program; however, this exodus was not caused 
primarily by the surety bond requirement; (7) HCFA requires a surety bond guaranteeing HHAs' repayment of Medicare overpayments, and it has set the minimum level of the bond as the greater of $50,000 or 15 percent of an agency's Medicare revenues out of concern that about 60 percent of HHAs had overpayments in 1996, amounting to about 6 percent of Medicare's HHA spending, and that, in its opinion, overpayments would increase in the future; (8) yet, HCFA's experience shows that most overpayments are returned, so that the net unrecovered overpayments were less than 1 percent of Medicare's home health care expenditures in 1996; (9) HCFA's implementing regulation, by requiring a bond guaranteeing the return of overpayments made for any reason rather than only those attributable to acts of fraud or dishonesty, increases the risk of default; (10) sureties' scrutiny, which focuses primarily on an agency's business practices and financial status, is probably useful for screening new HHAs; (11) a Treasury regulation that allows the substitution of a government note for any federally required surety bond may undermine the purpose of the bond because HHAs could avoid surety scrutiny; (12) the Balanced Budget Act also requires that DME suppliers, CORFs, and rehabilitation agencies obtain a surety bond valued at a minimum of $50,000; and (13) Medicare will benefit from greater scrutiny of these organizations and their stronger incentives to avoid overpayments.
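The minimum bond level HCFA set, noted in point (7) above, reduces to simple arithmetic. The sketch below is a minimal illustration of that rule; the function name and the sample revenue figures are hypothetical, not drawn from the report:

```python
def minimum_bond(medicare_revenues):
    """HCFA's minimum surety bond level for an HHA: the greater of
    $50,000 or 15 percent of the agency's annual Medicare revenues.
    (Illustrative only; revenue figures below are hypothetical.)"""
    return max(50_000, 0.15 * medicare_revenues)

# A small agency with $200,000 in Medicare revenues hits the $50,000 floor.
print(minimum_bond(200_000))    # 50000
# A larger agency with $1,000,000 in revenues is governed by the 15 percent rule.
print(minimum_bond(1_000_000))  # 150000.0
```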
The BRAC 2005 Commission Report contains 198 recommendations approved by the BRAC Commission for closing or realigning DOD installations. The text of each recommendation contains several sections with important contextual information: cost and savings information, the Secretary of Defense's recommendation, and the Secretary of Defense's justification. By law, DOD must implement the actions recommended by the Commission unless the President terminates the process or Congress enacts a resolution of disapproval. BRAC 2005 differed from prior rounds in three significant ways—the circumstances under which it took place, its scale, and its scope. Unlike prior BRAC rounds, which were implemented during times of declining defense budgets and in which the focus was on eliminating excess capacity and realizing cost savings, BRAC 2005 was conducted in a global security environment characterized by increasing defense budgets and increasing military end strengths after the events of September 11, 2001, and was carried out concurrently with overseas contingency operations in Afghanistan and Iraq. At the same time, DOD was engaged in an initiative to relocate thousands of personnel from overseas to the continental United States. The scale of BRAC 2005 was much larger than that of the prior four rounds. BRAC 2005 generated more than twice the number of BRAC actions as all prior BRAC rounds combined. Table 2 compares the number of individual actions embedded in the BRAC 2005 recommendations with the number of individual actions needed to implement the recommendations in the prior rounds and shows that the number of individual BRAC actions was larger in BRAC 2005 (813) than in the four prior BRAC rounds combined (387). The scope of BRAC 2005 was broader than the scope of prior BRAC rounds. 
In addition to the traditional emphasis on eliminating unneeded infrastructure to achieve savings, DOD’s goals for the 2005 BRAC round included transforming the military by correlating base infrastructure to the force structure and enhancing joint capabilities by improving joint utilization to meet current and future threats. As shown in table 2, the 2005 BRAC round had the second lowest number of major closures, the largest number of major realignments, and the largest number of minor closures and realignments. Part of this transformation effort included a focus on providing opportunities to increase jointness, though many of the BRAC recommendations focused on consolidations and reorganizations within the military departments rather than across departments. However, the six recommendations we reviewed, as well as other recommendations, including creating joint bases, focused on jointness across multiple services. DOD implemented the BRAC 2005 recommendations we reviewed by requiring military services to relocate selected training functions; however, we found that, although DOD’s justifications for collocating each of the six training functions that we reviewed mentioned jointness or inter-service training as a potential benefit, only two of the six training functions took advantage of the opportunity provided by BRAC to consolidate training to increase jointness. Specifically, we found that DOD implemented all six recommendations by relocating the selected training functions—as recommended by the BRAC Commission—but that opportunities for joint training were realized in only two locations. Figure 1 shows the relocations associated with each recommendation. 
Based on our meetings with officials, we found that those implementing two of the six training functions created by those recommendations—the Joint Center of Excellence for Culinary Training and the 7th Special Forces Group—had found ways to take advantage of being located together to consolidate training and train jointly. For example, officials at the Joint Center of Excellence for Culinary Training stated that while the Air Force conducts its culinary training separately, the Army, Navy, and Marine Corps have successfully consolidated two of the three phases of their training and use a joint curriculum to train students. These officials stated that they were successful at consolidating the culinary training curricula for the Army, Navy, and Marine Corps because the leadership involved with implementing this recommendation was supportive of finding a way to train jointly even if that meant changing their curricula. Additionally, Army and Air Force Special Operations Forces officials stated that the relocation of the 7th Special Forces Group to Eglin Air Force Base allowed for increased joint training operations with the Air Force Special Operations Forces located at Hurlburt Field, near Eglin Air Force Base. These officials stated that they were successful at consolidating training and increasing jointness because they were already conducting joint training prior to the BRAC 2005 round and that, since their move, being in closer proximity has made it even easier to train jointly. The implementation of the remaining four BRAC recommendations that we reviewed relocated training functions—moved separate functions to one location—but did not consolidate them. 
According to officials at the locations that did not consolidate training, they do not regularly coordinate or share information on their training goals and curricula, despite the fact that part of the Secretary of Defense’s justification for the moves in the BRAC 2005 process was that they would bring a “train as we fight: jointly” perspective to the learning process or would otherwise allow for joint training. Service officials told us that after these recommendations were proposed by DOD and approved by the BRAC Commission, they compared each of their original curricula but did not identify many areas of overlap. Training function officials stated that they had received minimal guidance related to consolidating training. Therefore, we found, they did not adjust curricula to take advantage of their proximity by consolidating training—and possibly space—to increase jointness. Training function officials also stated that their four training functions have very different missions, making consolidation of their training more difficult. For example, while both the Navy and the Air Force train their navigators at Pensacola Naval Air Station, they train them to fly in different scenarios (e.g., over land or over sea) and in different airplanes. Although the services may have differences in their training, the 2005 BRAC Commission Report noted that the Secretary of Defense had described the 2005 BRAC round as an opportunity to promote jointness. The BRAC Commission Report stated that while the 2005 BRAC recommendations would “not move the ball across the jointness goal line, Commission decisions would help move the ball down the field” toward more jointness. Table 3 summarizes the status of each of the six BRAC recommendations that we reviewed. We found that four of the six training functions in our review missed the opportunity to consolidate training to increase jointness, because DOD provided minimal guidance to direct those implementing the recommendations. 
Service officials stated that to direct the implementation of the six recommendations we reviewed, DOD provided them with the language from the BRAC Commission report as well as guidance for developing business plans. Using the guidance provided, each of the military departments’ headquarters developed a business plan. This guidance focused on movement of personnel or construction. In our previous work on consolidation of physical infrastructure and management functions, we identified key practices, including developing an implementation plan for the consolidation. Such a plan should include essential change management practices such as active, engaged leadership of executives at the highest possible levels; a dedicated implementation team that can be held accountable for change; and a strategy for measuring progress toward the established goals of the consolidation. None of the guidance given to the military departments provided this type of direction. For example, the language from the BRAC Commission report for each recommendation we selected for review is generally less than one page long and contains high-level summary information on costs, the action being recommended, DOD’s justification for the recommendation, community concerns, and the BRAC Commission’s findings and recommendations. The business plans developed by the military departments included the text of the BRAC 2005 recommendation, a description of costs and savings for each moving organization, a list of organizations moving, a timetable for the movement of organizations, details on any military construction, and environmental information. According to a September 2005 memorandum issued by the Office of the Secretary of Defense (OSD) and related to planning for BRAC 2005 implementation, the business plans were to serve as a foundation for the complex program management necessary to ensure that the recommendations were implemented efficiently and effectively. 
Additionally, the memorandum states that the implementation challenges presented by transformational recommendations—particularly recommendations to establish joint operations—underscore the utility and necessity of the plans. However, officials from the Basing Directorate under the Assistant Secretary of Defense for Energy, Installations, and Environment, the group that oversaw the implementation of BRAC 2005, stated that while the business plans do not include the BRAC 2005 language containing DOD’s justification related to consolidating training to increase jointness, it is the business manager’s responsibility to implement the recommendation, taking into account the intent of the recommendation as described in the justification language. During our review, however, we found that officials responsible for certifying that these six recommendations had been implemented were not required by OSD to certify whether or not they had taken advantage of the opportunity to increase jointness. Rather, the business plan managers were focused on the completion of the construction of buildings and the movement of personnel. Further, officials at the four training functions that did not consolidate training told us that although they had initially compared each service’s curricula to identify common training, they felt that there was not enough overlap in the training for it to be consolidated. They also stated that they had not received direction from OSD or the military services on how to consolidate curricula in order to foster jointness in the event that course curricula had few similarities, prepared personnel to perform different missions, or used different equipment. 
Like the BRAC 2005 recommendations that directed the relocation of several training functions in order to promote jointness or consolidate similar training, another BRAC 2005 recommendation directed the consolidation of 26 service-specific stand-alone installations into 12 joint bases to take advantage of opportunities for efficiencies and reduce duplication of similar support services. In order to implement this joint basing recommendation, the Office of the Secretary of Defense issued guidance in January 2008 designed to establish a comprehensive framework to consolidate installation-support functions while meeting mission requirements. OSD also created an oversight structure for handling disputes and established a set of common standards for the installation support to be provided by each joint base. Furthermore, DOD issued a directive on military training that gives the Under Secretary of Defense for Personnel and Readiness the responsibility to oversee and provide policy for individual and functional training programs for military personnel and the collective training programs of military units and staffs. If DOD and the services believe that the training functions in our review can still capitalize on the opportunity to promote jointness provided by the BRAC 2005 recommendations, additional guidance will be an important first step toward being able to take advantage of this opportunity. Officials from the office of the Under Secretary of Defense for Personnel and Readiness agreed that additional guidance would potentially be helpful in providing opportunities to consolidate training to increase jointness. Further, in the event of a future BRAC round, such guidance could provide a useful framework for taking advantage of the opportunities provided by similar recommendations focused on developing joint training capabilities. DOD cannot determine if implementing the 2005 BRAC joint training recommendations that we reviewed has resulted in savings in operating costs. 
In addition, implementation costs reported to DOD by the training functions’ business plan managers likely did not include all costs funded from outside the BRAC account—we found at least $110 million in costs that likely should have been included based on DOD guidance requiring all BRAC-related costs to be reported, even those from outside the BRAC account. As a result, DOD may have incomplete or inaccurate cost information when trying to determine annual cost savings or total implementation costs of these BRAC recommendations. Although we reported in 2012 that DOD had projected that four of the recommendations in our review would result in annual savings in operating costs, we found that DOD could not determine whether implementing the 2005 BRAC joint training recommendations that we reviewed resulted in savings in operating costs. For two of the training functions in our review, DOD was able to provide complete baseline cost data; however, officials for these training functions could not determine whether cost fluctuations were due to the BRAC moves. For three of the training functions in our review, DOD was unable to provide complete baseline operating costs from before it implemented the BRAC recommendations, but officials representing these training functions indicated that implementing the recommendations may have increased some costs. The Joint Strike Fighter training program established by recommendation #125 was a new program and therefore there were no operating cost data prior to BRAC implementation. In our prior work, we have identified the importance of developing baseline and trend data. By developing baseline operating costs, agencies can better evaluate whether they are achieving their cost savings targets. 
In addition, in our 1997 report on lessons learned from the four prior BRAC rounds, we found that initial cost and savings estimates for prior BRAC rounds were not based on reliable baseline data, because they were not of budget quality, were not consistently developed, and were poorly documented. As we also noted in our 1997 report, sound estimates of savings are important, because DOD may rely on savings from BRAC for other purposes. In 2014 we found that DOD was unable to determine whether the consolidation of training at the Medical Education and Training Campus resulted in cost savings, because it had not developed baseline cost information as part of its metrics to assess success. We recommended that DOD develop baseline cost estimates as part of its metrics to assess cost savings for future consolidation efforts within the Medical Education and Training Campus, and DOD concurred with this recommendation. To date, DOD has not taken any actions to implement this recommendation, because, according to DOD officials, they cannot take action on these recommendations until another BRAC round is authorized. Two of the training functions in our review—Undergraduate Navigator Training and Ft. Bragg, North Carolina (7th Special Forces Group move to Eglin Air Force Base)—were able to provide complete baseline cost data. However, for these two training functions, officials could not determine whether subsequent cost fluctuations were due to the BRAC moves, non-BRAC events, or some combination. For example, the budget officials from the Air Force’s Air Education and Training Command were able to provide us with detailed operating cost data for their undergraduate navigator training, going back to 1996. However, even with these detailed cost data, the budget officials we met with stated that they could not account for all of the different events that had resulted in cost fluctuations during that time. 
Air Force budget officials further stated that multiple events such as sequestration, maintenance issues, and changes in how certain expenses are funded that occurred while BRAC was being implemented made it extremely difficult to determine whether any savings in the program’s operating costs were due to the implementation of the BRAC recommendation or to these other factors. For the remaining three training functions—culinary training, transportation management training, and religious training—the programs could not provide complete operating cost information from prior to the move. For example, according to Army budget personnel, the Army culinary, transportation management, and chaplain training programs did not have data for various reasons, including a change in accounting systems, and because they are not required to keep data that far back. In addition, according to Air Force officials, because the Air Force culinary program is part of a larger multidisciplinary training program that includes subjects such as fitness and mortuary services, it is not possible to isolate the costs for the culinary portion of the training. While these programs had either no baseline operating cost data or no detailed cost baselines, in some instances officials were able to provide examples of where they believed operating costs had increased as a result of the respective BRAC moves. For example: Air Force officials estimated that they spend $300,000 more annually to operate the department’s Chaplain Corps College at Fort Jackson, South Carolina, than they did to operate the one at Maxwell Air Force Base, Alabama. Navy officials provided operating cost data for their chaplain training program showing that they have spent an average of approximately $182,000 more per year since relocating to Ft. Jackson. Officials with both services cited increased travel costs as the primary driver of these increases, because Ft. 
Jackson does not have room for the students to stay on base. Therefore, according to officials, students from both services must stay at hotels in Columbia, South Carolina, and officials have to provide transportation to and from the base. A Navy culinary official estimated that sending students to Ft. Lee costs the service an additional $200,000 per year for airfare compared to what it cost when all training was at Naval Station Great Lakes, Illinois. This official also estimated that this travel takes about three days per student, which results in about $400,000 in lost work time per year. The official also noted that there are other costs related to getting the students to and from airports. Additionally, the Navy culinary official stated that the training program has incurred additional administrative costs because the Army and Navy student tracking systems are not linked. Specifically, because the systems are not configured to exchange data, all Navy student data must be manually entered twice, once in each system. The official said that this equates to thousands of records per year and could take about $45,000 in labor costs to accomplish. Among the training functions we reviewed, Navy Joint Strike Fighter officials were able to identify a cost avoidance as a result of implementing the BRAC recommendation. As part of implementing this recommendation, the Air Force built a $59 million Academic Training Center at Eglin Air Force Base to serve the Air Force, Navy, and Marine Corps. The Navy Joint Strike Fighter officials stated that if they had not colocated their program with the Air Force, the Navy would have had to pay to build and operate its own Academic Training Center. It is likely no longer possible to determine baseline costs for implementing the recommendations in our review, and thus to determine the extent to which their implementation resulted in cost savings. 
Also, subsequent changes to the programs make it difficult to determine the effect of implementing the BRAC recommendations. Although it can sometimes be difficult to attribute costs and savings to a specific event, such as a BRAC change, DOD will not be able to estimate whether it has achieved annual savings in operating costs if it does not collect complete baseline cost data against which to measure progress. In 2012, we reported on DOD’s estimates of its final implementation costs for the BRAC 2005 recommendations; however, for two of the six recommendations in this review—the Joint Strike Fighter Initial Training Site and the 7th Special Forces Group move to Eglin Air Force Base—we found that business plan managers did not report to DOD at least $110 million in implementation costs that were funded from outside the BRAC account and likely should have been included. Thus, DOD’s previously reported total cost of $35.1 billion to implement BRAC 2005 is likely somewhat understated. The statute authorizing BRAC 2005 established a special treasury account for purposes related to implementing the BRAC 2005 recommendations. During the lifetime of this account, DOD could also fund certain BRAC-related costs from outside the BRAC 2005 account to complete actions needed to implement the recommendations. For example, the services could use money obtained through their military construction process to renovate existing space or build new facilities. In 2010, we recommended that DOD take steps to capture and appropriately report to Congress any BRAC-related implementation costs that were funded from outside the BRAC account. DOD concurred with the recommendation, and in August 2010 the Deputy Under Secretary of Defense (Installations and Environment) issued a memo requiring BRAC business plan managers to submit all BRAC-related expenditures, including those funded from both inside and outside of the BRAC account.
We reviewed the business plans for all six recommendations in our review, as well as data reported by the services to DOD, and found that none of them contained projects funded from outside of the BRAC account. Army and Air Force officials we spoke with stated that there were general criteria for what could be included as a BRAC cost in the BRAC 2005 round. According to former business plan managers for some of the training functions in our review, and Army and Air Force service headquarters officials, some of these criteria included that the project be related to the physical move; that the cost be for moves within the continental United States; that the project not be related to addressing a deficiency that existed at the time of the BRAC recommendation; and that the project be needed to comply with the original BRAC recommendation and not be used to accommodate personnel or mission expansion that happened after the BRAC decision. However, officials from neither the services nor OSD could provide us with any written guidance to this effect. Air Force officials also stated that language in the Form 1391—the DOD document used to submit requirements and justification to Congress for funding for military construction projects—would indicate whether a project was BRAC-related. For three of the recommendations we reviewed, the military construction implementation costs reported to us were approximately the same as those reported to DOD in 2012. Business plan managers for recommendation 124—Joint Center for Excellence for Religious Training and Education—reported military construction implementation costs of approximately $11.6 million to DOD in 2012 and approximately $11.8 million to us in the course of this review.
For recommendations 122—Joint Center for Consolidated Transportation Management Training—and 123—Joint Center of Excellence for Culinary Training—business plan managers reported combined military construction implementation costs of approximately $87.6 million to DOD in 2012, and approximately $89.4 million to us in the course of this review. For a fourth recommendation—Undergraduate Pilot and Navigator Training, recommendation #128—we could not determine the total military construction implementation costs reported to DOD in 2012, because this was a bundled recommendation that contained projects on multiple bases, not just at Pensacola Naval Air Station, Florida. However, the final Pensacola military construction costs reported to us—$90.1 million—were close to the preliminary military construction estimates of $89.5 million for those projects. In the case of the Joint Strike Fighter and the Ft. Bragg, North Carolina, recommendations, some projects that appear to be related to the BRAC moves were funded with non-BRAC money and were not included in what was reported to DOD, as required by DOD’s August 2010 memo. Examples of projects that were not reported as BRAC implementation costs include the following:

Joint Strike Fighter (F-35) Parking Apron. In the official Form 1391 proposing this project and the need for the parking apron, the title of the project is “BRAC F-35 A/C Parking Apron.” Further, in the “Requirement” section of the document, the justification provided by the Air Force states that the build-up for Joint Strike Fighter operations includes relocating joint military instructor pilots and operations support personnel from Luke Air Force Base; Sheppard Air Force Base; Marine Corps Air Station Miramar, California; Naval Air Station Oceana, Virginia; and the Naval Air Station at Pensacola, the moves required by this BRAC recommendation.
Air Force headquarters officials stated that they did not include this as a BRAC implementation cost because they and Navy headquarters officials agreed this cost was not related to the move. However, Air Force officials at Eglin Air Force Base as well as the Navy business plan manager indicated that the parking apron was a necessary implementation cost. Furthermore, the cost for every other Air Force project that cited “BRAC” in the Form 1391 project title was counted as an implementation cost. By including this reference to BRAC in the Form 1391, this project was presented to Congress as a BRAC-related cost. The preliminary estimate for this project was $29 million.

Other Joint Strike Fighter Support Projects. This includes four projects related to the establishment of the Joint Strike Fighter training program. Three of these projects have language in the Requirement section of their Form 1391 identical to that of the BRAC Joint Strike Fighter (F-35) A/C Parking Apron, stating that the build-up for Joint Strike Fighter operations includes relocating joint military instructor pilots and operations support personnel from Luke Air Force Base; Sheppard Air Force Base; Marine Corps Air Station Miramar, California; Naval Air Station Oceana, Virginia; and the Naval Air Station at Pensacola. The Form 1391 for the fourth project cites the impending overcrowding resulting from establishing the Joint Strike Fighter training program as the justification for the project. Air Force headquarters officials stated that, for these projects, Air Education and Training Command or Air Force Materiel Command did not submit the project to the Air Force BRAC office to determine whether it was a BRAC requirement. However, given the requirement language that cites the BRAC moves and the impending overcrowding, it is not clear to us that these were not BRAC implementation costs.
Furthermore, Air Force documentation and headquarters officials acknowledged that one of these projects—the first phase of the Hydrant Refueling System Station—was a companion project to the BRAC F-35 A/C Parking Apron. The combined final cost for these projects was approximately $20.6 million.

Housing. Neither the Joint Strike Fighter nor the Ft. Bragg business plan managers included the housing they built for Joint Strike Fighter pilot trainees or Special Forces Group soldiers as BRAC implementation costs. Air Force headquarters officials stated that there was a disagreement between the Air Force and the Navy about who should pay for the Joint Strike Fighter housing and how it should be paid for. In order to complete the housing prior to the arrival of students, the Air Force agreed to pay for the first housing unit and the Navy agreed to pay for the second unit. Regarding the barracks for the 7th Special Forces Group, at least one of these housing units was originally scheduled to be built with BRAC funding. However, Army headquarters and Special Operations Command officials stated that, due to construction delays, the Army reconsidered which funding source to use for some projects. As a result, all of the housing units ended up being built with regular military construction money as part of a larger project, and no part of that project was counted as a BRAC implementation cost. The decisions not to count the Joint Strike Fighter housing unit and the 7th Special Forces Group housing units as BRAC implementation costs are inconsistent with the fact that housing for the culinary and transportation students at Ft. Lee, as well as the housing for navigator students at Pensacola Naval Air Station, was counted as BRAC implementation costs. The Joint Strike Fighter housing unit cost $17.6 million, and the cost of the three Army housing units ranged from $6.5 million to $6.7 million each.

7th Special Forces Group Training Ranges.
The Army built several training ranges on Eglin Air Force Base to support the move of the 7th Special Forces Group from Ft. Bragg to Eglin Air Force Base. Army headquarters officials told us that the cost of the ranges was initially to be considered part of the BRAC implementation cost, and documentation shows that the Army planned to use BRAC funds to construct these ranges. However, due to the previously mentioned construction delays and changes to the funding source of projects, the ranges ended up being funded with Army military construction funds. When implementation costs were reported to DOD in the final business plan, the business plan managers did not indicate that there were any implementation costs funded from outside the BRAC account. The reasons for not including the ranges as a BRAC implementation cost are unclear. Both Army and Air Force headquarters officials stated that this may have been because the Air Force already had ranges at Eglin Air Force Base that the Army could have used. Air Force headquarters officials added that it may have been because the 7th Special Forces Group did not have these ranges at Ft. Bragg. However, officials with the Army Special Operations Command and the 7th Special Forces Group stated that the existing Air Force ranges at Eglin Air Force Base were insufficient for their training needs, and that they had had all of the ranges in question when they were at Ft. Bragg. Not including the ranges as a BRAC implementation cost was also inconsistent with the other BRAC recommendations we reviewed, in which training facilities were counted as BRAC implementation costs. Construction of the ranges at Eglin Air Force Base cost a combined $39.3 million.
An official with the Basing Directorate under the Assistant Secretary of Defense for Energy, Installations, and Environment—the group that oversaw the implementation of BRAC 2005—stated that the business plan managers were expected to include costs that were funded from outside the BRAC account in their final business plans and that the directorate, along with the OSD General Counsel, reviewed and provided comments on the cost submissions. However, the Basing Directorate official further stated that it was up to the military departments to ensure that all BRAC implementation costs were accounted for, and that the military departments had the flexibility to determine which costs would be associated with the BRAC recommendation and which would be attributed to other actions. We found that this flexibility led to inconsistencies in what kinds of projects were counted as BRAC implementation costs. By clarifying in guidance what is to be included as a BRAC implementation cost, DOD can help ensure that it has an accurate accounting of the final costs for any future BRAC implementation and that DOD and Congress are able to determine how much money is actually spent on any future BRAC rounds. BRAC 2005 provided DOD with the opportunity to consolidate infrastructure and also to become more efficient and effective in its operations. To that end, the recommendations for consolidating and developing joint training programs provided DOD with new opportunities for furthering transformation and promoting jointness to meet the new challenges DOD faces. However, only two of the six recommendations focused on training have led to joint training rather than colocation, despite the opportunity to jointly train the force as it fights. All six recommendations were implemented as approved, but without additional guidance, DOD cannot ensure that it takes advantage of the opportunities provided by BRAC.
If Congress approves a future BRAC round, DOD will have another opportunity to promote jointness should the department choose to propose such recommendations to a future BRAC Commission. However, without specific guidance that the military services can use in implementing jointness-focused recommendations—for instance, on responsibility for monitoring implementation and measuring progress—the department may again face challenges in moving beyond colocation of functions. In implementing the training-focused jointness recommendations we examined, DOD did not collect baseline cost data for all of the recommendations as part of its implementation process, and without these data it could not determine the actual savings, if any, from implementing the recommendations. Unless DOD develops baseline cost data for the recommendations in any future BRAC rounds, it will be unable to determine the budgetary effect of its actions. Given that we found some implementation costs were paid for from accounts other than the BRAC 2005-specific account, if DOD does not clarify in guidance the types of costs that are to be included as BRAC implementation costs, decision makers will lack reasonable assurance that the department’s cost data for any future BRAC round recommendations are fully reliable. To make further progress toward taking full advantage of the opportunity of consolidating training in order to increase jointness following the implementation of the BRAC 2005 recommendations, for the training functions that did not consolidate training beyond colocation, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness and the Secretaries of the military departments to provide guidance to the program managers on consolidating training, if DOD decides that taking advantage of an opportunity to increase jointness is still appropriate.
To improve the ability of the military departments to take advantage of any opportunities provided by recommendations to develop joint training capabilities in a future BRAC round, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness—in consultation with the Assistant Secretary of Defense for Energy, Installations, and Environment—to develop and provide specific guidance for the military departments to use in implementing recommendations designed to consolidate training to increase jointness. To improve DOD’s ability to estimate savings, if any, from future consolidation of training—including any consolidation resulting from a future BRAC round—we recommend that the Secretary of Defense direct the military departments to develop baseline cost data. To improve the accounting of any future BRAC rounds, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Energy, Installations, and Environment to issue guidance clarifying what costs should be included in final BRAC accounting. We provided a draft of this report to DOD for review and comment. In response, DOD non-concurred with three recommendations and partially concurred with one recommendation. In its letter, DOD stated that our report misunderstands DOD’s approach to joint and common training and does not completely recognize the unique circumstances of BRAC recommendation development and implementation. We recognize that there is a difference between joint and common training; however, these BRAC recommendations, which DOD proposed and the BRAC Commission approved, emphasized jointness, not just common training. 
In fact, for several of these recommendations, the Secretary of Defense’s justification included “enhancing jointness” as part of the rationale, or proposed that the recommendation would allow DOD to “train as we fight; jointly.” DOD further stated that our report undervalues the importance of providing DOD components flexibility to determine BRAC costs and has a misplaced emphasis on estimating savings for transformational recommendations. We recognize the importance of flexibility among DOD components for most military decisions; however, as our report notes, flexibility in reporting BRAC costs led to inconsistencies in the reporting of these costs across the department. In addition, cost and savings estimates are a part of the BRAC process. Both DOD and the Commission develop such estimates for each recommendation and did so for these six recommendations. Moreover, DOD emphasized savings in some of the six recommendations in its justification to the Commission. Specifically, DOD’s justification for the Joint Center of Excellence for Culinary Training reads in part, “It is the military judgement of the JCSG that consolidation at the location with the largest amount of culinary training produces the greatest overall military value to the Department through increased training efficiency at lower cost.” Similarly, for the Joint Center of Excellence for Religious Training and Education, DOD’s justification to the Commission reads in part, “Consolidation at Fort Jackson, SC creates a synergistic benefit by having each Service’s officer and enlisted programs conducted in close proximity to operational forces. Realized savings result from consolidation and alignment of similar officer and enlisted educational activities and the merging of common support functions.” Saving money in implementing any federal program is an important goal, and we continue to believe that DOD’s goals should include saving money where possible.
DOD did not concur with our first recommendation, to provide guidance to the program managers of the training functions created under BRAC 2005 on consolidating training. In its response, DOD stated that our report misunderstands the definition of joint training and that DOD and the services are constantly seeking ways to improve training opportunities by either consolidating or colocating individual skills training. DOD further stated that the Interservice Training Review Organization would be the proper entity to address the issues identified in our report. In our report, we noted that the training functions had previously been reviewed and that these reviews did not find much overlap in training among the services. Several of these reviews were conducted by the Interservice Training Review Organization. Further, one of the purposes of several of these transformational recommendations was to create opportunities to enhance jointness, as stated by DOD in proposing them to the Commission. Enhancing jointness would mean going a step further than colocating the services and aspiring to consolidate common training. DOD also stated in its comments that the Interservice Training Review Organization was involved in implementing the Chaplain recommendation. Still, we found that, even with this involvement, DOD did not take advantage of opportunities to consolidate training to increase jointness in the Chaplain recommendation. We also noted that, in the absence of guidance from DOD, four of the training functions in our review did not make any further effort to consolidate training. We continue to believe that if DOD believes the training functions in this review would benefit from more consolidation of training, it should issue guidance. DOD did not concur with our second recommendation to develop and provide specific guidance for the military departments to use in implementing recommendations designed to consolidate training to increase jointness in the event of future BRAC rounds.
DOD stated that while consultation with the Assistant Secretary of Defense for Energy, Installations, and Environment would be required within a future BRAC round, the Under Secretary of Defense for Personnel and Readiness already has the authority to develop this guidance. We recognize that the Under Secretary has the authority but, as our report points out, has not exercised it in this instance; guidance is needed to ensure that DOD takes advantage of the opportunities provided by BRAC. DOD did not concur with our third recommendation to develop baseline cost data in the event of any future consolidation of training. DOD stated that data calls for BRAC must ensure that the questions asked do not give the personnel answering the questions insight into the various scenarios being considered and that all installations must be treated equally. Moreover, DOD stated that this is critical to maintaining the fairness and objectivity of the analysis by preventing the supplied data from being influenced by gaining and losing locations. During BRAC 2005, DOD estimated that it had collected over 25 million pieces of data from hundreds of defense installations and presumably was able to do so in a way that maintained fairness and objectivity without inappropriately disclosing to the personnel providing the information something to which they should not be privy. DOD further stated that collecting baseline cost data for training activities in advance of an authorized BRAC process is not effective because the department will not be able to use previously supplied uncertified data. Nothing in our recommendation requires DOD to collect data prior to the implementation of a future, authorized BRAC round. Finally, DOD stated that it is not clear that a future BRAC round would include joint training. However, baseline cost data are needed for measuring either increased costs or savings for changes to any program, not just joint training.
Thus, we continue to believe that without sufficient baseline cost information, DOD will be unable to determine the budgetary effect of its actions, including demonstrating cost savings. DOD partially concurred with our fourth recommendation, to issue clarifying guidance regarding what costs should be included in final BRAC accounting. DOD stated that micromanaging every cost decision across such a vast program would have been unreasonable and that ultimately, whether or not to fund various requirements from the BRAC account was a judgment call made by military headquarters officials. However, DOD agreed that it would be reasonable to consider placing additional emphasis on accounting for BRAC costs. We agree that managing a program as large as BRAC is difficult and that guidance on what costs should be included in the final BRAC accounting would help DOD to more accurately report the costs of implementing BRAC. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, and the Assistant Secretary of Defense for Energy, Installations, and Environment. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The following table lists selected prior GAO reports on the Base Realignment and Closure (BRAC) 2005 round, our recommendations in those reports, the Department of Defense’s (DOD) response, and DOD’s actions to date in response to the recommendations. The 24 reports listed contained 69 recommendations. DOD concurred or partially concurred with 57 of these recommendations and has implemented 37 of them. 
According to DOD officials, DOD will be unable to take actions on many of these recommendations until another BRAC round is authorized. GAO recommendation GAO-15-274—Military Base Realignments and Closures: Process for Reusing Property for Homeless Assistance Needs Improvements (Mar. 16, 2015). Update the Base Realignment and Closure (BRAC) homeless assistance regulations to require that conveyance statuses be tracked. These regulatory updates could include requiring the Department of Defense (DOD) to track and share disposal actions with the Department of Housing and Urban Development (HUD) and requiring HUD to track the status following disposal, such as type of assistance received by providers and potential withdrawals by providers. Partial concur. DOD stated that while it concurs with the value of tracking homeless assistance and other conveyances, it can do so without any change to existing regulations. DOD did not identify any actions it will take on how to track the homeless assistance conveyances in the absence of a regulatory update and did not indicate that it would work with HUD to update the regulations. Moreover, DOD did not explain how program staff would know to track the conveyance status in the absence of guidance requiring them to do so. Partial concur. DOD stated that while it already provides generic information about the property, the Local Redevelopment Authorities (LRA) and interested homeless assistance providers can undertake facility assessments following the tours. However, DOD did not provide additional detail or explanation about how it would provide information about the condition of the property or access to it. Pending. Awaiting authorization of a future BRAC round. Pending. Awaiting authorization of a future BRAC round. Specific guidance that clearly identifies the information that should be provided to homeless assistance providers during tours of on-base property, such as the condition of the property. DOD actions Pending. 
Awaiting authorization of a future BRAC round. Original DOD response Non-concur. DOD stated that the existing regulatory guidance is adequate for providers’ expressions of interest given that these expressions evolve as the redevelopment planning effort proceeds and they learn more about the property. Partial concur. DOD did not commit to taking any actions to provide this information and instead noted that any action should ensure that a legally binding agreement does not bind DOD to disposal actions it is unable to carry out. DOD further noted that the purpose of the legally binding agreement is to provide remedies and recourse for the LRA and provider in carrying out an accommodation following property disposal. Non-concur. DOD stated that providers may be considered only through specific expressions of interest in surplus BRAC property, and these suggested alternatives may be considered only within the context of what is legally permissible given the specific circumstances at each installation. Pending. Awaiting authorization of a future BRAC round. Specific information on legal alternatives to providing on-base property, including acceptable alternative options such as financial assistance or off-base property in lieu of on-base property, information about rules of sale for on-base property conveyed to homeless assistance providers, and under what circumstances it is permissible to sell property for affordable housing alongside the no-cost homeless assistance conveyance. GAO-14-577—DOD Joint Bases: Implementation Challenges Demonstrate Need to Reevaluate the Program (Sept. 19, 2014). Evaluate the 44 support functions identified in DOD’s guidance for joint base implementation to determine which functions are still suitable for consolidation. Subsequently, identify and make any changes that are appropriate to address limitations reported by the joint bases in consolidating installation-support functions, such as limitations related to workforces and geography. 
Pending. Awaiting authorization of a future BRAC round. Concur. DOD stated that it had already removed some installation-support functions from joint basing because they were not compelled for inclusion as part of the BRAC recommendation and otherwise did not offer opportunities for savings or consolidation. It further stated that in April 2014, the Senior Joint Base Working Group principals tasked their staffs to identify which installation support functions and performance standards were not providing value to the joint bases’ various military missions and to explore whether these functions and standards should continue to be included in joint basing. DOD did not provide time frames for completing such actions. Pending. In July 2015, an OSD official stated that DOD will not revise the 12 memorandums of agreement for the existing joint bases to show that some of the functions should not be consolidated but are using an abbreviated list of functions—excluding the functions we identified as poor candidates for consolidation—in evaluating the viability of new joint bases. GAO recommendation Take policy actions, as appropriate—such as issuing additional guidance—to address any challenges resulting in inefficiencies and inequities regarding efforts to consolidate installation-support functions—including, at a minimum, those identified in this report. Original DOD response Partial concur. DOD stated that it is mindful of challenges in implementing and operating joint bases and agreed that policy actions can address some challenges. However, DOD stated that it does not agree that these challenges require Office of the Secretary of Defense (OSD) level policies, citing instead the existing responsibilities and authorities already assigned to the military departments and the Joint Management Oversight Structure. 
Evaluate the purpose of the program and determine whether DOD’s current goals of achieving greater efficiencies and generating cost savings for the joint basing program, as stated in the 2005 BRAC Commission recommendation, are still appropriate or whether goals should be revised, and communicate these goals to the military services and joint bases and then adjust program activities accordingly. Subsequent to the evaluation above, provide direction to joint bases on their requirements for meeting the joint base program’s goals. DOD’s leadership should work with the military services to determine what reporting requirements and milestones should be put in place to increase support and commitment for the program’s goals. Non-concur. DOD stated that the goal of joint basing remains to increase the efficiency of delivering installation support at the 12 joint bases as described in the BRAC Commission’s recommendation number 146. DOD actions Pending. In July 2015, an OSD official told us that DOD is taking action on our recommendation to address any challenges resulting in inefficiencies and inequities regarding efforts to consolidate installation-support functions. DOD has drafted a joint basing handbook, which has been signed by the Air Force and the Navy, to address inconsistent service-level guidance. In addition, the Senior Installation Management Group now meets quarterly to handle conflicts and disputes between service policies and to address any challenges resulting in inefficiencies and inequities regarding efforts to consolidate installation- support functions. None planned. As of November 2015, DOD stated that no action is expected. Non-concur. DOD stated that the joint bases have been fully operational since October 2010 and have proven that they can deliver measurable and tangible savings across the installation-support portfolio. Therefore, DOD stated that it does not believe OSD should establish program milestones. None planned. 
As of November 2015, DOD stated that no action is expected. GAO recommendation GAO-14-630—Defense Health Care Reform: Actions Needed to Help Realize Potential Cost Savings from Medical Education and Training (July 31, 2014). The Assistant Secretary of Defense for Health Affairs should direct the Director of the Defense Health Agency (DHA) to conduct a fully developed business case analysis for the Education and Training Directorate’s reform effort. In this analysis the Director should (1) identify the cost-related problem that it seeks to address by establishing the Education and Training Directorate, (2) explain how the processes it has identified will address the cost-related problem, and (3) conduct and document an analysis of benefits, costs, and risks. Concur. DOD stated that Medical Education and Training is the only shared service that has never had any type of oversight by the Office of the Assistant Secretary of Defense for Health Affairs or the pre-DHA TRICARE Management Activity. Develop baseline cost information as part of DHA’s metrics to assess achievement of cost savings. Concur. DOD stated that Medical Education and Training is the only shared service that has never had any type of oversight by the Office of the Assistant Secretary of Defense for Health Affairs or the pre-DHA TRICARE Management Activity. Pending. In a September 2014 letter, the Assistant Secretary of Defense for Health Affairs stated that baseline costing would be a key component of the Medical Education and Training Directorate’s strategic plan and would be presented in the form of a “deliverable” in moving forward to the Directorate’s final operating capability. The letter also noted that an inventory of all education and training products and services within the Military Health System will be undertaken shortly, and that this had never been accomplished before.
However, the letter did not specifically address the development of metrics to assess achievement of any cost savings as we recommended. As of September 2015, no further actions have been taken. Pending. According to a September 2014 letter from the Assistant Secretary of Defense for Health Affairs, the completion of a business case analysis will be a key component of the Directorate’s strategic plan and will be presented in the form of a “deliverable” to achieve its final operating capability. The letter did not specifically identify the cost-related problem that DOD seeks to address by establishing the Directorate, nor did it specifically state whether this would be addressed in its business case analysis under development as we recommended. As of September 2015, no further actions have been taken. GAO recommendation GAO-14-538—Defense Infrastructure: DOD Needs to Improve Its Efforts to Identify Unutilized and Underutilized Facilities (Sept. 8, 2014). Establish a strategic plan as part of a results-oriented management framework that includes, among other things, long-term goals, strategies to achieve the goals, and use of metrics to gauge progress to manage DOD’s real property and to facilitate DOD’s ability to identify all unutilized or underutilized facilities for potential consolidation opportunities. Concur. DOD stated that a strategy review is currently under way with initial guidance and initiatives. Pending. In response to a requirement under the Office of Management and Budget’s (OMB) Reduce the Footprint policy, DOD officials told us in July 2015 that they had developed a draft DOD Real Property Efficiency Plan that describes DOD’s strategic and tactical approach to managing its real property effectively and efficiently.
Officials stated that this draft plan would also address our recommendation in September 2014 that DOD establish a strategic plan to manage its real property and to facilitate its ability to identify potential consolidation and disposal opportunities. This plan has not been finalized or implemented. As of October 2015, an official from the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment stated that the plan is still under review and has not been provided to OMB. The official did not have an estimate for when the plan will be finalized and implemented. Officials also stated that a recently developed draft guide for calculating facility utilization should complement the draft plan in improving utilization data to better identify excess facilities. GAO-13-436—Defense Infrastructure: Communities Need Additional Guidance and Information to Improve Their Ability to Adjust to DOD Installation Closure or Growth (May 14, 2013). Direct the Secretary of the Army to issue guidance, consistent with DOD guidance, on specific levels of maintenance to be followed in the event of a base closure, based on the probable reuse of the facilities. Concur. DOD stated that the Army agrees to publish property maintenance guidance prior to closing installations in the event of future base closures. Pending. Awaiting authorization of a future BRAC round. In July 2015, DOD stated that the Army has agreed to publish property maintenance guidance prior to closing installations in the event of future base closures. There have been no additional base closures since the date of the report.
GAO recommendation Direct the Secretaries of the Army, the Navy, and the Air Force to consider developing a procedure for collecting service members’ physical addresses while stationed at an installation, annually updating this information, and sharing aggregate information with community representatives relevant for local planning decisions, such as additional population per zip code, consistent with privacy and force protection concerns. Direct the Secretaries of the Army and the Air Force to consider creating or designating a civilian position at the installation level to be the focal point and provide continuity for community interaction for future growth installations and to consider establishing such a position at all installations. Original DOD response Partial concur. DOD stated that it agrees that information pertaining to the physical location of installation personnel helps affected communities plan for housing, schools, transportation and other off-post requirements and that existing policy requires the military departments to share planning information with states and communities. DOD also stated that in the event of future basing decisions affecting local communities, it will work with the military departments to assess and determine the best means to obtain, aggregate, and distribute this information to help ensure that adequate planning information is made available. Partial concur. DOD stated that it agrees with the need for a designated position at the installation level and will ensure that each military department is meeting this need through current practices. DOD also stated that many growth installation officials already serve as “ex-officio members” of the community’s growth management organizations and community officials agree that this has been quite valuable for both the department and affected growth communities. DOD actions None planned. In July 2015, DOD stated that there is no immediate need to undertake these efforts. Pending. 
Awaiting authorization of a future BRAC round. In July 2015, DOD stated that in the event the Department of Defense proceeds with future realignments that could result in a reduced footprint, there are provisions for Base Transition Coordinators to be designated as liaisons with affected communities. In the event these future realignments result in an expanded footprint or personnel growth, the department would consider this recommendation at that time. GAO-13-149—Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds (Mar. 7, 2013). Work with the military services, defense agencies, and other appropriate stakeholders to improve the process for fully identifying recommendation-specific military construction requirements and ensuring that those requirements are entered into the Cost of Base Realignment Actions (COBRA) model and not understated in implementation cost estimates prior to submitting recommendations to the BRAC Commission. Establish a process for ensuring that information technology requirements associated with candidate recommendations that are heavily reliant on such technology have been identified to the extent required to accomplish the associated mission, before recommendations and cost estimates are submitted to the BRAC Commission. Non-concur. DOD stated that the primary advantage of COBRA is to provide real-time comparison of scenarios to aid analysis and decision making, not to develop budget-quality estimates. None planned. As of November 2015, DOD stated that no action is expected. Partial concur. DOD acknowledged that information technology costs should be better estimated but added that a separate process is not necessary and stated that it can improve cost estimating by reevaluating the standard factors used in COBRA and by providing additional guidance as appropriate. Pending. Awaiting authorization of a future BRAC round.
GAO recommendation Ensure that, during the development and comparison of BRAC scenarios, all anticipated BRAC implementation costs— such as relocating personnel and equipment—are considered and included in the COBRA model when comparing alternatives and generating cost estimates. Take steps to ensure that COBRA’s standard factor for information technology is updated and based on technological developments since the most recent COBRA update. Update COBRA guidance to require users to provide a narrative explaining the process, sources, and methods used to develop the data entered into COBRA to develop military personnel position-elimination savings. Identify appropriate measures of effectiveness and develop a plan to demonstrate the extent to which the department achieved the results intended from the implementation of the BRAC round. Original DOD response Non-concur. DOD reiterated that COBRA is not designed to develop budget quality estimates, nor can it reflect future implementation investment decisions made after BRAC recommendations become binding legal obligations for DOD. DOD actions None planned. As of November 2015, DOD stated that no action is expected. Concur. Pending. Awaiting authorization of a future BRAC round. Concur. Pending. Awaiting authorization of a future BRAC round. Non-concur. DOD stated that military value based on force structure and mission needs should continue to be the key driver for BRAC. DOD also stated that its business plan process is the best way to measure effectiveness. Non-concur. DOD stated that goals or overarching capacity targets would subvert the intent of the BRAC statute to develop recommendations based on military value and would preclude examination of a full array of closure and realignment options. None planned. As of November 2015, DOD stated that no action is expected. 
Establish a target for eliminating excess capacity in its initiating guidance to high-level department-wide leadership, consistent with the BRAC selection criteria chosen for a future BRAC round. None planned. As of November 2015, DOD stated that no action is expected. Limit the practice of bundling many potential stand-alone realignments or closures into single recommendations. Non-concur. DOD does not believe that bundling is problematic and stated that the examples we cited had been bundled because they shared a common mission and purpose, and bundling maximized military value. The practice of bundling can limit visibility into the estimated costs and savings for individual closures or realignments that are elements of the bundle and can make the Commission’s review more difficult, although DOD disputed this latter point. The 2005 BRAC Commission’s executive staff told us that bundling made their review more difficult because they needed to deconstruct the bundle to assess whether any changes were necessary. In some cases bundling is warranted, and it is for this reason that we recommended limiting the practice, not prohibiting it. None planned. As of November 2015, DOD stated that no action is expected. GAO recommendation If DOD determines that bundling multiple realignments or closures into one recommendation is appropriate, itemize the costs and savings associated with each major discrete action in its report to the BRAC Commission. Develop a process to ensure that any data-security issues are resolved in time to provide all information to the BRAC Commission in a timely manner by conducting a security review of all BRAC data during DOD’s recommendation development process, to include a review of the aggregation of unclassified data for potential security concerns and possible classification, if necessary. GAO-13-134—DOD Joint Bases: Management Actions Needed to Achieve Greater Efficiencies (Nov. 16, 2012). Original DOD response Partial concur.
DOD stated that where appropriate, the department could highlight cost and savings associated with major actions, and that action would meet the intent of our recommendation. DOD actions Pending. Awaiting authorization of a future BRAC round. Concur. Pending. Awaiting authorization of a future BRAC round. None planned. As of November 2015, DOD stated that no action is expected. paring unnecessary management personnel, consolidating and optimizing contract requirements, establishing a single space management authority to achieve greater utilization of facilities, and reducing the number of base support vehicles and equipment. Non-concur. DOD said that such targets would burden and restrict the authority of local commanders to manage the merger of the formerly stand-alone bases into joint bases while implementing new organizational structures, which would unnecessarily risk negative effects to mission support when operational effectiveness of the bases is paramount. DOD stated that the department should continue its patient approach to obtaining savings and efficiencies at joint bases, because it is working. All of the Air Force-led joint bases reduced civilian positions, and the Navy chose not to fill all of its civilian vacancies. Finally, the creation of the joint bases is equivalent to the mergers of corporations with very different financial systems, management structures, operating procedures, and cultural differences. DOD stated the importance of empowering joint base commanders to design, implement, and adapt cost-efficient and effective approaches to their unique situations while adopting new and cross-cutting business practices, as incubators of innovation. DOD decided to allow for an extended transition period and to defer near-term savings.
GAO recommendation Continue to develop and refine the Cost Performance and Visibility Framework in order to eliminate data reliability problems, facilitate comparisons of joint basing costs with the cost of operating the separate installations prior to implementing joint basing, and identify and isolate the costs and savings resulting from actions and initiatives specifically resulting from joint basing and excluding DOD- or service-wide actions and initiatives. Original DOD response Partial concur. DOD stated that its Cost Performance and Visibility Framework already provides a method to collect quarterly data on performance toward the Common Output Level Standards, annual data on personnel assigned, and funds obligated for each joint base. However, DOD is addressing inconsistencies in the current data captured in the Framework and is improving its data reliability with considerable investment, and it expects to begin assessing joint base efficiencies by the end of fiscal year 2012. DOD stated that it would be able to make several comparisons, such as comparing the current fiscal year financial and performance data with the baseline and previous year’s obligations, and comparing the joint base’s baseline data with the costs of operating the separate installations prior to implementing joint basing. DOD acknowledged that the comparison of the costs of operating separate installations would not identify cost savings resulting solely from joint basing and asserted the impracticality of isolating and distinguishing joint basing cost savings from the savings that result from DOD- or service-wide actions using the data contained in DOD’s Framework. Further, DOD pointed out that it did not believe that accounting systems were designed to track savings but to track expenses and disbursements. Partial concur. DOD stated that a quarterly feedback process on the joint base common standards and an annual review process that incorporates input from the joint bases already exist.
Further, standards may need changing as priorities change and missions evolve, but the current process strikes an appropriate balance between the analytical burden of repeated reviews and the need for clarity and refinement. DOD also stated that it believes that reviewing all the standards simultaneously does not allow for the depth of analysis required to make sound decisions, and suggested that GAO conduct a qualitative assessment of the standards, because the findings appear to be based on an anecdotal assessment. DOD actions Complete. DOD provided guidance to the joint bases that resulted in improved quality of the data obtained for fiscal year 2012. Subsequently, DOD performed an analysis comparing this improved operating cost data with what it had projected would be the costs of operating the separate installations if the joint bases had not been created. This analysis showed that the joint bases were saving money relative to the costs of operating the separate installations. Together these actions met the intent of our recommendation and provided DOD with an improved picture of the cost of operating the joint bases as well as a comparison of the cost of operating the joint bases with the cost of operating the separate installations. Direct the joint bases to compile a list of those common standards in all functional areas needing clarification and the reasons why they need to be clarified, including those standards still being provided or reported on according to service-specific standards rather than the common standard. None planned. As of November 2015, DOD stated that no action is expected. GAO recommendation Amend the OSD joint standards review process to prioritize review and revision of those standards most in need of clarification within this list. Original DOD response Partial concur. See above. DOD actions None planned. As of November 2015, DOD stated that no action is expected. 
Develop a common strategy to expand routine communication between the joint bases, and between the joint bases and OSD, to encourage joint resolution of common challenges and sharing of best practices and lessons learned. Partial concur. DOD stated that it believed there were already mechanisms in place to facilitate routine communication between the joint bases, as well as between OSD and the joint bases, and that it is increasing those opportunities. DOD listed the various opportunities it has for sharing joint basing information, including yearly joint base site visits and an annual management review meeting with the joint base commanders. Develop guidance to ensure that all the joint bases develop and provide training materials to incoming personnel on how installation services are provided on joint bases. Partial concur. DOD stated that it would ensure that each of the services is providing training materials to incoming personnel; however, joint base commanders need flexibility to tailor training to the needs of their installations. Complete. DOD added an annual meeting beginning in February 2013 for Joint Base commanders to discuss issues the bases are facing, and in August 2013 distributed contact information for all Joint Base commanders and Deputy Joint Base commanders to each of the joint bases. As a result, joint bases have had expanded opportunities to share information on best practices and lessons learned and to resolve common challenges. In part because the annual Joint Base commanders’ meeting takes place as part of an annual program review meeting with OSD, these actions together address the intent of this recommendation. Pending. In July 2015, an OSD official told us that OSD is taking action on our recommendation to develop and provide training materials to incoming joint base personnel. DOD has drafted a joint basing handbook, which has been signed off on by the Air Force and the Navy, to address inconsistent service-level guidance.
In addition, the Senior Installation Management Group now meets quarterly to handle conflicts between service policies and to address any challenges that have resulted in inefficiencies and inequities regarding efforts to consolidate installation-support functions. GAO recommendation GAO-11-814—Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts (Sept. 19, 2011). Develop and implement a methodology for calculating and recording utilization data for all types of facilities, and modify processes to update and verify the accuracy of reported utilization data to reflect a facility’s true status. Partial concur. DOD has already begun some efforts to improve its utilization data and will develop and implement appropriate procedures. DOD did not specify what actions it has completed to date or its time frames for completion. Develop strategies and measures to enhance the management of DOD’s excess facilities after the current demolition program ends, taking into account external factors that may affect future disposal efforts. Concur. DOD stated that it will work with the military departments to continue to develop and implement the most effective and efficient methods to eliminate excess facilities and excess capacity, but it did not provide any details or specific time frames for these efforts. Complete. In January 2014, the Under Secretary of Defense (Acquisition, Technology and Logistics) issued an update to DOD’s policy on inventory and accountability of real property assets. It includes procedures for inventory data requirements, such as accurate data submission in real time or near real time, and the creation of a Real Property Accountable Officer who is responsible for property inventory at the installation level. DOD’s corrective action plan and updated policy address our concerns with calculating, recording, updating, and verifying the accuracy of utilization data. Complete.
The services have incorporated demolition into their installation planning and other facility space management programs. For example, the Air Force has incorporated demolition as a key feature in its ongoing initiative to consolidate space and personnel, and to achieve a 20 percent reduction in its property inventory by 2020. Also, DOD is more proactively managing its processes to meet historic preservation requirements, to address environmental preservation concerns, and to expedite completion of required environmental mitigation. Further, the services have begun implementing a policy in line with a January 2014 update to DOD’s policy on inventory and accountability of real property assets, which clarified the roles and responsibilities of the officer responsible for managing property inventory at the installation level, including the requirement to ensure that all disposal records are accurately recorded. GAO recommendation GAO-11-165—Defense Infrastructure: High Level Federal Interagency Coordination Is Warranted to Address Transportation Needs Beyond the Scope of the Defense Access Roads Program (Jan. 26, 2011). Update regulations and clarify guidance for the Defense Access Roads certification and funding process; develop working-level guidance for potential program users; and effectively communicate the regulations and working-level guidance to all federal, state, and local stakeholders. Partial concur. DOD stated that although it will work with the Department of Transportation to update Defense Access Roads regulations and clarify guidance, it believes that sufficient guidance for and awareness of the program exists. Complete. In response to our recommendation, in August 2012 DOD and the Department of Transportation agreed to more closely coordinate approaches to transportation issues. 
Additionally, in March 2013, DOD officials stated that, based on the results of coordinating a potential change to the Defense Access Roads eligibility criteria, leadership determined that the best approach would be to direct the Defense Access Roads program to update its guidance to ensure that the existing criteria are applied flexibly, as has been the case for urban areas during the implementation of BRAC 2005. Lastly, in June 2013, the Under Secretary of Defense (Acquisition, Technology and Logistics) issued a memo directing the Defense Access Roads Program to update its guidance. In addition, the Military Surface Deployment and Distribution Command Defense Access Roads Program office has begun communicating directly with the commanders of each growth installation to address previously reported issues regarding lack of awareness of the Defense Access Roads Program. These actions will allow program guidance to be updated to include the program’s procedures and will ensure that the guidance is effectively communicated to all stakeholders so that the program can be used to its fullest extent. GAO recommendation Routinely coordinate with the Secretary of Transportation to meet regularly, identify all existing federal transportation funding resources, and develop a strategy for affording priority consideration for the use of those funds and other resources for the benefit of communities most severely affected by DOD. Original DOD response Partial concur. DOD stated that the department would continue to work closely with the Department of Transportation to assist communities affected by DOD actions but that the Department of Transportation does not have discretionary funds that it can use to target communities affected by DOD, and instead, state and local communities must advance defense-related transportation projects. DOD actions Complete.
In response to this recommendation, DOD hosted a meeting of the Economic Adjustment Committee in August 2012 to examine Defense Access Roads funding and coordination issues. An outcome of that meeting was consensus that, as DOD develops future re-stationing decisions, greater coordination with local planning entities is essential to assessing effects on transportation. In June 2013, the Under Secretary of Defense (Acquisition, Technology and Logistics) issued a letter to the congressional defense committees detailing the proposed plan for improving the Defense Access Roads Program. As stated in the plan, DOD’s goal is to improve the assessment of effects on transportation; enhance collaboration with planning entities; expand the range of mitigation measures, including joint funding opportunities; and promote additional measures for managing transportation demand. These actions will allow for the effective interagency and intergovernmental coordination that is needed to help address the unmet transportation needs of defense-affected communities. GAO-10-725R—Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs (July 21, 2010). Take steps to capture and appropriately report to Congress any BRAC-related implementation costs that are funded from outside the BRAC process. Concur. DOD noted that it is in the process of drafting new BRAC guidance, which will direct the services and defense agencies to provide a final accounting for all BRAC costs (both inside and outside of the account), among other items. Complete.
On August 5, 2010, the Deputy Undersecretary of Defense (Installations and Environment) issued a guidance memo to the military services and DOD agencies requiring all BRAC business plan managers to fully capture the costs and savings of BRAC 2005 by submitting a final BRAC financial display that captures all BRAC-related expenditures (both inside and outside the BRAC account), which will give Congress more visibility over all BRAC implementation costs. GAO recommendation GAO-10-602—Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth (June 24, 2010). Develop and implement guidance that requires the Army Criteria Tracking System to be updated as changes to facility design and criteria are made. Concur. DOD stated that the Army has already taken action to enhance the accuracy of its planning systems to better respond to changing requirements. Develop and implement policies and procedures for linking other systems, such as the Army Range Requirements Model and the Army Health Planning Agency’s system, to the Real Property Planning and Analysis System in order to eliminate any potential confusion as to the correct range and medical facility requirements. Concur. DOD stated that it plans to partly address our recommendation by fielding a comprehensive range planning tool. Complete. In May 2010 the Army incorporated the functionality of the Army Criteria Tracking System into its web-based Real Property Planning and Analysis System, thereby linking the two systems and ensuring that a review or update of one is also a review or update of the other. Because both systems are now web-based, changes to either can be made in real time.
Complete. The Army stated that as of June 2010, the Army Range Requirements Model was being used to generate the range requirements in the Real Property Planning and Analysis System and that because the Army Health Facility Planning Agency does not have an automated system to generate requirements, the Army was manually obtaining hospital requirements and inputting them into the Real Property Planning and Analysis System. These actions eliminated two sets of requirements for ranges and hospitals, reducing any potential confusion. GAO recommendation Develop a streamlined mechanism to expedite the flow of stationing information to installations. Original DOD response Concur. DOD stated that the Army has already initiated improvements in its process and is evaluating additional streamlining measures. Modify existing guidance to enhance communication between decision makers and installations so that installation facility planners are notified when stationing actions are changed. Concur. DOD stated that the Army has already initiated improvements in its communication process and that the department is evaluating additional measures to ensure that data integrity and transparency are achieved. DOD actions Complete. In January 2012, DOD reported that the Army continues to enhance the flow of stationing information. All unit moves are now combined by installation and by fiscal year, significantly reducing the number of actions being processed. In August 2010, the Army staff issued guidance to the field (Installation Management Command) that clarified formal lines of communication and established protocol to differentiate between official and unofficial taskings, enabling installation commanders to focus on approved official actions. All stakeholders are better involved in the early stages of force structure actions, force design updates, concept plans, and leadership direction. 
In April 2012, DOD reported that a copy of the August 2010 Army staff guidance that clarified formal lines of communication was provided to the field (Installation Management Command). Complete. In August 2010 the Army issued guidance to better synchronize installations’ participation in stationing efforts. Specifically, the guidance (1) clarified formal lines of communication to ensure that all stakeholders are better involved in the early stages of force structure actions and force design updates and (2) established protocols to enable communication between staff at installations and Army Headquarters during stationing action implementation to ensure efficient completion of stationing actions. As a result, we believe the Army’s actions met the intent of our recommendation. GAO-09-703—Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations (July 9, 2009). Remove savings estimates that are not clearly the direct result of 2005 BRAC actions (including savings sometimes referred to as “BRAC enabled”). Update its 4-year-old data to reflect the most recent estimate of inventory levels available for consolidation. Concur. DOD stated that such savings estimates will be removed from savings estimates reported in the August 2009 business plan submission. Concur. DOD stated that it will use the most recent estimate of inventory levels available and update the savings calculations for inventory reductions in its August 2009 business plan. Complete. In DOD’s 2009 biannual Business Plan, the Defense Logistics Agency had removed those savings from its estimates. Complete. In DOD’s 2009 biannual Business Plan, the Defense Logistics Agency used updated inventory levels in its current estimate for savings related to this BRAC recommendation. 
GAO recommendation Apply current information on the timing of inventory consolidations (specifically, when they will begin and how long they will take) and exclude projected savings for consolidating Army and Marine Corps inventories with the Defense Logistics Agency. Original DOD response Concur. DOD stated that savings calculations for projected inventory reductions will reflect the current schedule of consolidating materiel and will be updated in the August 2009 business plan. Moreover, DOD stated that the update will show that no Army or Marine Corps inventory is available for consolidation. Revise and finalize an approved methodology that implements these steps and can be consistently followed by all the services and the Defense Logistics Agency over time. Concur. DOD stated that the new calculations would be documented in the August 2009 business plan and that updates and revisions would be incorporated and staffed by the end of calendar year 2009. DOD actions Complete. In DOD’s August 2009 biannual Business Plan, the Defense Logistics Agency used current information regarding a later timetable for inventory consolidations and eliminated any savings from the Army and Marine Corps inventories because none will be available to consolidate. The resulting savings estimate will provide better information for congressional oversight and help maintain public confidence in the BRAC process. Complete. According to DOD, in 2010 and 2011, the department documented updates and revisions to the methodologies for projecting or tracking, or both, BRAC savings associated with the supply, storage, and distribution functions and inventories in the Cost and Savings Tracking Plan, which was in its second coordination cycle. GAO-09-336—Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses (March 30, 2009).
Periodically review the installation support standards as experience is gained with delivering installation support at the joint bases and make adjustments, if needed, to ensure that each standard reflects the level of service necessary to meet installation requirements as economically as possible. Partial concur. DOD stated that further action to implement the recommendation was not necessary because the joint base memorandum of agreement template already requires periodic reviews to ensure that installation support is delivered in accordance with appropriate common output-level standards. Periodically review administrative costs as joint basing is implemented to minimize any additional costs and prevent the loss of existing installation support efficiencies. Complete. In January 2011, DOD stated that the department now reviews the installation support standards annually for appropriateness, applicability, and performance. In addition to the annual review, the department implemented a cost and performance visibility framework under which the joint bases report how well the standards are being met. DOD stated that the reported information can assist in determining whether any adjustments need to be made to the standards. None planned. Partial concur. DOD stated that further action to implement the recommendation was not necessary because it had already established a process to periodically review joint basing costs as part of its planning, programming, budgeting, and execution system and that the joint base memorandum of agreement template requires periodic reviews of effects on missions and resources. GAO recommendation Complete a detailed analysis of the estimated installation support costs from the initial joint bases and report the results of the analysis to Congress in the department’s documents supporting the administration’s annual budget submission or other documents deemed appropriate. Original DOD response Partial concur.
DOD stated that it is collecting estimated installation support cost information at the joint bases and that the information will be provided if Congress requests it. Increase the attention given to facility sustainment spending by summarizing and reporting to Congress the amount of budgeted sustainment funds spent on other purposes in the department’s documents supporting the administration’s annual budget submission or other documents deemed appropriate. Partial concur. DOD stated that it would collect and summarize the amount of budgeted sustainment funds spent on other purposes and that the information would be provided if Congress requested it. DOD actions Complete. In July 2011, DOD stated that it had established procedures for collecting installation support costs at the 12 joint bases and, by using a cost and performance visibility framework, the joint bases report cost and manpower annually six weeks after the end of the fiscal year. According to DOD, the information is analyzed in conjunction with performance data reported quarterly, to get an overall assessment of how well the standards for installation support are being met and the costs associated with those standards. DOD stated that it will continue to respond to requests for information from Congress with regard to the joint basing initiative. Complete. In July 2011, DOD stated that the department was monitoring the budgeting and execution of facilities sustainment in order to determine how much of the funding budgeted for sustainment is diverted to other purposes. DOD also stated that the department was currently collecting information at a sampling of installations across DOD on the sustainment tasks that are deferred in a given year and that the information would help inform decision-making with regard to facilities sustainment funding. 
Finally, DOD previously stated that it would provide Congress with information on the amount of budgeted sustainment funds spent on other purposes, if Congress requests it. GAO recommendation GAO-09-217—Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates (Jan. 30, 2009). Modify the recently issued guidance on the status of BRAC implementation to establish a briefing schedule with briefings as frequently as OSD deems necessary to manage the risk that a particular recommendation may not meet the statutory deadline but at a minimum at 6-month intervals through the rest of the BRAC 2005 implementation period, a schedule that would enable DOD to continually assess and respond to the challenges identified by the services and defense agencies that could prevent DOD from completing the implementation of a recommendation by September 15, 2011. Modify the recently issued guidance on the status of BRAC implementation to require the services and defense agencies to provide information on possible mitigation measures to reduce the effects of those challenges. Concur. DOD noted that BRAC business managers had and would continue to provide briefings on the status of implementation actions associated with recommendations exceeding $100 million, and that these briefings provide a forum for BRAC business managers to explain their actions to mitigate challenges. Complete. The Deputy Under Secretary of Defense (Installations and Environment) issued a memo in November 2008 requiring the military services and defense agencies to provide the OSD BRAC Office status briefings. According to OSD, the briefings were needed to ensure senior leadership was apprised of significant issues affecting BRAC implementation by the statutory deadline. The first round of status briefings took place in December 2008. Concur. 
DOD noted that BRAC business managers had and would continue to provide briefings on the status of implementation actions associated with recommendations exceeding $100 million, and that these briefings provide a forum for BRAC business managers to explain their actions to mitigate challenges. Complete. According to DOD, in 2009 and 2010, the department required business managers to identify specific mitigation measures for BRAC recommendations with construction projects scheduled for completion within 3 months of the statutory deadline. The purpose of these mitigation measures is to reduce the risk of not completing implementation of a recommendation by the BRAC deadline. These mitigation measures are identified and monitored in a tracking tool to help ensure they are implemented and the risk is reduced. As appropriate, the DOD basing office conducts additional follow-up meetings with business managers for specific issues or follows up via other contacts that occur between the routine 6-month briefing intervals. This helps ensure that DOD is making progress and that implementation of recommendations is on track. As part of this process, six recommendations were identified as having particular risk. DOD briefed these six recommendations to key Senate and House staff in March 2010. GAO recommendation Take steps to improve compliance with DOD’s regulation requiring updated BRAC savings estimates. Original DOD response Concur. The department stated that it is emphasizing savings updates during its briefings and in all future business plan approval documentation. DOD actions Complete. On August 5, 2010, the Deputy Under Secretary of Defense (Installations and Environment) issued a guidance memo to the military services and DOD agencies regarding BRAC 2005 Final Business Plans and Other Reporting Requirements.
Among other things, this guidance emphasized to the military services and defense agencies that it is imperative that the final financial displays for BRAC 2005 contain updated projections of recurring savings. GAO-08-665—Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth (June 17, 2008). Develop and implement guidance—no later than the end of fiscal year 2008—that is consistent with DOD Directive 5410.12 for the timely, complete, and consistent dissemination of DOD planning information such as estimated timelines and numbers of personnel relocating, as well as demographic data such as numbers of school-aged children, and update this information quarterly. Concur. Although DOD indicated it would continue to work with the cognizant DOD components to ensure compliance with the directive, actions taken to date have not resulted in the military services’ development and implementation of guidance that we believe is necessary for providing more complete and consistent personnel relocation planning data to affected communities. Moreover, DOD did not explicitly say what steps it intends to take to ensure that the military services have implemented such guidance by the end of fiscal year 2008. With respect to our recommended action to provide information updates on a quarterly basis, DOD indicated that not all situations are conducive to quarterly updates. Complete. From January through March 2011, the military services and the head of the Defense Logistics Agency issued guidance for the timely, complete, and consistent dissemination of DOD planning information, such as military and civilian personnel changes and school-age children increases and decreases in accordance with DOD Directive 5410.12.
Although DOD missed the deadline for implementing our recommendation, issuing this guidance facilitates the preparation of effective plans to minimize the economic impacts on communities resulting from changes in defense programs. GAO recommendation Implement Executive Order 12788 by holding regular meetings of the full executive-level Economic Adjustment Committee and by serving as a clearinghouse of information for identifying expected community effects and problems, as well as identifying existing resources for providing economic assistance to communities affected by DOD activities. In addition, this information should be updated at least quarterly and made easily available to all interested stakeholders at the local, state, and federal levels. Original DOD response Concur. DOD stated that it will develop an information clearinghouse that will identify federal programs and resources to affected communities, present successful state and local responses, and provide the Economic Adjustment Committee members with a basis to resource their assistance programs. Based on DOD’s comments, it is unclear whether DOD, as chair of the Economic Adjustment Committee, intends to call and periodically hold meetings of the full executive-level committee to provide the high-level federal leadership that we believe is necessary to more effectively coordinate federal agency assistance to impacted communities. DOD actions Complete. DOD regularly reconvened the full executive-level Economic Adjustment Committee meetings from February 25, 2009, to September 2, 2010, and completed actions that met the intent of our recommendation by establishing a clearinghouse website in December 2009 to support states and communities undertaking local economic adjustment activity and federal agencies working to support such activities.
By reconvening the full executive-level Economic Adjustment Committee and setting up the clearinghouse website, DOD increased its ability to engage other federal agencies at a high level to promote interagency and intergovernmental cooperation and share information on a continual basis. DOD activated a publicly accessible website in December 2008 (www.eaclearinghouse.gov), managed by the Office of Economic Adjustment, which contains information such as service migration information, information on federal agency assistance programs, community profiles, and community redevelopment plans. GAO-08-315—Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations (March 5, 2008). Revise its business plans to exclude all expected savings that are not the direct result of BRAC actions. Non-concur. DOD stated that while the $172 million in potential savings for implementing the supply, storage, and distribution recommendation and the $71 million in potential savings for implementing the depot-level reparable recommendation were not directly the result of BRAC actions, the estimated savings were enabled by BRAC actions and should be attributable to the recommendations. According to DOD, enabled savings are savings initiatives that were enhanced in some way by the BRAC implementation actions (e.g., increased scope, more aggressively pursued, or moved in new directions). None planned. Original DOD response Concur. clear metrics for measuring the magnitude of actual costs and savings, a comparison of the actual costs and savings to the prior estimates to coincide with the required semiannual business plan updates, and explanations for actual cost and savings variances from estimates presented in the business plans. DOD actions Complete.
According to DOD, in 2009, the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics) established a standard DOD format for measuring the magnitude of actual costs and savings, and required DOD components to submit business plans in February and August that compare current costs and savings with prior estimates and justify any changes, by funding category. The Defense Logistics Agency has since updated costs and savings for BRAC recommendations on a semiannual basis synchronized with the programming and budget cycles and compared actual costs and savings to prior year estimates. The magnitude of actual costs and savings is collected in a relational database that was developed to compare actual costs and savings to prior year estimates. The database has data on BRAC Recommendation 176 (Depot-Level Reparable Management) and BRAC Recommendation 177 (Supply, Storage, and Distribution Reconfiguration). For example, in the February 2009 business plans for BRAC Recommendation 176 and BRAC Recommendation 177, the Defense Logistics Agency compared costs and savings to prior estimates for each funding category and, when there was a variance in a funding category, included an explanation for the change in costs and savings. GAO recommendation Ensure that necessary funding to meet implementation milestones is reflected in all service and Defense Logistics Agency budget submissions for the remainder of the implementation period ending in fiscal year 2011. Original DOD response Concur. DOD actions Complete. According to DOD, the BRAC decision memorandums provide the resources to fully fund implementation during the 6-year BRAC implementation statutory period. Annually, the DOD BRAC office goes through an extensive analysis to compare each business plan requirement to program funding (Program Review). If funding shortfalls are identified, the components are directed via a Program Decision Memorandum to fully fund requirements.
The Office of the Under Secretary of Defense (Acquisition, Technology and Logistics) issued a June 22, 2007, memorandum directing DOD Components to fully fund BRAC implementation during the 6-year statutory period. GAO-08-159—Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve (Dec. 11, 2007). Explain, in DOD’s BRAC budget submission to Congress, the difference between annual recurring savings attributable to military personnel entitlements and annual recurring savings that will readily result in funds available for other defense priorities. Concur. DOD noted that military personnel reductions attributable to a BRAC recommendation are savings as real as savings generated through end-strength reductions. DOD also stated that while it may not reduce overall end strength, its reductions in military personnel for each recommendation at a specific location are real and these personnel reductions allow the department to reapply these military personnel to support new capabilities and improve operational efficiencies. Complete. The fiscal year 2009 DOD budget estimates for BRAC 2005 included language that stated, “To the extent that savings generated from military personnel reductions at closing or realigning installations are immediately used to fund military personnel priorities, these resources are not available to fund other Defense priorities.” Such language was not included in the prior year (fiscal year 2008) budget submittal to Congress. The Office of the Secretary of Defense stated that the insertion of this language would provide a better explanation to Congress of its estimated annual recurring savings resulting from BRAC. GAO recommendation GAO-07-1040—Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers (Sept. 13, 2007).
Develop a plan for routinely bringing together the various stakeholders as a group, to include the state Army National Guard when appropriate, to monitor for and develop steps to mitigate implementation challenges should they occur. These steps should include ways to monitor and mitigate the effects of potential challenges on BRAC completion time frames, project cost and scope, construction quality, and capacity of the facility to meet changing mission requirements. Partial concur. DOD believes that GAO overlooked the various groups, forums, or plans that the Army has in place to assist with BRAC execution and management. DOD stated that the Army already has a plan in place to bring the various stakeholders together; however, Army BRAC headquarters officials acknowledged that they could be more proactive in reaching out and communicating with the stakeholders on how to deal with and mitigate particular challenges associated with constructing 125 Armed Forces Reserve Centers. DOD also stated that the Army BRAC office will begin quarterly BRAC program reviews with the Assistant Secretary of the Army for Installations and Environment, which will further provide a forum for discussing and vetting issues affecting the BRAC program. Complete. The Army BRAC Office has taken several steps to implement the recommendation. In March 2009, the Army BRAC Office provided a BRAC 2005 program update to the Army Vice Chief of Staff, with representation from the Army National Guard and Reserves. In addition, the Army BRAC Division Reserve Component Branch, the Army Reserve Division, and the full-time Army National Guard and Army Reserve liaisons assigned to the Army BRAC Office collaborated at BRAC summits in October 2009 and April 2010, where issues affecting U.S. Army Reserve Command were discussed, with the Army National Guard and Army Reserve Command presenting their concerns.
GAO-07-1007—Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth (Sept. 13, 2007). Determine why there are data differences between headquarters and gaining bases with respect to the number of arriving and departing personnel. Partial concur. DOD stated that the Army had determined the cause of the differences and taken corrective action by establishing the Army Stationing Installation Plan (ASIP) as the single, unified source of installation planning population data to be used Army-wide. Complete. In January 2007, the Army designated the ASIP as the single, unified source of installation planning population data to be used Army-wide. In May 2008, the Army issued guidance that helped reduce the differences between the populations reported by Headquarters and the installations by ensuring that ASIP population data be used for reporting external to the Army and allowing pre-decisional unit moves to be used for internal planning. Lastly, in a memorandum of agreement signed in May 2009, the Army established an ASIP quarterly edit cycle to resolve discrepancies between Army official force structure data and the “on the ground” situation. GAO recommendation Ensure that Army headquarters and base officials are collaborating to agree on Army personnel movement plans so that base commanders and surrounding communities can effectively plan for expected growth. This collaboration to reach agreement should continue as expected personnel movement actions are revised over time. Original DOD response Partial concur. DOD stated that the Army had already taken corrective action. The Army stated that in May 2007 it issued guidance that allowed installations to plan for anticipated unit moves that may not be reflected in the ASIP and to discuss these plans with local communities as long as they are appropriately identified as pre-decisional and subject to change.
Army officials also stated that, in June 2007, they would ensure that installations forward all population and stationing issues to the Department of the Army headquarters for resolution. DOD actions Complete. In May 2007, the Army issued guidance that allowed installations to plan for anticipated moves that may not be reflected in the ASIP and to discuss these plans with local communities as long as they are appropriately identified as pre-decisional and subject to change. In addition, in May 2009, the Army issued a memorandum of agreement between the Office of the Assistant Chief of Staff for Installation Management and the Office of the Deputy Chief of Staff G-3/5/7 to close information gaps and improve timely reconciliation of disparate data among installation planners, force planners, and headquarters. The memorandum established an ASIP quarterly edit cycle to resolve discrepancies between Army official force structure data and the “on the ground” situation. GAO-07-641—Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations (May 16, 2007). Develop a mitigation strategy to be shared with key stakeholders that anticipates, identifies, and addresses related implementation challenges. At a minimum, this strategy should include time frames for actions and responsibilities for each challenge, and facilitate the ability of Air National Guard headquarters officials to act to mitigate potential delays in interim milestones. Partial concur. DOD suggested a modification to the recommendation to clarify that the director, Air National Guard, is normally tasked by the Chief, National Guard Bureau. DOD also stated that mitigation plans cannot be released until they have been thoroughly vetted with all of the key stakeholders. Complete.
The National Guard Bureau implemented a Strategic Communication Plan that provides affected units with the information they need to successfully complete BRAC actions and develop opportunities for follow-on missions at BRAC-affected locations. The Air National Guard Strategic Planning process, which is based on state involvement at all levels of the planning process, is the cornerstone and allows states to provide input to the Air National Guard Strategic Plan and ensures that states have the necessary information to implement those plans. The National Guard Bureau Strategic Communication Plan also incorporates Air Force communications. GAO recommendation Expand the Strategic Communication Plan to include how the Air National Guard headquarters will provide the affected Air National Guard units with the information needed to implement the BRAC-related actions. Original DOD response Partial concur. DOD stated it is incumbent upon the Air National Guard and all affected units to maximize established chains of leadership and communication to effectively manage and execute BRAC actions. The Director, Air National Guard, acknowledges that there are challenges in communicating with the units and that some unit commanders may not have the information that they feel they need to implement the BRAC recommendation and their new missions. DOD actions Complete. The National Guard Bureau, an oversight organization over the Air National Guard, is providing key stakeholders with access to detailed BRAC implementation action timelines and programming plans, including BRAC contacts at each Air National Guard-affected base. Further, the Air National Guard Strategic Communication Playbook, which was updated in 2009, now focuses leadership attention on various strategic priorities, including the implementation of Air National Guard BRAC recommendations.
In addition, the Air National Guard Strategic Planning Process includes both Air Force-level and National Guard Bureau-level communication with various state-level Adjutants General about BRAC implementation. As such, the Air Force Chief of Staff and Air National Guard Director have hosted a meeting for all state-level Adjutants General to discuss BRAC actions. As a result of implementing our recommendation, Air National Guard headquarters’ ability to identify strategies and determine resources needed to effectively meet BRAC goals has improved. None planned. Report in the Air Force annual BRAC budget submission the costs and source of funding required to establish replacement missions for the Air National Guard units that will lose their flying missions as a result of BRAC 2005. Non-concur. DOD does not believe these costs are BRAC-related, because establishment of replacement missions was not part of the recommendations. DOD stated that BRAC funds cannot be used to establish these missions and that the costs in question have been appropriately programmed and budgeted in the Air Force’s regular military construction account. GAO recommendation GAO-07-304—Military Base Closures: Projected Savings from Fleet Readiness Centers Are Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges (June 29, 2007). Update the business plan for the fleet readiness centers to (1) reflect only savings that are directly related to implementing the recommendation and (2) update projected one-time savings when data are available. Concur. DOD stated it considers military personnel reductions attributable to BRAC recommendations as savings that are just as real as savings generated through end-strength reductions. While the department may not reduce overall end-strength, it believes that the reductions in military personnel for each recommendation at a specific location are real. Complete.
The Commander, Fleet Readiness Centers, updated the business plan in August 2009 to reflect savings directly related to the BRAC action to establish fleet readiness centers. The Navy updated projected savings directly related to implementing the recommendation, showing that overall savings projections of $1.151 billion from the August 2007 version of the business plan should not change, since changes to projected savings targets in some of the six Fleet Readiness Center locations that exceeded savings targets in some years were offset by the inability to meet savings targets at other locations or in other years. The Navy updated projected one-time savings when data became available by changing some savings projected in the 2009 version of the business plan (from our recommendation to re-categorize approximately $25 million per year from recurring savings) to one-time savings. GAO recommendation Monitor implementation of the recommendation to determine the extent to which savings already taken from the Navy budget are actually achieved. Original DOD response Concur. DOD actions Complete. The Navy has demonstrated sustained leadership devoted to implementing the BRAC recommendation for establishing Fleet Readiness Centers, as evidenced by successive leaders who have developed implementation plans and completed each phase of implementation over time. In addition, the Navy’s implementation guidance for Fleet Readiness Centers specifies that key measures include, in part, achieving savings targets. As a result, the Navy’s monthly report to the Fleet Readiness Center Commanders includes an analysis of the variance between savings projected and those actually achieved at the six Fleet Readiness Centers. These reports provide objective, outcome-oriented metrics for improving readiness and detailing six separate savings categories. 
Commanding Officers or Officers-in-Charge of specific centers are evaluated for their results and held accountable for achieving savings targets. Management tools developed by the implementation team for Fleet Readiness Centers have supported the identification of additional opportunities to realize savings. Continuing efforts to monitor implementation and develop mechanisms to improve performance and accountability have allowed the Navy to determine the extent to which savings already taken from the Navy budget for aircraft maintenance are actually achieved. GAO recommendation GAO-07-166—Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property (Jan. 30, 2007). Report all costs (Defense Environmental Restoration Program and non-Defense Environmental Restoration Program)—past and future—required to complete environmental cleanup at each BRAC installation and to fully explain the scope and limitations of all the environmental cleanup costs DOD reports to Congress. We suggest including this information in the annual BRAC budget justification documentation, since it would accompany information Congress considers when making resource allocation decisions. Concur. DOD concurred with our basic recommendation; however, DOD’s comments reflect only a partial concurrence, because DOD did not agree with our suggestion to include this information in the annual BRAC budget justification documentation. DOD stated its belief that this would be counterproductive and that Congress has prescribed the types of environmental information it wants presented in the budget documentation, with which DOD complies. Require that the military services periodically report to OSD on the status and proposed strategy for transferring unneeded BRAC properties and include an assessment of the usefulness of all tools at their disposal.
We suggest placing this information in an easily shared location, such as a website, so that each service, and even the local communities and private sector, can share and benefit from lessons learned.

Original DOD response: Concur. DOD concurred with our recommendation to require the military services to periodically report to the Office of the Secretary of Defense on the status and proposed strategy for transferring BRAC properties and to include an assessment of the usefulness of all tools at their disposal. Although DOD did not comment on our suggestion to accomplish this through a shared website in order to maximize the sharing of lessons learned, DOD officials embraced the idea as something easy to do in comments made during our exit interview with the department.

DOD actions (cost reporting recommendation): Complete. DOD stated that in October 2008, the Assistant Deputy Under Secretary of Defense for the Environment, Safety, and Occupational Health determined that the Annual Report to Congress is the appropriate and best format to provide Congress with cleanup information on the DOD BRAC environmental programs. The annual report data are updated annually, via the electronic reporting system, from the DOD components to the Deputy Under Secretary of Defense for Installations and Environment. The 2007 annual report provided BRAC site cost data through FY2007 and the estimated cost to complete for FY2008. The annual report is a comprehensive document designed to answer the many stakeholder questions that have developed over the many years of executing BRAC cleanup. The cost and budget data that appear in the annual report also appear in the annual budget justification submitted to Congress in support of the President’s Budget Request.

DOD actions (property transfer recommendation): Complete. According to DOD, military departments are required to report on the status of all excess real property and to include the available acreages and the authority under which the land was transferred, conveyed, or otherwise disposed of.
In June 2011, we contacted the responsible OSD office and were provided sufficient evidence that all four military services are now (within the last two years) reporting the status of excess real property to OSD. In addition, the DOD Inspector General’s written response of February 25, 2011, closing out our recommendation, stated that the Deputy Under Secretary of Defense (Installations and Environment) continually reviews the need for new authorities and changes to existing authorities.

GAO-05-785—Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments (July 1, 2005).

GAO recommendation: Establish mechanisms for tracking and periodically updating savings estimates in implementing individual recommendations, with emphasis both on savings related to the more traditional realignment and closure actions and on those related more to business process reengineering.

Original DOD response: Concur. No written comments were provided, but in oral comments on a draft of this report, the Deputy Under Secretary of Defense for Installations and Environment concurred with our recommendation.

DOD actions: Complete. The Joint Action Scenario Team, a joint team DOD set up to develop and propose various joint reserve component recommended actions, incorporated our suggestions to include specific information in its summary reports and supporting documentation in order to withstand scrutiny and provide outside parties—including us and the military service audit agencies—a clear understanding of the process leading to the ultimate decisions regarding recommended BRAC actions.

GAO-04-760—Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round (May 17, 2004).
GAO recommendation: Include in the Secretary of Defense’s May 2005 report on recommendations for base closures and realignments a full discussion of relevant assumptions and allowances made for potential future force structure requirements and changes, including the potential for future surge requirements.

Original DOD response: Concur.

DOD actions: Complete. The Secretary of Defense’s May 2005 report to the BRAC Commission addressed several of these factors. For example, the report contained a discussion of the current and future national security threats the department considered during its deliberations. In addition, the report included a copy of the Secretary of Defense’s January 2005 “Policy Memoranda Seven – Surge,” which outlined five steps DOD would take to meet the statutory requirement to consider a surge in the development of BRAC recommendations. Further, some of the military departments and joint cross-service groups discussed during their analyses the steps they took to incorporate the possibility of future surge requirements.

In addition to the contact named above, Gina Hoffman (Assistant Director), Leslie Bharadwaja, Michele Fejfar, Joanne Landesman, Amie Lesser, Stephanie Moriarty, Carol Petersen, Matthew Spiers, and Michael Willems made key contributions to this report.

Military Base Realignments and Closures: Process for Reusing Property for Homeless Assistance Needs Improvements. GAO-15-274. Washington, D.C.: March 16, 2015.
DOD Joint Bases: Implementation Challenges Demonstrate Need to Reevaluate the Program. GAO-14-577. Washington, D.C.: September 19, 2014.
Defense Health Care Reform: Actions Needed to Help Realize Potential Cost Savings from Medical Education and Training. GAO-14-630. Washington, D.C.: July 31, 2014.
Defense Infrastructure: Communities Need Additional Guidance and Information to Improve Their Ability to Adjust to DOD Installation Closure or Growth. GAO-13-436. Washington, D.C.: May 14, 2013.
Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.
DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies. GAO-13-134. Washington, D.C.: November 15, 2012.
Military Base Realignments and Closures: The National Geospatial-Intelligence Agency’s Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.
Defense Health Care: Applying Key Management Practices Should Help Achieve Efficiencies within the Military Health System. GAO-12-224. Washington, D.C.: April 12, 2012.
Military Base Realignments and Closures: Key Factors Contributing to BRAC 2005 Results. GAO-12-513T. Washington, D.C.: March 8, 2012.
Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.
Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011.
GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011.
Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011.
Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010.
Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602. Washington, D.C.: June 24, 2010.
Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009.
Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009.
Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009.
Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009.
Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009.
Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.
Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.
Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.
Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.
Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.
Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.
Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.
Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.
Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.
Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.
Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.
Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.
Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.
|
The 2005 BRAC round was the fifth round of base closures and realignments undertaken by DOD since 1988, and it was the largest, most complex, and costliest. DOD has relied on the BRAC process to reduce excess infrastructure and realign bases to meet changing force structure needs. According to the Secretary of Defense, BRAC 2005 provided opportunities to foster jointness among the military services. House Report 113-446 included a provision for GAO to review the status of BRAC 2005 recommendations to reduce infrastructure and promote opportunities for jointness. This report evaluates the extent to which DOD has (1) implemented the recommendations requiring the services to relocate select training functions to increase opportunities for jointness and (2) determined whether implementing these recommendations has achieved cost savings. GAO reviewed guidance, course listings, and cost data, and interviewed DOD and service officials. For each of the six recommendations GAO reviewed from the 2005 Base Realignment and Closure (BRAC) round, the Department of Defense (DOD) implemented the recommendations by requiring military services to relocate select training functions; however, GAO found that only two of the six training functions reviewed took advantage of the opportunity provided by BRAC to consolidate training so that services could train jointly. In implementing the remaining four BRAC recommendations, DOD relocated similar training functions run by separate military services into one location, but the services did not consolidate the training functions. For example, they do not regularly coordinate or share information on their training goals and curriculums. DOD's justification for numerous 2005 BRAC recommendations included the assumption that realigning military department activities to one location would enhance jointness—defined by DOD as activities, operations, or organizations in which elements of two or more military departments participate.
For these four training functions, DOD missed the opportunity to consolidate training to increase jointness because its guidance addressed moving personnel and constructing buildings but not measuring progress toward consolidated training. Without additional guidance for consolidating training, the services will not be positioned to take advantage of such opportunities in these types of recommendations as proposed by DOD and will face challenges encouraging joint training activities and collaboration across services. DOD cannot determine if implementing the 2005 BRAC joint training recommendations that GAO reviewed has resulted in savings in operating costs. For three of the recommendations in this review, the services did not develop baseline operating costs before implementing the BRAC recommendations, which would have enabled DOD to determine whether savings were achieved. Without developing baseline cost data, DOD will be unable to estimate any cost savings resulting from similar recommendations in any future BRAC rounds. Further, costs reported to DOD by the training functions' business plan managers for implementation of two of the six recommendations in this review likely did not include all BRAC-related costs funded from outside the BRAC account. A DOD memo requires BRAC business plan managers to submit all BRAC-related expenditures, including those funded from both inside and outside of the BRAC account. GAO identified at least $110 million in implementation costs that likely should have been reported to DOD in accordance with the memo but were not; therefore, the $35.1 billion total cost reported for BRAC 2005 is likely somewhat understated. A DOD official stated that it was up to the military departments to ensure that all BRAC implementation costs were accounted for and that the military departments had the flexibility to determine which costs were associated with the BRAC recommendation and which were attributable to other actions.
GAO found that this flexibility in determining which costs were to be reported as BRAC costs led to inconsistencies in which projects' costs were counted as BRAC implementation costs. By clarifying in guidance what is to be included as a BRAC implementation cost, DOD can help ensure that it has an accurate accounting of the final costs of any future BRAC implementation and that DOD and Congress are able to determine how much money is spent on any future BRAC rounds. To help improve the implementation of jointness-focused recommendations in any future BRAC rounds, GAO recommends that DOD provide additional guidance for consolidating training and reporting BRAC costs and require the development of baseline cost data. DOD partially concurred with the recommendation to clarify guidance for reporting BRAC costs but did not concur with the other recommendations, stating that GAO misunderstood its approach to joint training. GAO believes its findings and recommendations are valid and addresses these points in the report.
|
IT can enrich people’s lives and improve organizational performance. For example, during the last two decades the Internet has matured from being a means for academics and scientists to communicate with each other to a national resource where citizens can interact with their government in many ways, such as by receiving services, supplying and obtaining information, asking questions, and providing comments on proposed rules. While investments in IT have the potential to improve lives and organizations, some federally funded IT projects can—and have—become risky, costly, unproductive mistakes. As we have described in numerous reports and testimonies, federal IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. Further, while IT should enable government to better serve the American people, the federal government has not achieved expected productivity improvements—despite spending more than $600 billion on IT over the past decade. Over the last two decades, Congress has enacted several laws to assist agencies and the federal government in managing IT investments. Key laws include the Paperwork Reduction Act of 1995, the Clinger-Cohen Act of 1996, and the E-Government Act of 2002. Also, the GPRA (Government Performance and Results Act) Modernization Act of 2010 includes IT management as a priority goal for improving the federal government. Paperwork Reduction Act of 1995. The act specifies OMB and agency responsibilities for managing information resources, including the management of IT. Among its provisions, this law establishes agency responsibility for maximizing the value and assessing and managing the risks of major information systems initiatives. It also requires that OMB develop and oversee policies, principles, standards, and guidelines for federal agency IT functions, including periodic evaluations of major information systems. Clinger-Cohen Act of 1996.
The act places responsibility for managing investments with the heads of agencies and establishes CIOs to advise and assist agency heads in carrying out this responsibility. Additionally, this law requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments. E-Government Act of 2002. The act establishes a federal e-government initiative, which encourages the use of web-based Internet applications to enhance the access to and delivery of government information and services to citizens, business partners, employees, and agencies at all levels of government. The act also requires OMB to report annually to Congress on the status of e-government initiatives. In these reports, OMB is to describe the administration’s use of e-government principles to improve government performance and the delivery of information and services to the public. GPRA (Government Performance and Results Act) Modernization Act of 2010. The act establishes a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. It requires OMB, in coordination with agencies, to develop long-term, outcome-oriented goals for a limited number of crosscutting policy areas at least every four years. The act specifies that these goals should include five areas: financial management, human capital management, IT management, procurement and acquisition management, and real property management. On an annual basis, OMB is to provide information on how these long-term crosscutting goals will be achieved.
As set out in these laws, OMB is to play a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. Within OMB, the Office of E-government and Information Technology, headed by the Federal CIO, directs the policy and strategic planning of federal IT investments and is responsible for oversight of federal technology spending. In addition, the Office of Federal Procurement Policy (OFPP) is responsible for shaping the policies and practices federal agencies use to acquire the goods and services they need to carry out their missions. Agency CIOs are also expected to have a key role in IT management. Federal law, specifically the Clinger-Cohen Act, has defined the role of the CIO as the focal point for IT management, requiring agency heads to designate CIOs to lead reforms that would help control system development risks; better manage technology spending; and achieve real, measurable improvements in agency performance. In addition, the CIO Council—comprised of the CIOs and Deputy CIOs of 28 agencies and chaired by OMB’s Deputy Director for Management—is the principal interagency forum for improving agency practices related to the design, acquisition, development, modernization, use, sharing, and performance of federal information resources. The CIO Council is responsible for developing recommendations for overall federal IT management policy, sharing best practices, including the development of performance measures, and identifying opportunities and sponsoring cooperation in using information resources. After assessing the most persistent challenges in acquiring, managing, and operating IT systems, in December 2010, the Federal CIO established a 25-point IT Reform Plan designed to address challenges in IT acquisition, improve operational efficiencies, and deliver more IT value to the American taxpayer. 
The actions were planned to be completed in three different time frames: (1) within 6 months (by June 2011), (2) between 6 and 12 months (by December 2011), and (3) between 12 and 18 months (by June 2012). Several different organizations were assigned ownership of the key action items, including the Federal CIO, the CIO Council, the General Services Administration (GSA), Office of Personnel Management (OPM), OFPP, the Small Business Administration, and other federal agencies. Table 1 contains detailed information on the action items in the IT Reform Plan. Shaded items are those selected for review in this report. Given the challenges that federal agencies have experienced in acquiring and managing IT investments, we have issued a series of reports aimed at improving federal IT management over the last decade. Our reports cover a variety of topics, including data center consolidation, cloud computing, CIO responsibilities, system acquisition challenges, and modular development. Key reports that address topics covered in the IT Reform Plan include: Data center consolidation. In July 2011, we reported on the status of OMB’s federal data center consolidation initiative. Under this initiative, OMB required 24 participating agencies to submit data center inventories and consolidation plans by the end of August 2010. However, we found that only one of the agencies submitted a complete data center inventory and no agency submitted a complete data center consolidation plan. We concluded that until these inventories and plans are complete, agencies might not be able to implement their consolidation activities and realize expected cost savings. We recommended that agencies complete the missing elements in their plans and inventories. In response to our recommendations, in October and November 2011, the agencies updated their inventories and plans. 
We have ongoing work assessing the agencies’ revised plans, and in February 2012, we reported that our preliminary assessment of the updated plans showed that not all agency plans were updated to include all required information. Cloud computing. In May 2010, we reported on multiple agencies’ efforts to ensure the security of governmentwide cloud computing. We noted that while OMB, GSA, and the National Institute of Standards and Technology (NIST) had initiated efforts to ensure secure cloud computing, significant work remained to be completed. OMB had not yet finished a cloud computing strategy; GSA had begun a procurement for expanding cloud computing services, but had not yet developed specific plans for establishing a shared information security assessment and authorization process; and NIST had not yet issued cloud-specific security guidance. We made several recommendations to address these issues. Specifically, we recommended that OMB establish milestones to complete a strategy for federal cloud computing and ensure it addressed information security challenges. OMB subsequently published a strategy which addressed the importance of information security when using cloud computing, but did not fully address several key challenges confronting agencies. We also recommended that GSA consider security in its procurement for cloud services, including consideration of a shared assessment and authorization process. GSA has since developed an assessment and authorization process for systems shared among federal agencies. Finally, we recommended that NIST issue guidance specific to cloud computing security. NIST has since issued multiple publications which address such guidance. More recently, in October 2011, we testified that 22 of 24 major federal agencies reported that they were either concerned or very concerned about the potential information security risks associated with cloud computing.
These risks include being dependent on the security practices and assurances of vendors and the sharing of computing resources. We stated that these risks may vary based on the cloud deployment model. Private clouds, whereby the service is set up specifically for one organization, may have a lower threat exposure than public clouds, whereby the service is available to any paying customer. Evaluating this risk requires an examination of the specific security controls in place for the cloud’s implementation. We also reported that the CIO Council had established a cloud computing Executive Steering Committee to promote the use of cloud computing in the federal government, with technical and administrative support provided by GSA’s Cloud Computing Program Management Office, but had not finalized key processes or guidance. A subgroup of this committee had developed the Federal Risk and Authorization Management Program, a governmentwide program to provide joint authorizations and continuous security monitoring services for all federal agencies, with an initial focus on cloud computing. The subgroup had worked with its members to define interagency security requirements for cloud systems and services and related information security controls. Best practices in IT acquisition. In October 2011, we reported on best practices in IT acquisitions in the federal government. Specifically, we identified nine factors critical to the success of three or more of seven IT investments. The factors most commonly identified include active engagement of stakeholders, program staff with the necessary knowledge and skills, and senior department and agency executive support for the program. We reported that while these factors will not necessarily ensure that federal agencies will successfully acquire IT systems, because many different factors contribute to successful acquisitions, they may help federal agencies address the well-documented acquisition challenges they face.
IT spending authority. In February 2008, we reported that the Department of Veterans Affairs had taken important steps toward a more disciplined approach to ensuring oversight of and accountability for the department’s IT budget and resources (GAO, Information Technology: VA Has Taken Important Steps to Centralize Control of Its Resources, but Effectiveness Depends on Additional Planned Actions, GAO-08-449T, Washington, D.C.: Feb. 13, 2008). These steps included providing the department’s CIO with responsibility for ensuring that there are controls over the budget and for overseeing all capital planning and execution, and designating leadership to assist in overseeing functions such as portfolio management. Investment review and oversight. During the past several years, we issued numerous reports and testimonies on OMB’s initiatives to highlight troubled IT projects. We made multiple recommendations to OMB and federal agencies to enhance the oversight and transparency of federal IT projects. For example, in 2005 we recommended that OMB develop a central list of projects and their deficiencies, and analyze that list to develop governmentwide and agency assessments of the progress and risks of the investments, identifying opportunities for continued improvement. In 2006, we recommended that OMB develop a single aggregate list of high-risk projects and their deficiencies and use that list to report to Congress on progress made in correcting high-risk problems. As a result, OMB started publicly releasing aggregate data on its internal list of mission-critical projects that needed to improve (called its Management Watch List) and disclosing the projects’ deficiencies. The agency also established a High-Risk List, which consisted of projects identified as requiring special attention from oversight authorities and the highest levels of agency management. Two different budget submissions, called exhibit 53s and exhibit 300s, provide the data accessible through the IT Dashboard.
Exhibit 53s list all of the IT investments and their associated costs within a federal organization. An Exhibit 300, also called the Capital Asset Plan and Business Case, is used to justify resource requests for major IT investments and is intended to enable an agency to demonstrate, to its own management and to OMB, that a major investment is well planned. We have since completed three successive reviews of the data on the IT Dashboard and reported that while it is an important tool for reporting and monitoring major IT projects, the cost and schedule ratings were not always accurate for selected agencies. We made recommendations to improve the accuracy of the data and, in our most recent report, found that the accuracy had improved. In Information Technology: OMB Needs to Improve Its Guidance on IT Investments (GAO-11-826, Washington, D.C.: Sept. 29, 2011), we also recommended that OMB update its guidance to establish measures of accountability for ensuring that CIOs’ responsibilities are fully implemented and require agencies to establish internal processes for documenting lessons learned. OMB officials generally agreed with our recommendations and, in August 2011, issued a memo to agencies emphasizing the CIO’s role in driving the investment review process and responsibility over the entire IT portfolio for an agency. The memo identified four areas in which the CIO should have a lead role: IT governance, program management, commodity services, and information security. OMB and key federal agencies have made progress on selected action items identified in the IT Reform Plan, but there are several areas where more remains to be done. Of the 10 key action items we reviewed, 3 were completed and the other 7 were partially completed by December 2011. The action items that are behind schedule share a common reason for the delays: the complexity of the initiatives. In all seven of the cases, OMB and the federal agencies are still working on the initiatives.
In a December 2011 progress report on its IT Reform Plan, OMB reported that it made greater progress than we determined. The agency reported that of the 10 action items, 7 were completed and 3 were partially completed. OMB officials from the Office of E-government and Information Technology explained that the reason for the difference in assessments is that they believe that the IT Reform Plan has served its purpose in acting as a catalyst for a set of broader initiatives. They noted that work will continue on all of the initiatives even after OMB declares the related action items to be completed under the IT Reform Plan. We disagree with this approach. In prematurely declaring the action items to be completed, OMB risks losing momentum on the progress it has made to date. Table 2 provides both OMB’s and our assessments of the status of the key action items, with action items rated as “completed” if all of the required activities identified in the reform plan were completed, and “partially completed” if some, but not all, of the required activities were completed. Until OMB and the agencies complete the action items called for in the IT Reform Plan, the benefits of the reform initiatives—including increased operational efficiencies and more effective management of large-scale IT programs—may be delayed. With the last of the action items in the IT Reform Plan due to be completed by June 2012, it will be important for OMB and the agencies to ensure that the action items due at earlier milestones are completed as soon as possible. According to leading practices in industry and government, effective planning is critical to successfully managing a project. Effective project planning includes taking corrective actions when project deliverables fall behind schedule and defining time frames for completing the corrective actions. As noted earlier in this report, we identified seven action items that are behind schedule or falling short of the IT Reform Plan’s requirements. 
OMB and the agencies have plans for addressing all seven of the action items that we identified as behind schedule, but lack time frames for completing five of them. The seven action items we identified are: Data center consolidation. We noted that agencies’ data center consolidation plans do not include all required elements. In July 2011, OMB directed agencies to complete the missing elements in their plans. The agencies are expected to provide an update on their plans in September 2012. Cloud-first policy. We noted that agencies’ migration plans were missing selected elements. An OMB official stated that while OMB did not review the quality of agency migration plans in order to close the reform plan action item, the official responsible for the cloud-first initiative would continue to work with agencies to ensure that the initiative was successful. There are no time frames for agencies to complete their migration plans. Best practice collaboration portal. We found that the best practices collaboration platform is missing key features that would allow the information to be accessible and usable. A CIO Council official noted that the council plans to improve the portal over time by adding the ability to load artifacts, allowing users to chat online, providing an expertise repository, and allowing or encouraging labeling of information to improve the search for artifacts within the platform. However, the CIO Council has not established a time frame for providing additional functionality to the web-based collaboration portal. Guidance and templates for modular contracting. OFPP has not issued guidance or the required templates and samples supporting modular development. It plans to continue developing guidance and templates to support modular development, and the first draft of this guidance is currently undergoing initial review. OFPP plans to issue its guidance and templates in spring 2012. Obtaining new IT budget authorities. 
OMB is behind schedule in obtaining new IT budget authorities. OMB officials stated that the agency plans to propose new authorities as part of the 2013 President’s Budget and intends to work with congressional committees throughout the budget rollout process. However, OMB has not yet established time frames for completing this activity. Consolidating commodity IT under the agency CIO. OMB is behind schedule in consolidating commodity IT spending under agency CIOs. OMB plans to propose new spending models for commodity IT in the 2013 President’s Budget, and to work with Congress to implement these new models. However, OMB has not established a time frame for completing this activity. Redefining roles of agency CIOs and the CIO Council. OMB acknowledges that not all agency CIOs have authority for a full portfolio of IT investments and plans to collect data from agencies during spring 2012 to determine the extent to which the CIOs have this authority. At that point, OMB should be better positioned to determine what more needs to be done to ensure CIO roles are redefined. However, there is no time frame for completing this activity. Until OMB and the agencies establish time frames for completing corrective actions, they increase the risk that key actions will not be effectively managed to closure. For example, without cloud migration plans, agencies risk maintaining legacy systems long after those systems have been replaced by ones operating in the cloud. Further, these incomplete actions reduce the likelihood of achieving the full range of benefits promised by the IT reform initiatives. The importance of performance measures for gauging the progress of programs and projects is well recognized. In the past, OMB has directed agencies to define and select meaningful outcome-based performance measures that track the intended results of carrying out a program or activity. 
Additionally, as we have previously reported, aligning performance measures with goals can help to measure progress toward those goals, emphasizing the quality of the services an agency provides or the resulting benefits to users. Furthermore, industry experts describe performance measures as necessary for managing, planning, and monitoring the performance of a project against plans and stakeholders’ needs. According to government and industry best practices, performance measures should be measurable, outcome-oriented, and actively tracked and managed. Recognizing the importance of performance measurement, OMB and GSA have established measures for 4 of the 10 action items we reviewed: data center consolidation, shifting to cloud computing, using contract vehicles to obtain Infrastructure-as-a-Service, and reforming investment review boards. Moreover, OMB reported on three of these measures in the analytical perspectives associated with the President’s fiscal year 2013 budget. Specifically, regarding data center consolidation, OMB reported that agencies were on track to close 525 centers by the end of 2012 and expected to save $3 billion by 2015. On the topic of cloud computing, OMB reported that agencies had migrated 40 services to cloud computing environments in 2011 and expect to migrate an additional 39 services in 2012. Regarding investment review boards, OMB reported that agency CIOs held 294 TechStat reviews and had achieved more than $900 million in cost savings, life cycle cost avoidance, or reallocation of funding. However, OMB has not established performance measures for 6 of the 10 action items we reviewed. For example, OMB has not established measures related to the best practices collaboration platform, such as number of users, number of hits per query, and customer satisfaction. 
Further, while OMB has designed the guidance and curriculum for developing a cadre of IT acquisition professionals, it has not established measures for tracking agencies’ development of such a cadre. Table 3 details what performance measures and goals, if any, are associated with each action item. OMB officials, including two policy analysts within the Office of E-government and Information Technology who are responsible for the IT Reform Plan, stated that they do not believe that it is appropriate for OMB to establish measures for the action items in the IT Reform Plan. The officials explained that they believe that the purpose of the IT Reform Plan is to act as a catalyst for initiatives that are expected to continue outside of the plan. For example, the IT Reform Plan called for OMB and agencies to complete several discrete activities to push forward on data center consolidation, but the Federal Data Center Consolidation Initiative will continue on well after the deliverables noted in the reform plan are completed. They acknowledged that it would be appropriate to have performance measures for each of the broader initiatives outside of the IT Reform Plan, but noted that this should be the responsibility of the group running each initiative. We disagree with OMB’s view and believe that performance measures are a powerful way to motivate people, communicate priorities, and improve performance. In our assessment, we sought any available performance measures associated with either the action item or the broader initiative, and in cases like the data center consolidation initiative, gave credit for the measures that were established for the initiative. However, we found that most action items and initiatives lacked any performance measures. Moreover, if OMB encourages individual agencies to establish measures, there will likely be multiple different measures for the action items and it would be more difficult to demonstrate governmentwide progress. 
Therefore, we believe that it is appropriate for OMB to establish performance measures for each of the action items in order to effectively measure the results of the IT Reform Plan. Until OMB establishes and begins tracking measurable, outcome-oriented performance measures for each of the action items, the agency will be limited in its ability to evaluate progress that has been made and whether or not the initiative is achieving its goals. OMB and selected agencies have made strides in implementing the IT Reform Plan, including pushing agencies to consolidate data centers, migrating federal services to cloud computing, improving the skills of IT acquisition professionals, and strengthening the roles and accountability of CIOs. However, several key reform items are behind schedule and OMB lacks time frames for completing most of them. Despite reporting that selected actions are completed, OMB and federal agencies are still working on them. This sends an inconsistent message on the need to maintain focus on these important initiatives. Moving forward, it will be important for OMB to accurately characterize the status of the action items in the IT Reform Plan in order to keep agencies’ focus and momentum on these important reform initiatives. OMB has not established performance measures for gauging the success of most of its reform initiatives. For example, while OMB is tracking the number of services that agencies move to a cloud computing environment and the number of data center closures, it is not tracking the usefulness of its efforts to develop a best practices collaboration portal or a cadre of IT acquisition professionals. Until OMB and the agencies complete the action items called for in the IT Reform Plan, establish time frames for completing corrective actions, and establish performance measures to track the results of the reform initiatives, the government may not be able to realize the full promise of the IT Reform Plan. 
The IT Reform Plan’s goals of improving government IT acquisitions and the efficiency of government operations are both ambitious and important, and they warrant a more structured approach to ensure actions are completed and results are achieved. To help ensure the success of IT reform initiatives, we are making four recommendations to OMB. Specifically, we are recommending that the Director of the Office of Management and Budget direct the Federal Chief Information Officer to ensure that the action items called for in the IT Reform Plan are completed by the responsible parties prior to the completion of the IT Reform Plan’s 18-month deadline of June 2012, or if the June 2012 deadline cannot be met, by another clearly defined deadline; provide clear time frames for addressing the shortfalls associated with the IT Reform Plan action items; accurately characterize the status of the IT Reform Plan action items in the upcoming progress report in order to keep momentum going on action items that are not yet completed; and establish outcome-oriented measures for each applicable action item in the IT Reform Plan. We are also making two recommendations to the Secretaries of Homeland Security and Veterans Affairs and to the Attorney General of the Department of Justice to address action items in the IT Reform Plan where the agencies have fallen behind. Specifically, we are recommending that they direct their respective agency CIOs to complete elements missing from the agencies’ plans for migrating services to a cloud computing environment, as applicable, and identify and report on the commodity services proposed for migration to shared services. We received comments on a draft of our report from OMB; the Departments of Homeland Security, Justice, and Veterans Affairs; and GSA. 
OMB agreed with two recommendations and disagreed with two recommendations; the Departments of Homeland Security, Justice, and Veterans Affairs generally agreed with our recommendations; and GSA did not agree or disagree with our recommendations. Each agency’s comments are discussed in more detail below. OMB’s Federal CIO provided written comments on a draft of this report, as well as supplementary comments via e-mail. The written comments are provided in appendix II. The Federal CIO stated that OMB believes our analysis and findings have been critical to driving IT reforms across the federal government, and that OMB plans to use this report to continue the positive momentum on the IT Reform Plan. In addition, the Federal CIO stated that despite agreeing with many of the observations and recommendations in the draft report, OMB had concerns with selected recommendations, observations, and the scope of our review. The agency’s comments and, where applicable, our evaluation follow: OMB agreed with our recommendation to ensure that action items called for in the IT Reform Plan are completed by the end of the IT Reform Plan’s 18-month deadline of June 2012 and stated that OMB intends to complete the action items by the deadline. OMB agreed with our recommendation to provide clear time frames for addressing the shortfalls associated with the IT Reform Plan action items and stated that OMB will provide clear time frames where applicable. OMB disagreed with our recommendation that the agency accurately characterize the status of IT Reform Plan action items in the upcoming progress report. The agency stated that it has accurately characterized the completeness of the action items, and therefore, the recommendation does not apply. As stated in this report, we do not agree with OMB’s characterization of four action items: data center consolidation, cloud-first policy, best practices collaboration portal, and redefining roles of agency CIOs and the CIO Council. 
OMB considers these action items to be completed. We do not. While OMB has made progress in each of these areas, we found activities specified in the IT Reform Plan that have not yet been completed. Specifically, in the area of data center consolidation, we found that selected agency plans are still incomplete; in the move to cloud computing, selected agency migration plans lack key elements; in the area of the best practices portal, we found that the portal lacks key features that would allow the information to be accessible and useful to program managers; and in revising CIO roles, we identified an agency that does not yet have the envisioned authority over IT acquisitions. Further, in a recent memorandum to agency CIOs, the Federal CIO acknowledged that agency data center consolidation plans are incomplete and required agencies to provide an annual update to the plans. In addition, our assessment that the cloud migration plans are incomplete was affirmed by the three agencies we reviewed agreeing with our recommendation that they complete cloud migration plans. Thus, we believe that our recommendation to OMB to accurately characterize the status of IT Reform action items is valid. OMB disagreed with our recommendation to establish outcome- oriented measures for each applicable action item in the IT Reform Plan, noting that the agency measured the completeness of the IT Reform actions and not the performance measures associated with broader initiatives. OMB also suggested that we erroneously gave the agency credit for performance measures associated with broader initiatives on data center consolidation, cloud computing, and investment review boards. We acknowledge that some of the action items in the IT Reform Plan are subsets of broader initiatives, and where applicable, we gave credit for having measures associated with the broader initiatives. 
We continue to believe that this approach is appropriate because the action items and the broader initiatives are intrinsically intertwined. For instance, it would have been unfair to state that there are no measures associated with consolidating federal data centers when such measures clearly exist. Moreover, the point remains that there are multiple action items in the IT Reform Plan that are not aligned with broader initiatives and for which there are no measures. Examples include the best practices portal, development of a cadre of specialized IT acquisition professionals, and establishing budget models that align with modular development. Given that the purpose of the IT Reform Plan is to achieve operational efficiencies and improve the management of large-scale IT programs, we continue to assert that it is appropriate to establish performance measures to monitor the IT Reform Plan’s results. According to the administration’s public website intended to provide a window on efforts to deliver a more effective, smarter, and leaner government, performance measurement is a necessary step in improving performance and helps set priorities, tailor actions, inform on progress, and diagnose problems. Until OMB establishes and tracks measurable, outcome-oriented performance measures for each of the action items in the IT Reform Plan, the agency will be limited in its ability to evaluate progress that has been made and whether or not the initiative is achieving its goals. OMB stated that the title of our draft report (Information Technology Reform: Progress is Mixed; More Needs to Be Done to Complete Actions and Measure Results) did not accurately capture the substantial and overwhelmingly positive progress made to date. Moreover, OMB stated that the responsible entities have completed 81.5 percent of the required activities associated with the 10 action items we reviewed. 
We acknowledge the progress OMB and agencies have made on IT Reform Plan items in this report and have modified the title of our report to reflect that progress. However, our analysis of the percentage of completed activities differs from OMB’s calculations. The 10 action items we reviewed include 31 distinct required activities (see table 1). We found that the responsible entities completed 18 of these activities—a 58 percent completion rate. OMB also stated that our assessment should acknowledge that OMB does not have the statutory authority to carry out certain action items without congressional action. These action items involved creating IT budget models to align with modular development and consolidating commodity IT spending under the agency CIOs. The Federal CIO stated that although OMB has taken steps to engage with Congress, the agency cannot unilaterally grant budget flexibilities or consolidate spending. While it is true that completing these items depends upon congressional action, according to the IT Reform Plan, it is the responsibility of OMB and the federal agencies to work with Congress to propose budget models to address these items. In general, OMB stated that it will continue to drive reform throughout the federal government via the completion of the remaining actions in the IT Reform Plan, as well as continuing to work with agencies as they implement broader initiatives such as data center consolidation and the transition to cloud computing. In supplementary comments provided via e-mail, the Federal CIO also expressed concerns with the scope of our report, stating that the intent of the IT Reform Plan was not to reform all federal IT, but to establish some early wins to garner momentum for OMB’s broader initiatives. 
The Federal CIO also noted that OMB has been consistent in publicizing the IT Reform Plan as an 18-month plan with discrete goals designed to augment and accelerate broader initiatives that existed before the IT Reform Plan was launched and would continue after the plan has been completed. We believe that the scope of our review is appropriate. Since its inception, the scope of our review has focused on the action items and supporting activities noted in the IT Reform Plan. All of the required activities listed in table 1 in the background section of this report are listed in the IT Reform Plan. Moreover, we did not evaluate activities that are outside of the IT Reform Plan, such as OMB’s efforts to establish a cost model for agencies to use in estimating the costs and savings of data center consolidation. Further, we agree that to completely reform IT, OMB and agencies must undertake activities beyond the IT Reform Plan’s 18-month time frame. The activities within the IT Reform Plan are essential building blocks that will carry on well beyond the IT Reform Plan’s end. In written comments, the Department of Homeland Security’s Director of Departmental GAO-Office of Inspector General Liaison Office concurred with our recommendations and identified steps that the agency is undertaking to address them. The department’s written comments are provided in appendix III. In written comments, the Department of Justice’s Assistant Attorney General for Administration generally agreed with our recommendations and identified steps that the agency has undertaken to address them. The department’s written comments are provided in appendix IV. In written comments, the Chief of Staff at the Department of Veterans Affairs agreed with our recommendations and identified steps that the department is taking to implement them. The department’s written comments are provided in appendix V. 
In comments provided via e-mail, a Management and Program Analyst within GSA’s Office of Administrative Services stated that the agency had no official response or technical comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the secretaries and administrators of the departments and agencies addressed in this report, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to (1) evaluate the progress the Office of Management and Budget (OMB) and key federal agencies have made on selected action items in the Information Technology (IT) Reform Plan, (2) assess the plans for addressing any action items that are behind schedule, and (3) assess the extent to which sound measures are in place to evaluate the success of the IT reform initiatives. In establishing the scope of our engagement, we selected 10 action items for review, focusing on action items that (1) were due at the 6- or 12-month milestones because these were expected to be completed during our review, (2) covered multiple different topic areas, and (3) were considered by internal and OMB subject matter experts to be the more important items. These action items are: Complete detailed implementation plans to consolidate 800 data centers by 2015. Shift to a “cloud first” policy. Stand-up contract vehicles for secure Infrastructure-as-a-Service solutions. 
Launch a best practices collaboration platform. Design a cadre of specialized IT acquisition professionals. Issue contracting guidance and templates to support modular development. Work with Congress to create IT budget models that align with modular development. Work with Congress to consolidate Commodity IT spending under agency Chief Information Officers (CIO). Reform and strengthen Investment Review Boards. Redefine the role of agency CIOs and the CIO Council. In addition, in the seven cases where multiple agencies are identified as a responsible entity for the action item, we selected three civilian agencies (the Departments of Homeland Security, Veterans Affairs, and Justice) based on factors including (1) high levels of IT spending in fiscal year 2011, (2) poor performance on the IT Dashboard, (3) high number of major IT investments in fiscal year 2011, and (4) coverage of agencies that were not included in other GAO reviews of IT reform initiatives. To evaluate OMB’s and federal agencies’ progress in implementing the IT Reform Plan, we evaluated efforts by the entities responsible for each of the action items, including OMB, the General Services Administration (GSA), the Chief Information Officers (CIO) Council, and selected agencies. For each of the 10 action items in the IT Reform Plan, we reviewed OMB’s guidance and identified required activities. We compared agency documentation to these requirements, and identified gaps and missing elements. We rated each action item as “completed” if the responsible agencies demonstrated that they completed the required activities on or near the due date, and “partially completed” if the agencies demonstrated that they completed part of the required activities. We interviewed agency officials to clarify our initial findings and to determine why elements were incomplete or missing. 
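The rating rule described above, together with the completion-rate arithmetic reported earlier (18 of the 31 required activities completed, a 58 percent rate), can be sketched as follows. This is a minimal illustrative sketch, not GAO's actual assessment instrument; the function name and data layout are hypothetical.

```python
# Hypothetical sketch of the report's rating rule: an action item is rated
# "completed" only if all of its required activities were finished, and
# "partially completed" if only some of them were.

def rate_action_item(activities_done: list[bool]) -> str:
    """Rate one action item from the completion status of its activities."""
    if all(activities_done):
        return "completed"
    if any(activities_done):
        return "partially completed"
    return "not completed"

# The report's figures: the 10 action items comprised 31 required
# activities, of which 18 were completed.
activities_completed, activities_total = 18, 31
completion_rate = round(100 * activities_completed / activities_total)
print(completion_rate)  # 58
```

Note how the rule explains the divergence between the two assessments: an item with most, but not all, activities finished still rates only "partially completed" under this scheme.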
To assess the plans for addressing any action items that are behind schedule, we identified the agencies’ plans for addressing the schedule shortfalls and compared these to sound project planning practices identified by organizations recognized for their experience in project management and acquisition processes. We also interviewed relevant agency officials regarding the reasons that their activities were behind schedule and the impact of any shortfalls in their mitigation plans. To assess the extent to which sound measures are in place to evaluate success, we determined whether performance measures were applicable for each of the selected action items, and if so, how agencies were tracking these measures. We compared these measures to best practices in IT performance management identified by leading industry and government organizations and assessed other options for measuring performance. In addition, we interviewed OMB and selected agency officials regarding progress, plans, and measures. As we were completing our audit work, OMB reported making progress in its efforts to consolidate data centers, transition to a cloud computing environment, and strengthen investment review boards, and provided data on specific measures within each of these areas. We assessed the reliability of the data provided on these measures by obtaining information from agency officials and from the CIO Council regarding their efforts to ensure the reliability of the data. While we identified limitations in the quality of the data that agencies reported, we determined that this data was sufficiently reliable for the purpose of presenting a general overview of progress in establishing performance measures. We conducted our work at multiple agencies’ headquarters in the Washington, D.C., metropolitan area. We conducted this performance audit from August 2011 to April 2012 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making contributions to this report included Colleen Phillips (Assistant Director), Cortland Bradford, Rebecca Eyler, Kathleen S. Lovett, and Jessica Waselkow. Information Technology: Critical Factors Underlying Successful Major Acquisitions. GAO-12-7. Washington, D.C.: October 21, 2011. Federal Chief Information Officers: Opportunities Exist to Improve Role in Information Technology Management. GAO-11-634. Washington, D.C.: September 15, 2011. Information Technology: VA Has Taken Important Steps to Centralize Control of Its Resources, but Effectiveness Depends on Additional Planned Actions. GAO-08-449T. Washington, D.C.: February 13, 2008. Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. Washington, D.C.: October 6, 2011. Information Security: Governmentwide Guidance Needed to Assist Agencies in Implementing Cloud Computing. GAO-10-855T. Washington, D.C.: July 1, 2010. Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing. GAO-10-513. Washington, D.C.: May 27, 2010. Data Center Consolidation: Agencies Need to Complete Inventories and Plans to Achieve Expected Savings. GAO-11-565. Washington, D.C.: July 19, 2011. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. 
Information Technology: Departments of Defense and Energy Need to Address Potentially Duplicative Investments. GAO-12-241. Washington, D.C.: February 17, 2012. Information Technology: Potentially Duplicative Investments Exist at the Departments of Defense and Energy. GAO-12-462T. Washington, D.C.: February 17, 2012. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Investment Management: IRS Has a Strong Oversight Process but Needs to Improve How It Continues Funding Ongoing Investments. GAO-11-587. Washington, D.C.: July 20, 2011. Information Technology: Investment Oversight and Management Have Improved but Continued Attention Is Needed. GAO-11-454T. Washington, D.C.: March 17, 2011. Information Technology: Treasury Needs to Strengthen Its Investment Board Operations and Oversight. GAO-07-865. Washington, D.C.: July 23, 2007. Information Technology: DHS Needs to Fully Define and Implement Policies and Procedures for Effectively Managing Investments. GAO-07-424. Washington, D.C.: April 27, 2007. Information Technology: OMB Needs to Improve Its Guidance on IT Investments. GAO-11-826. Washington, D.C.: September 29, 2011. Information Technology: Management and Oversight of Projects Totaling Billions of Dollars Need Attention. GAO-09-624T. Washington, D.C.: April 28, 2009. Information Technology: OMB and Agencies Need to Improve Planning, Management, and Oversight of Projects Totaling Billions of Dollars. GAO-08-1051T. Washington, D.C.: July 31, 2008. Information Technology: Further Improvements Needed to Identify and Oversee Poorly Planned and Performing Projects. GAO-07-1211T. Washington, D.C.: September 20, 2007. Information Technology: Improvements Needed to More Accurately Identify and Better Oversee Risky Projects Totaling Billions of Dollars. GAO-06-1099T. Washington, D.C.: September 7, 2006. 
Information Technology: Agencies and OMB Should Strengthen Processes for Identifying and Overseeing High Risk Projects. GAO-06-647. Washington, D.C.: June 15, 2006.

IT Dashboard: Accuracy Has Improved, and Additional Efforts Are Under Way To Better Inform Decision Making. GAO-12-210. Washington, D.C.: November 7, 2011.

Information Technology: Continued Attention Needed to Accurately Report Federal Spending and Improve Management. GAO-11-831T. Washington, D.C.: July 14, 2011.

Information Technology: Continued Improvements in Investment Oversight and Management Can Yield Billions in Savings. GAO-11-511T. Washington, D.C.: April 12, 2011.

Information Technology: OMB Has Made Improvements to Its Dashboard, but Further Work Is Needed by Agencies and OMB to Ensure Data Accuracy. GAO-11-262. Washington, D.C.: March 15, 2011.

Information Technology: OMB's Dashboard Has Increased Transparency and Oversight, but Improvements Needed. GAO-10-701. Washington, D.C.: July 16, 2010.

Information Technology: Federal Agencies Need to Strengthen Investment Board Oversight of Poorly Planned and Performing Projects. GAO-09-566. Washington, D.C.: June 30, 2009.

Federal Housing Administration: Improvements Needed in Risk Assessment and Human Capital Management. GAO-12-15. Washington, D.C.: November 7, 2011.

Information Technology: HUD's Expenditure Plan Satisfies Statutory Conditions and Implementation of Management Controls Is Under Way. GAO-11-762. Washington, D.C.: September 7, 2011.

Information Technology: FBI Has Largely Staffed Key Modernization Program, but Strategic Approach to Managing Program's Human Capital Is Needed. GAO-07-19. Washington, D.C.: October 16, 2006.
While investments in IT have the potential to improve lives and organizations, federal IT projects too often experience cost overruns, schedule slippages, and performance shortfalls. To address acquisition challenges, improve operational efficiencies, and deliver more value to the American taxpayer, in December 2010, OMB's Federal CIO issued a 25-point IT Reform Plan. GAO was asked to (1) evaluate the progress OMB and key federal agencies have made on selected action items in the IT Reform Plan, (2) assess the plans for addressing action items that are behind schedule, and (3) assess the extent to which sound measures are in place to evaluate the success of the IT reform initiatives. To do so, GAO selected 10 of the 25 action items from the IT Reform Plan, focusing on the more important activities due to be completed by December 2011; analyzed agency documentation; and interviewed agency officials. The Office of Management and Budget (OMB) and key federal agencies have made progress on action items in the Information Technology (IT) Reform Plan, but there are several areas where more remains to be done. Of the 10 key action items GAO reviewed, 3 were completed and 7 were partially completed by December 2011, in part because the initiatives are complex. OMB reported greater progress than GAO determined, stating that 7 of the 10 action items were completed and that 3 were partially completed. While OMB officials acknowledge that there is more to do in each of the topic areas, they consider the key action items to be completed because the IT Reform Plan has served its purpose as a catalyst for a set of broader initiatives. They explained that work will continue on all of the initiatives even after OMB declares that the related action items are completed under the IT Reform Plan. GAO disagrees with this approach. In prematurely declaring the action items to be completed, OMB risks losing momentum on the progress it has made to date.
Until OMB and the agencies complete the action items, the benefits of the reform initiatives, including increased operational efficiencies and more effective management of large-scale IT programs, will likely be delayed. OMB and key agencies plan to continue efforts to address the seven items that GAO identified as behind schedule, but lack time frames for completing most of them. For example, OMB plans to work with congressional committees during the fiscal year 2013 budget process to assist in exploring legislative proposals to establish flexible budget models and to consolidate certain routine IT purchases under agency chief information officers (CIO). However, OMB has not established time frames for completing five of the seven IT Reform Plan action items that are behind schedule. Until OMB and the agencies establish time frames for completing these corrective actions, they increase the risk that key action items will not be completed or effectively managed to closure. Further, they diminish the likelihood of achieving the full benefits of IT reform. OMB has not established performance measures for evaluating the results of most of the IT reform initiatives GAO reviewed. Specifically, OMB has established performance measures for 4 of the 10 action items, including data center consolidation and cloud computing. However, no performance measures exist for 6 other action items, including establishing the best practices collaboration platform and developing a cadre of IT acquisition professionals. Until outcome-oriented performance measures are in place for each of the action items, OMB will be limited in its ability to evaluate progress that has been made and to determine whether or not the initiative is achieving its intended results. GAO is making recommendations to three agencies to complete key IT Reform action items; the agencies generally concurred.
GAO is also making recommendations to OMB to complete key action items, accurately characterize the items' status, and establish measures for IT reform initiatives. OMB agreed to complete key action items, but disagreed with the latter recommendations, noting that the agency believes it is characterizing the items' status correctly and that measures are not warranted. GAO maintains that its recommendations are valid.
The vast majority of the Recovery Act funding for transportation programs went to FHWA, FRA, and FTA for highway, road, bridge, rail, and transit projects. More than half of all Recovery Act transportation funds were designated for the construction, rehabilitation, and repair of highways, roads, and bridges (see fig. 1). The remaining funds were allocated among other DOT operating administrations. DOT administered most Recovery Act funds through existing transportation programs. For example, highway funds were distributed under rules governing the Federal-Aid Highway Program generally and the Surface Transportation Program in particular. As a result, officials at state departments of transportation were familiar with project eligibility and other federal requirements. Similarly, transit funds were primarily distributed through established transit programs, and project sponsors (typically transit agencies) were familiar with federal grant application processes. FAA distributed airport funds through the established Airport Improvement Program structure, and MARAD awarded grants through its existing Assistance to Small Shipyards Program. DOT established new grant processes to award high speed intercity passenger rail and TIGER grants. For these programs, DOT published selection criteria, solicited and reviewed applications, and awarded grants to applicants that it judged best met the criteria and complied with legislative and regulatory requirements. The Recovery Act provided 100 percent federal funding for most programs, which is a departure from the typical federally funded transportation programs. On the other hand, the Recovery Act did not alter the 75 percent of project cost the federal government would typically pay under the Assistance to Small Shipyards program administered by MARAD. 
The Recovery Act also included short deadlines for obligating most transportation funds, and it required preference be given to projects that could be started and completed expeditiously. Obligating funds in a timely manner is an important feature of the Recovery Act, as an economic stimulus package should, as we have previously reported, include projects that can be undertaken quickly enough to provide a timely stimulus to the economy. For example, Recovery Act highway and transit funds were to be obligated within 1 year of the date of apportionment and highway projects which could be completed within 3 years were to be given priority. After the March 2010 1-year obligation deadline for highway funds, states requested that FHWA deobligate $1.25 million of these funds. We reported that deobligations from March 2 to June 7, 2010, were requested primarily because contracts were awarded for less than the original cost estimates. All of these funds were obligated by the September 2010 deadline. All TIGER funds must be obligated by September 30, 2011, and all high speed intercity passenger rail funds must be obligated by September 30, 2012. The Recovery Act also introduced new requirements for existing programs to help ensure that funds add to states’ and localities’ overall economic activity, and are targeted to areas of greatest need. For example, the Recovery Act required state governors to certify that their states would maintain their planned levels of spending for the types of transportation projects funded by the act, from the date of enactment—February 17, 2009—through September 30, 2010. The Recovery Act also required that states give priority to highway projects in economically distressed areas. State and local agencies, contractors, and others that receive Recovery Act funds are also required to submit quarterly reports on the number of jobs created or retained, among other data. 
These job calculations are based on the total hours worked divided by the number of hours in a full-time schedule, expressed in FTEs—but they do not account for the total employment arising from the expenditure of Recovery Act transportation funds. That is, the data recipients report do not include employment at suppliers (indirect jobs) or in the local community (induced jobs). In addition to reporting quarterly on the numbers of jobs created, states and other recipients are required to submit periodic reports on the amount of funds obligated and expended and the number of projects put out to bid, awarded, or for which work has begun or been completed, among other things. DOT is required to collect and compile this information for its reports to Congress that began in May 2009. Because it had not previously collected and reported this type of information, FHWA established the Recovery Act Data System (RADS) to allow for better oversight and tracking of Recovery Act transportation projects. FHWA uses RADS to compile data from states and existing DOT databases and generates reports to assist states in meeting their Recovery Act reporting requirements. According to DOT data, as of May 31, 2011, DOT had obligated nearly $45 billion (about 95 percent) on over 15,000 projects and had expended more than $28 billion (about 63 percent) of the $48.1 billion it received under the Recovery Act (see table 1). More than 9,200 of the approximately 15,100 transportation projects have been completed, including more than 8,100 highway projects and most of the aviation projects. The rate of expenditure for Recovery Act transportation funds has varied among programs and states, for several reasons, according to federal and state officials: First, obligation deadlines for newly funded competitive grant programs such as high speed intercity passenger rail and TIGER are later, so as of May 31, 2011, a much smaller percentage of those program funds had been obligated and expended. 
Second, as we have previously reported, the obligation and subsequent expenditure of highway funds suballocated for metropolitan, regional, and local use have lagged behind rates for state projects in some states. FHWA data as of May 31, 2011, indicated that this trend continued for reimbursements in 24 states, including two of the states we visited— Virginia and Texas. According to federal and state transportation officials, federal reimbursement can only occur after costs are incurred; however, localities varied in their approach to billing for reimbursement. For example, in California some localities choose to seek reimbursement for project costs after project completion in an effort to reduce the administrative costs of frequent invoicing. In comparison, localities in Indiana and Washington State bill regularly as expenses are incurred. Third, according to FHWA and state officials, northern states typically tend to have a reduced period of construction activity during the winter. Finally, large or new infrastructure projects may require additional reviews, such as environmental clearances, prolonging project time frames. States and other recipients continue to report using Recovery Act funds to improve the condition of the nation’s transportation infrastructure, as well as invest in new infrastructure. For example, according to DOT data, 68 percent of highway funds have been used for pavement improvement projects, such as resurfacing, reconstruction, and rehabilitation of existing roadways, and almost 75 percent of transit funds have been used for upgrading existing facilities and purchasing or rehabilitating buses (see fig. 2). According to FAA officials, Recovery Act funding was used to rehabilitate and reconstruct airport runways and taxiways, as well as to upgrade or purchase air navigation infrastructure such as air traffic control towers, engine generators, back-up batteries, and circuit breakers. 
The Recovery Act grant provided to Amtrak has been used to make infrastructure improvements and return cars and locomotives to service.

[Figure 2 (chart omitted): obligations by project type. Highway: pavement improvement, reconstruction/rehabilitation ($7.1 billion); pavement improvement, resurface ($6.1 billion); pavement widening ($4.7 billion); other ($3.3 billion); new construction ($1.8 billion); bridge replacement ($1.4 billion); bridge improvement ($1.2 billion); new bridge construction ($0.5 billion). Transit: transit infrastructure ($4.5 billion); vehicle purchase and rehab ($2.0 billion); other capital expenses ($1.0 billion); preventive maintenance ($0.8 billion); rail car purchase and rehab ($0.3 billion); operating assistance ($0.2 billion).]

The highway category "other" includes safety projects, such as improving safety at railroad grade crossings; engineering; right-of-way purchases; and transportation enhancement projects, such as pedestrian and bicycle facilities. Highway data are as of June , 2011. Transit obligations include Recovery Act funds that were transferred from FHWA to FTA. "Transit infrastructure" includes engineering and design, acquisition, construction, and rehabilitation and renovation activities. "Other capital expenses" includes leases, training, finance costs, mobility management, project administration, and other capital programs. Usually, operating assistance is not an eligible expense for transit agencies within urbanized areas with populations of 200,000 or more. Most recipients did not use as high a percentage of funds for operating expenses, in part, because funds had already been obligated to projects before the Supplemental Appropriations Act was enacted, according to FTA officials. Transit data are as of May 6, 2011.

The high speed intercity passenger rail and TIGER programs were newly funded grant programs, and the Recovery Act allowed additional time for DOT to develop criteria, publish notices of funding availability, and award grants.
As a result, projects selected for high speed intercity passenger rail and TIGER were announced about a year after enactment, and DOT has been making progress obligating Recovery Act funds for these programs. For example, DOT selected one intercity passenger rail project to rehabilitate track and provide service from Portland to Brunswick, Maine, at speeds up to 70 miles per hour. Another project was selected to initiate the first part of California’s high speed rail system, which envisions service at more than 200 miles per hour between Los Angeles, San Francisco, and the Central Valley, and eventually San Diego. DOT TIGER grants funded projects across different surface transportation modes, including highways, transit, rail, and ports. For example, the California Green Trade Corridor/Marine Highway Project is a collaborative effort of three regional ports in California to develop and use a marine highway system as an alternative to existing truck and rail infrastructure for transporting consumer goods and agricultural products. According to DOT data, a variety of Recovery Act projects have been completed. For example, FHWA reported that many of the completed highway projects involve pavement improvement. Completed transit projects generally included preventive maintenance activities, some bus purchases, and facility construction, according to FTA. Amtrak had also completed a variety of projects, including station upgrades, right-of-way improvements, communications and signaling systems installations, and aging bridge replacement projects, among other things. While no high speed intercity passenger rail projects had been completed as of May 31, 2011, 24 projects were under way, according to FRA. 
These projects, which represent more than 70 percent of the allotted funding, include track and signaling work to improve reliability and increase operating speeds, improvements to stations, and the environmental analysis and preliminary engineering required to advance projects to construction. States we visited provided numerous examples of infrastructure improvements and other projects funded by the Recovery Act (see fig. 3). Recovery Act funds helped pay for jobs across various transportation modes. At a time when the construction industry was experiencing historically high unemployment and many states could not afford to maintain existing infrastructure, transportation officials we met with told us that the Recovery Act helped to keep the transportation industry in operation while allowing states to tackle some of their infrastructure maintenance priorities. According to data filed by recipients, Recovery Act transportation projects supported between 31,460 and 65,110 FTEs each reporting quarter from October 1, 2009, through March 31, 2011. Recipient-reported FTEs, however, cover only direct jobs funded by the Recovery Act. They do not include the employment impact on suppliers (indirect jobs) or on the local community (induced jobs). According to DOT officials, the full impact on indirect and induced employment is likely to be significant because of supply chain employment effects. In addition, a certain amount of a project's cost is typically for materials and equipment, and the remainder pays for labor, reported as FTEs. The number of transportation FTEs reported has declined over the past two reporting quarters as construction work on projects has been completed. On average, highway projects accounted for approximately 63 percent of the transportation FTEs reported from October 1, 2009, through March 31, 2011. Transit and "other" transportation projects accounted for the remaining approximately 37 percent of transportation FTEs.
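The FTE convention used in these recipient reports (total hours worked divided by the number of hours in a full-time schedule) can be sketched in a few lines. This is an illustrative calculation only; the 520-hour full-time quarter (40 hours over 13 weeks) is an assumed schedule, not a figure prescribed by DOT guidance.

```python
# Illustrative FTE arithmetic; the 520-hour full-time quarter is an assumption,
# not DOT's prescribed reporting schedule.

def quarterly_fte(total_hours_worked: float, full_time_hours: float = 520.0) -> float:
    """Full-time equivalents for a quarter: hours worked / full-time hours."""
    return total_hours_worked / full_time_hours

# Three workers logging 400, 520, and 120 hours in a quarter count as 2.0 FTEs,
# even though more than two individuals were employed on the project.
print(quarterly_fte(400 + 520 + 120))  # 2.0
```

Note that, as the report emphasizes, these direct FTEs exclude indirect (supplier) and induced (community) employment.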
However, the relatively low portion of FTEs reported for transportation projects other than highways and transit may increase in future quarters as more high speed intercity passenger rail and TIGER projects get under way. Transportation recipients reported the highest total FTE count during the quarter ending September 2010, owing to the large number of projects under way at that time (see fig. 4). During the most recent reporting quarter, which ended March 31, 2011, the number of transportation FTEs reported reached its lowest point since recipient reporting began, at about 31,460. In addition to the number of jobs funded by Recovery Act transportation funds, federal, state, and local officials described the following other benefits:

Better coordination and streamlined processes: DOT officials told us that the Recovery Act encouraged more efficient ways of working together at the federal, state, and local levels to select projects. According to DOT officials, the TIGER competitive grant program brought together various modal operating administrations to evaluate grant applications and consider multimodal projects. Generally, state officials told us that their working relationships with FHWA division offices and localities have improved while implementing Recovery Act programs, as has states' and localities' understanding of federal requirements. Some states improved their internal operational efficiency, including shortening their project review and approval processes. For example, the Massachusetts Department of Transportation (MassDOT) streamlined its 26-step bid process from 120 days to 44 days by coordinating the review process through regular meetings of key stakeholders.

Innovative communication practices: DOT also implemented new ways to train and communicate with recipients. For instance, FHWA and FTA have used webinars to distribute guidance and host question-and-answer sessions to clarify program requirements.
Officials said that the systems they developed to communicate with states have been used to disseminate guidance to states for non-Recovery Act programs and will continue to be used in the future.

Accelerated projects that might have otherwise gone unfunded: Transportation officials in several states we visited told us that Recovery Act funds helped reduce backlogs of "shovel-ready" projects. For example, California funded its entire list of shovel-ready projects and began work on new construction projects. Other states reported being able to complete projects that had been planned but lacked sufficient funding. Specifically, Virginia started construction of an interchange on the Fairfax County parkway at Fair Lakes, a project that had been planned since the 1980s when the parkway was first built; Massachusetts started construction of a bike and pedestrian footbridge that had been promised as part of the Big Dig project of the 1990s; and Washington State accelerated work to provide congestion relief on I-405 and extend a high-occupancy-vehicle lane on I-5 near Tacoma.

However, the long-term impacts of Recovery Act investments in transportation are unknown at this point. Some states have efforts under way to report on Recovery Act benefits. For example, in 2011, state transportation officials in Washington produced a report that documented the agency's progress delivering Recovery Act projects since 2009; the Texas Department of Transportation commissioned a study by the University of Texas to assess the Recovery Act impacts; and MassDOT officials established an Office of Performance Management and Innovation to determine program goals; measure program performance against those goals; and report publicly on progress to improve the effectiveness of transportation design and construction, service delivery, and policy decision making.
However, federal and state officials told us that attributing transportation benefits to Recovery Act funds can be difficult, particularly when projects are funded from multiple sources and historic performance data are not available for particular projects. We recommended that DOT ensure that the results of Recovery Act projects are assessed and a determination is made about whether these projects produced long-term benefits, but DOT has not committed to assessing the long-term benefits of Recovery Act investments in transportation. Specifically, in the near term, we recommended that FHWA and FTA determine the types of data and performance measures needed to assess the impact of the Recovery Act and the specific authority they may need to collect data and report on these measures. DOT officials told us that they expect to be able to report on Recovery Act outputs, such as miles of roads paved, bridges built or repaired, and transit vehicles purchased, which will help assess the act's impact. However, they said that limitations in DOT's data systems, the costs associated with conducting such an analysis, and the fact that Recovery Act funds represented only about 1 year of additional funding for some transportation programs would make assessing the benefits of Recovery Act projects difficult. We continue to believe, however, that it is important for organizations to measure performance to understand the progress they are making toward their goals and to demonstrate results, particularly when the funding totals more than $48 billion and most funds were to be spent relatively quickly. For the Recovery Act high speed intercity passenger rail and TIGER grant programs, DOT has set broad performance goals and required recipients to identify potential project benefits.
Specifically, FRA has outlined goals for developing high speed intercity passenger rail service in its strategic plan and national rail plan and evaluated grant proposals based on the potential project benefits applicants listed in their applications. However, the identified goals are broad, such as improving transportation safety and economic competitiveness, and do not contain the specific targets necessary to determine how or when FRA will realize intended benefits. DOT also incorporated performance measures tailored to each TIGER grant awardee based on the project design and the capacity of the recipient to collect and evaluate data. DOT is evaluating the best methods for measuring objectives and collecting data and is working collaboratively with applicants to weigh options for measuring performance. As many TIGER projects are just being initiated, the effectiveness of these measures will not be clear for several years. Federal, state, and local oversight entities have continued their efforts to ensure appropriate use of Recovery Act transportation funds, and recently published reviews have not revealed major concerns. Since September 2010, the DOT Office of Inspector General (OIG) has issued three reports on Recovery Act aviation, highway, and rail programs. These reports generally found that DOT had complied with Recovery Act requirements, and they identified several areas for improvement (see table 2 for selected OIG recommendations and DOT's response). The OIG has ongoing Recovery Act oversight work covering multiple transportation programs, including, for example, audits of the high speed intercity passenger rail and TIGER programs, as well as audits of transit and highway programs. Moreover, the OIG continues to investigate criminal and civil complaints related to Recovery Act transportation funds.
As of March 31, 2011, the OIG had 51 open Recovery Act investigations, including 19 cases of false statements, claims, or certifications; 17 cases of disadvantaged business enterprise fraud; and 1 case of corruption, among other allegations. According to the Chairman of the Recovery Accountability and Transparency Board, there has been an extremely low level of fraud involving Recovery Act funds. For instance, in June 2011, he noted that less than half a percent of all reported Recovery Act contracts and grants had led to investigations, and to date there have been 144 convictions involving a little over $1.9 million of total Recovery Act funds for all programs, including those in the transportation sector. Reviews conducted by auditors in the states that we visited have, in most cases, reported few significant problems with the use of Recovery Act transportation funds. State auditors in Massachusetts, for example, found no material weaknesses at MassDOT in its 2010 Single Audit. However, in our review of Single Audit reports for selected states, we found that state auditors identified some inconsistencies with state oversight of subrecipients and some challenges in ensuring that award documentation met federal requirements (see table 3). We also reviewed performance audit reports of Recovery Act transportation programs in the states that we visited, and these reviews generally focused on compliance with Recovery Act program requirements. For example: The Massachusetts Office of the State Auditor published several reports that examined local transit agency controls over receipts and expenditures of Recovery Act funds and subrecipient monitoring to ensure compliance with reporting requirements. Based on these reviews, the State Auditor found that each transit authority was generally in compliance with applicable laws, rules, and regulations for the areas tested.
The California State Auditor's evaluation of the state's recipient reports on jobs created and retained found that the California Department of Transportation (Caltrans) did not ensure that complete jobs data were reported for the quarter ending June 30, 2010, and did not monitor its subrecipients to ensure that they reported the required data. Caltrans officials told us that it is a challenge to ensure that all local agencies report FTE data because of turnover at the local level and the challenges associated with training local staff on the reporting requirements. Finally, local auditors in states we visited that reviewed compliance with Recovery Act requirements did not find problems with city use of Recovery Act transportation funds. These reviews generally found that cities had taken various oversight actions to monitor the use of Recovery Act funds. For example, the city auditor of Dallas, Texas, reported in February 2011 that the city had taken action to implement internal control processes aimed at ensuring accountability and transparency of Recovery Act funds. Further, the Dallas city auditor found that although the recipients and uses of funds were reported clearly and in a timely manner, other federal requirements proved challenging for the city and reports were not always submitted accurately. The city auditor of Arlington, Texas, also found that the city had generally complied with Recovery Act quarterly reporting and accountability provisions and that the city had accurately calculated jobs created. In Virginia, the city auditor of Virginia Beach examined the city's Recovery Act expenditures for supporting documentation and concluded that the sampled expenditures were properly supported, reasonable, and applicable to the purpose of the grants.
Another performance audit published in September 2010 by the Los Angeles Office of the Controller found that the Los Angeles Department of Transportation made a good faith effort in establishing processes to help ensure it meets Recovery Act requirements, but noted areas that could be improved, such as streamlining contracting processes to ensure that projects are started as quickly as possible and improving processes for reporting and billing to Caltrans. To meet our mandate to comment on recipient reports, we continued to monitor recipient-reported data. For this report, we focused our review on the quality of data reported by transportation grant recipients and efforts made by FHWA to validate that data. Using transportation recipient data from the seventh reporting period, which ended March 31, 2011, we continued to check for errors or potential problems by repeating analyses and edit checks reported in previous reports. We reviewed data associated with 12,443 transportation recipient reports posted on Recovery.gov for the seventh reporting quarter. We found few inconsistencies, and we are generally satisfied with the stability of the data quality. Additionally, our analysis of the data showed that there was a decrease of 759 recipient reports, or about a 5.7 percent drop from the previous quarter. Likewise, as described earlier, the total number of FTEs reported has also decreased over the past two reporting quarters. In the most recent quarter, which ended March 31, 2011, for example, the percentage of prime recipients of highway funds reporting any FTEs dropped from approximately 51 percent to approximately 39 percent. DOT officials said that the decreases in the number of recipients reporting any FTEs are likely due to several factors, including projects being completed or functionally complete and awaiting financial closeout. DOT officials noted that decreases in FTEs could also be due to such factors as a winter shutdown of projects in colder climates.
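The quarter-over-quarter drop can be checked directly from the figures above. This is a back-of-the-envelope sketch, not GAO's methodology; the previous-quarter total is inferred by adding the decrease back to the current count.

```python
# Reconstructing the roughly 5.7 percent drop from the reported counts.
current_reports = 12_443   # reports posted for the quarter ending March 31, 2011
decrease = 759             # drop from the previous quarter
previous_reports = current_reports + decrease   # inferred previous total: 13,202
pct_drop = decrease / previous_reports * 100
print(f"{pct_drop:.1f}%")  # 5.7%
```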
We also observed a variety of patterns in the quarterly reporting of FTEs, including consecutive quarters of no FTE reporting. For example, for the 2010 calendar year, approximately 13.5 percent of the highway recipients and approximately 16.7 percent of the transit recipients that filed reports each quarter did not report any FTEs during the year. According to DOT officials, several additional factors that could extend reporting during periods of low job activity include projects awaiting final invoices from contractors, projects delayed in litigation, or recipients’ withholding of final payments to cover periods of maintenance guarantees. They also noted that projects need to be considered on an individual basis and that recipients may use Recovery Act funds to purchase materials and use other funding sources to pay for labor. Each quarter, FHWA performs quality assurance steps on the data that recipients provide to FederalReporting.gov, and officials reported that the data quality continues to improve. Based on these reviews and their interactions with recipients, FHWA officials reported that recipients now understand the reporting process and each reporting period has gone better than the previous one. One measure of recipients’ understanding of the reporting process is the number of noncompliant recipients. According to information available on Recovery.gov, the number of DOT-related noncompliant recipients decreased from 37 in the quarter ending September 30, 2010, to 13 in the quarter ending December 31, 2010, but it increased in the most recent quarter to 19 noncompliant recipients. FHWA officials told us that they routinely check for noncompliance, notify noncompliant recipients of the projects that have not been reported, and follow up with noncompliant recipients to obtain corrective action plans and ensure that errors are corrected and subsequent reports are filed on time. 
As in previous quarters, FHWA performed a number of automated checks to help ensure the quality of highway and rail recipients’ Recovery Act data. To support recipients’ data quality, FHWA asks recipients of highway and rail Recovery Act funds to report each month into FHWA’s RADS system. FHWA officials conduct two data verification steps in RADS to assess the quality of data submitted by recipients: automated data verification tests and data validation reports. The automated data verification tests occur when state departments of transportation upload monthly data into RADS. If a record does not satisfy one of FHWA’s data verification rules, the state department of transportation is provided with a brief message listing the record and which data check failed. Data cannot be uploaded into RADS until the state department of transportation corrects the error. Examples of data verification rules include requirements that federal project numbers be entered without dashes or parentheses and that total cost estimates not be less than total Recovery Act estimates for the particular project. The data validation report highlights projects or awards that fail certain verification rules, such as whether the federal project number is in FHWA’s Financial Management Information System but not in RADS. FHWA also applies data checks based on assumptions about expenditures reported and FTEs reported. FHWA officials reported they also check data quality for nearly 70 data fields each quarter by comparing the data in each recipient report against the corresponding RADS data. According to FHWA guidance, data that do not correspond to the recipient report are flagged for comment and review. Specifically, RADS runs automated quality checks to ensure that data provided by states into RADS match what the states are providing to FederalReporting.gov. 
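Verification rules of the kind described above can be sketched as simple per-record checks. This is a minimal illustration only: the field names and the two rules shown are hypothetical stand-ins modeled on the examples in the text, not FHWA's actual RADS implementation.

```python
import re

def verify_record(record):
    """Return messages for any data-verification rules the record fails.

    Hypothetical rules modeled on the two examples described:
    project numbers without dashes/parentheses, and total cost
    estimates at least as large as the Recovery Act estimate.
    """
    errors = []
    if re.search(r"[-()]", record["federal_project_number"]):
        errors.append("federal project number contains dashes or parentheses")
    if record["total_cost_estimate"] < record["recovery_act_estimate"]:
        errors.append("total cost estimate is less than Recovery Act estimate")
    return errors

# Illustrative uploads: the first record passes, the second fails both rules.
records = [
    {"federal_project_number": "STP1234", "total_cost_estimate": 5_000_000,
     "recovery_act_estimate": 4_000_000},
    {"federal_project_number": "STP-5678", "total_cost_estimate": 1_000_000,
     "recovery_act_estimate": 2_000_000},
]
for rec in records:
    print(rec["federal_project_number"], verify_record(rec))
```

As in RADS, a failing record would produce a brief message identifying the record and the check that failed, and the upload would be rejected until corrected.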
If inconsistencies are found, FHWA representatives work with state transportation officials to resolve discrepancies by requiring states to amend or justify state-reported data. Some state transportation officials told us that the number of errors detected in their reports decreased as the reporting system was refined and guidance was issued. Finally, according to DOT officials, recipient-reported FTE data provide increased transparency on the use of transportation program funds, but DOT does not plan to use recipient-reported data internally for a variety of reasons. For example, recipient-reported data are only valid on a quarterly basis and cannot be used for monthly or cumulative analysis. In addition, agency officials told us that they prefer to use RADS data for most internal analysis because the RADS data are reported monthly and are more detailed than the recipient-reported data. Federal, state, and local transportation officials we contacted reported that while Recovery Act transportation funds provided many positive outcomes, they also provided lessons learned that may be relevant as Congress considers the next surface transportation reauthorization. Certain Recovery Act provisions not typically required under existing DOT programs proved challenging for some states to meet. Our ongoing and past work indicates that it may have been difficult for states to meet these requirements for a number of reasons, including rapidly changing state economic conditions and confusion about how to interpret and apply the new requirements. Maintenance of effort. We have reported that there were numerous challenges for DOT and states in implementing the transportation maintenance-of-effort provision in the Recovery Act. 
This provision required the governor of each state to certify that the state would maintain its planned level of transportation spending from February 17, 2009, through September 30, 2010, to help ensure that federal funds would be used in addition to, rather than in place of, state funds and, thus, increase overall spending. A January 2011 preliminary DOT report indicated that 29 states met their planned levels of expenditure, and 21 states did not. States had a monetary incentive to meet their certified planned level of spending in each transportation program area funded by the Recovery Act because those that failed to do so would not be eligible to participate in the August 2011 redistribution of obligation authority under the Federal-Aid Highway Program. States had until April 15, 2011, to verify their actual expenditures for transportation programs covered by the Recovery Act. DOT is reviewing this information to determine if any more states met their planned levels of spending. The DOT preliminary report summarized reasons states did not meet their certified planned spending levels, such as experiencing a reduction in dedicated revenues for transportation due to a decline in state revenues or a lower-than-expected level of approved transportation funding in the state budget. The preliminary report also identified a number of challenges DOT encountered in implementing the provision, such as the lack of a statutory definition of what constitutes “state funding” and uncertainty about how well DOT guidance on calculating planned expenditures would work in the many different contexts in which it would have to operate. As a result, many problems came to light only after DOT had issued initial guidance and states had submitted their first certifications. DOT issued guidance seven times during the first year after the act was signed to clarify how states were to calculate their planned or actual expenditures for their maintenance-of-effort certifications. 
The last guidance—issued February 9, 2010—communicated DOT’s decision that the maintenance-of-effort requirement would be applied to each of the program areas funded by the Recovery Act, rather than cumulatively for all the programs. The implication of this decision is that fewer states met the requirement. DOT invested a significant amount of time and work to ensure consistency across states on how compliance with the maintenance-of-effort provision would be certified and reported. As a result, DOT is well-positioned to understand lessons learned—what worked, what did not, and what could be improved in the future. DOT and state officials told us that while the maintenance-of-effort requirement can be useful for ensuring continued investment in transportation, more flexibility to allow for differences in states and programs, and to allow adjustments for unexpected changes to states’ economic conditions, should be considered for future provisions. For example, for the education maintenance-of-effort requirement, the Recovery Act allows the Secretary of Education to waive state maintenance-of-effort requirements under certain circumstances and allows states to choose the basis they use to measure maintenance of effort. The maintenance-of-effort requirement for transportation programs proved difficult for states to apply across various transportation programs because of varying and complex revenue sources to fund the programs. Many states did not have an existing means to identify planned transportation expenditures for a specific period, and their financial and accounting systems did not capture that data. Therefore, according to DOT and some state officials, a more narrowly focused requirement applying only to programs administered by state DOTs or to programs that typically receive state funding could help address the maintenance-of-effort challenges. Consideration of economically distressed areas. 
Our previous reports have identified challenges DOT faced in implementing the Recovery Act requirement that states give priority to highway projects located in economically distressed areas. For example, while an economically distressed area is statutorily defined, we found that there was substantial variation in how some states identified economically distressed areas and the extent to which some states prioritized projects in those areas. We reported instances of states developing their own eligibility requirements for economically distressed areas using data or criteria not specified in the Public Works and Economic Development Act. Three states—Arizona, California, and Illinois—developed their own eligibility requirements or interpreted the special-needs criterion in a way that overstated the number of eligible counties, and thus the amount of funds, directed to economically distressed areas. Officials in these three states told us that they did so to respond to rapidly changing economic conditions. In May 2010, we recommended that DOT advise states to correct their reporting on economically distressed area designations, and in July 2010 FHWA instructed its division offices to advise states with identified errors to revise their economically distressed area designations. In September 2010, we recommended that DOT make these data publicly available to ensure that Congress and the public have accurate information on the extent to which Recovery Act funds were directed to areas most severely affected by the recession and the extent to which states prioritized these areas in selecting projects for funding. In March 2011, DOT posted an accounting of the extent to which states directed Recovery Act transportation funds to projects located in economically distressed areas on its Web site, and we are in the process of assessing these data. 
According to officials in most states we visited, state transportation departments considered the requirement to prioritize projects in economically distressed areas in addition to other immediate and long-term transportation goals, as the Recovery Act required. For example, officials in Washington State said that they considered federally recognized economically distressed areas as one of several criteria when selecting projects. Other criteria included state economic data and projects that would be ready to proceed in a short amount of time. However, state officials were also uncertain what the economically distressed area requirement was intended to accomplish, such as whether it was intended to provide jobs to people living in those areas or to deliver new infrastructure to those areas. The economically distressed area provision proved difficult to implement because of changing economic conditions and the difficulty of targeting assistance to economically distressed areas, and it is unclear whether it achieved its intended goal. We found that the Recovery Act requirement to obligate funds quickly likely influenced the types of projects selected for funding in some states. State and local officials we interviewed noted that the primary factor considered in project selection was meeting Recovery Act deadlines for obligating funds. Federal and state officials also noted the tension between the purposes of the Recovery Act, which included preserving and creating jobs and promoting economic recovery, and investing in infrastructure to provide long-term economic benefits, among other Recovery Act goals. For example, the Recovery Act provided a relatively quick infusion of federal funding for highway and transit programs, but as we noted earlier, the majority of projects selected for highway and transit funding were pavement rehabilitation and bus purchases. 
State and local officials told us that to meet the act’s obligation deadlines they prioritized projects that had already progressed significantly through the project development and design process and could move to construction. In some cases, state officials told us that this prevented other, potentially higher-priority projects from being selected for funding. As a result, many Recovery Act highway projects selected for funding did not require extensive environmental clearances, were quick to design, and were quickly obligated, bid, and completed. Several states told us that their mix of highway projects would likely have been different had the obligation deadlines been longer. For example, officials in California told us that had the Recovery Act timelines been longer they would have likely pursued more large-scale projects. According to Texas transportation officials, projects that had already progressed significantly through the project development process were preferred. However, transportation officials in Virginia and Washington State said that the Recovery Act funding allowed their states to select projects that would meet the obligation time frames while also addressing state priorities, such as investing in infrastructure with potential long-term economic impacts and addressing preservation and safety needs. We have reported that allocating federal funding for surface transportation based on performance in general, and directing some portion of federal funds on a competitive basis to projects of national or regional significance in particular, can more effectively address certain challenges facing the nation’s surface transportation programs. In our recent reports on the high speed intercity passenger rail and TIGER programs, we found that while DOT generally followed recommended grantmaking practices, DOT could have documented more information about its award decisions. 
Both the high speed intercity passenger rail and TIGER programs represent important steps toward investing in projects of regional and national significance through a merit-based, competitive process. We noted a natural tension between providing funds based on merit and performance and providing funds on a formula basis to achieve equity among the states. A formula approach can potentially result in projects of national or regional significance that cross state lines and involve more than one transportation mode not competing well at the state level for funds. Given that the Recovery Act was intended to create and preserve jobs and promote economic recovery nationwide, Congress believed it important that TIGER grant funding be geographically dispersed. As we noted in our recent report discussing the TIGER grant program, when Congress considers future DOT discretionary grant programs, it may wish to consider balancing the goals of merit-based project selection with geographic distribution of funds and limit, as appropriate, the influence of geographic considerations. We provided a draft of this report to DOT for review and comment. DOT generally agreed with our findings and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to congressional committees with responsibilities for transportation issues, the Secretary of Transportation, and the Director of the Office of Management and Budget. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or herrp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
The objectives of this report were to determine the (1) status, use, and outcomes of the American Recovery and Reinvestment Act of 2009 (Recovery Act) transportation funding nationwide and in selected states; (2) actions taken by federal, state, and local agencies to monitor and ensure accountability of Recovery Act transportation funds; (3) changes in the quality of jobs data reported by Recovery Act recipients of transportation funds over time; and (4) challenges faced and lessons learned from the Department of Transportation (DOT) and recipients. To address these objectives, we obtained and analyzed data provided to us from the Federal Aviation Administration (FAA), Federal Highway Administration (FHWA), Federal Transit Administration (FTA), and Maritime Administration (MARAD), as well as data we obtained from the operating administrations’ Recovery Act Web sites. For the highway and transit programs, these data included the amount of funds obligated and the amount reimbursed by FHWA and FTA through May 31, 2011. These data also included funds awarded by project type, outlays for all regular Federal-Aid Highway Program funds through September 2010, and maintenance-of-effort certification data. For the aviation programs, FAA provided a listing of airport improvement and facilities and equipment grants, including award data, project amount, project description, and project completion dates. For the small shipyard grants, MARAD provided us with data for each grant, including award amount, project description, amount obligated, and outlays to date. We assessed the reliability of the program data we used by reviewing DOT documentation and Inspector General reports on DOT’s financial management system and interviewing knowledgeable DOT officials about the quality of the data and controls in place to ensure data accuracy. We determined the data were sufficiently reliable for our purposes. 
In addition, to familiarize ourselves with all the transportation programs and track their ongoing status, we reviewed program documentation, both publicly available online and internal documents provided by the agencies; reviewed prior GAO reports on the Recovery Act transportation programs; and reviewed reports published by the DOT Office of Inspector General (OIG). We also interviewed DOT officials from FAA, FHWA, FTA, MARAD, and the Office of the Secretary who were involved in managing Recovery Act programs. During these interviews, we discussed the status of expenditures, challenges facing states or recipients in spending the funds, and the expected impacts from the funds. We also met with representatives from the American Association of State Highway and Transportation Officials. We conducted site visits to six states: California, Indiana, Massachusetts, Texas, Virginia, and Washington. In each of the states, we met with representatives of the FHWA division office, state department of transportation, and a local metropolitan planning organization. We also visited Recovery Act transportation projects in each state, except Virginia. In several of these states, we met with officials representing Governors’ offices overseeing Recovery Act-funded programs. Our criteria for selecting these states included total FHWA funding available, number of projects selected, and average obligation per project. Our selected states represent about 25 percent, or $6.9 billion, of the $27.5 billion available to states for Recovery Act highway investments, and we selected states with a range of allotted funding, including four that were above the national average and two that were below it. We also considered the Recovery Act highway project status and selected states with a range of underway and completed projects. 
In selecting our state sample, we also considered geographic dispersion and a mix of more and less populous states, as well as obtaining a mixture of states GAO had previously tracked as part of our prior Recovery Act oversight (California, Massachusetts, and Texas) and states that we had not visited previously to discuss Recovery Act transportation issues (Indiana, Virginia, and Washington). This selection of states enabled us to maintain continuity on issues that GAO had previously reported on, such as economically distressed areas, and to speak with transportation officials who were able to provide fresh perspectives on the lessons learned from the Recovery Act transportation experience in their state. To determine the actions, if any, federal, state, and local oversight entities were taking to monitor and ensure accountability of Recovery Act transportation funds, we reviewed OIG reports on various Recovery Act transportation topics and interviewed OIG staff to learn more about their findings and coordinate our audit work. In each of the six states we visited, we contacted state auditors to learn about any efforts at the state level to monitor Recovery Act transportation funding. In those states where the state auditor had conducted performance audits on Recovery Act transportation programs, we interviewed state audit representatives to better understand their ongoing oversight work, challenges faced by recipients in using funds and transportation-related audit findings, and any lessons learned. We also reviewed Single Audit reports for fiscal year 2010 in each of our six sample states. At the local level, we reviewed reports prepared by local government auditors for the six states we visited. We obtained these reports from the Association of Local Government Auditors’ Web site. 
The recipient reporting section of this report responds to the Recovery Act’s mandate that we comment on the estimates of jobs created or retained by direct recipients of Recovery Act funds. For our review of the seventh submission of recipient reports, covering the period January 1 to March 31, 2011, we continued our monitoring of errors or potential problems by repeating many of the analyses and edit checks reported in our six prior reviews covering the period February 2009 through December 31, 2010. To examine how the quality of jobs data reported by recipients of Recovery Act transportation funds has changed over time, we compared the seven quarters of recipient reporting data that were publicly available at Recovery.gov on April 30, 2011. We performed edit checks and other analyses on the transportation recipient-reported data, which included matching DOT-provided funding data from the Financial Management Information System with recipient-reported funding data and reviewing FTE reporting patterns. Our match showed a high degree of agreement between DOT recipient funding information and the information reported by recipients directly to FederalReporting.gov. We also examined the reliability of recipient-reported data, and we reviewed FHWA’s efforts to ensure reliability of the recipient-reported data by comparing them with data contained in DOT’s Recovery Act Data System (RADS). Our assessment activities included reviewing documentation of system processes, conducting logic tests for key variables, and assessing data for out-of-range values. We reviewed agency documentation for RADS and FHWA’s guidance for validating recipient-reported data in that system. We also reviewed a February 2010 OIG report assessing the Recovery Act recipient data oversight at DOT and other agencies. In general, we consider the data used to be sufficiently reliable for purposes of this report. 
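The kind of matching and edit checking described above can be sketched roughly as follows. The award identifiers, field names, and plausibility bounds here are hypothetical illustrations of the approach, not GAO's or DOT's actual procedures or data.

```python
# Hypothetical agency records and recipient reports, keyed by award number.
dot_funding = {"A-001": 2_500_000, "A-002": 750_000, "A-003": 1_200_000}
recipient_reports = {
    "A-001": {"award_amount": 2_500_000, "ftes": 12.5},
    "A-002": {"award_amount": 700_000, "ftes": 3.0},    # funding disagrees
    "A-003": {"award_amount": 1_200_000, "ftes": -1.0},  # out-of-range value
}

def edit_checks(agency, reports):
    """Flag awards whose reported funding disagrees with agency records
    or whose FTE values fall outside a plausible range."""
    flags = []
    for award, rpt in reports.items():
        if award not in agency:
            flags.append((award, "not in agency records"))
            continue
        if rpt["award_amount"] != agency[award]:
            flags.append((award, "funding mismatch"))
        if not 0 <= rpt["ftes"] <= 10_000:  # hypothetical plausibility bounds
            flags.append((award, "FTE value out of range"))
    return flags

flags = edit_checks(dot_funding, recipient_reports)
print(flags)
```

A high degree of agreement, as reported in the text, would correspond to few flags relative to the number of awards matched.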
The results of our FTE analyses are limited to the transportation programs and time periods reviewed and are not generalizable to FTE reporting for any other program. To update the status of open recommendations from previous bimonthly and recipient report reviews, we obtained information from agency officials on actions taken in response to recommendations. We conducted this performance audit from September 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In this appendix, we update the status of agencies’ efforts to implement the 26 open recommendations and 2 newly implemented recommendations from our previous bimonthly and recipient reporting reviews. Recommendations that were listed as implemented or closed in a prior report are not repeated here. Lastly, we address the status of our Matters for Congressional Consideration. Given the concerns we have raised about whether program requirements were being met, we recommended in May 2010 that the Department of Energy (DOE), in conjunction with both state and local weatherization agencies, develop and clarify weatherization program guidance that (1) clarifies the specific methodology for calculating the average cost per home weatherized to ensure that the maximum average cost limit is applied as intended; (2) accelerates current DOE efforts to develop national standards for weatherization training, certification, and accreditation, which are currently expected to take 2 years to complete; (3) develops a best practice guide for key internal controls that should be present at the local weatherization agency level to ensure compliance with key program requirements; (4) sets time frames for development and implementation of state monitoring programs; and (5) revisits the various methodologies used in determining the weatherization work that should be performed based on the consideration of cost-effectiveness and develops standard methodologies that ensure that priority is given to the most cost-effective weatherization work. To validate any methodologies created, this effort should include the development of standards for accurately measuring the long-term energy savings resulting from weatherization work conducted. In addition, given that state and local agencies have felt pressure to meet a large increase in production targets while effectively meeting program requirements and have experienced some confusion over production targets, funding obligations, and associated consequences for not meeting production and funding goals, we recommended that DOE clarify its production targets, funding deadlines, and associated consequences while providing a balanced emphasis on the importance of meeting program requirements. DOE generally concurred with these recommendations and has made some progress on implementing them. For example, to clarify the methodology for calculating the average cost per home, DOE has developed draft guidance to help grantees develop consistency in their average cost per unit calculations. The guidance further clarifies the general cost categories that are included in the average cost per home. DOE anticipates issuance of the guidance in June 2011. DOE has also taken steps to address our recommendation that it develop a best practice guide for key internal controls. DOE distributed a memorandum dated May 13, 2011, to grantees reminding them of their responsibilities to ensure compliance with internal controls and the consequences of failing to do so. This memo is currently under internal review and DOE anticipates it will be released in May 2011. 
To better ensure that Energy Efficiency and Conservation Block Grant (EECBG) funds are used to meet Recovery Act and program goals, we recommended in April 2011 that DOE take the following actions: (1) explore a means to capture information on the monitoring processes of all recipients to make certain that recipients have effective monitoring practices, and (2) solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics and verify that recipients who use DOE’s estimation tool use the most recent version when calculating these metrics. DOE generally concurred with these recommendations, stating that “implementing the report’s recommendations will help ensure that the Program continues to be well managed and executed.” DOE also provided additional information on steps it has initiated or planned to implement. In particular, with respect to our first recommendation, DOE elaborated on additional monitoring practices it performs over high-dollar-value grant recipients, such as its reliance on audit results obtained in accordance with the Single Audit Act and its update to the EECBG program requirements in the Compliance Supplement to OMB Circular No. A-133. However, these monitoring practices focus only on larger grant recipients, and we believe that the program could be more effectively monitored if DOE captured information on the monitoring practices of all recipients. With respect to our second recommendation, DOE officials said that in order to provide a reasonable estimate of energy savings, the program currently reviews energy process and impact metrics submitted each quarter for reasonableness, works with grantees to correct unreasonable metrics, and works with grantees through closeout to refine metrics. In addition, DOE officials said that they plan to take a scientific approach to overall program evaluation during the formal evaluation process at the conclusion of the program, which will occur in December 2012. 
However, DOE has not yet identified any specific plans to solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics or to verify that recipients who use DOE’s estimation tool use the most recent version when calculating these metrics. We recommended that the Environmental Protection Agency (EPA) Administrator work with the states to implement specific oversight procedures to monitor and ensure subrecipients’ compliance with the provisions of the Recovery Act-funded Clean Water and Drinking Water State Revolving Fund (SRF) program. In part in response to our recommendation, EPA provided additional guidance to the states regarding their oversight responsibilities, with an emphasis on enhancing site-specific inspections. Specifically, in June 2010, the agency developed and issued an oversight plan outline for Recovery Act projects that provides guidance on the frequency, content, and documentation related to regional reviews of state Recovery Act programs and regional and state reviews of specific Recovery Act projects. We found that EPA regions have reviewed all 50 states’ Clean and Drinking Water SRF programs at least once since the Recovery Act was enacted, and have generally carried out the oversight instructions in EPA’s plan. For example, regional officials reviewed files with state documents and information to ensure proper controls over Davis-Bacon, Buy American, and other Recovery Act requirements. Regional staff also visited one drinking water project in every state, but did not meet this goal for clean water projects due to time and budget constraints. We also found that EPA headquarters officials have been reviewing the regions’ performance evaluation reports for states, and the officials said that they implemented a 60-day time frame for completing these reports. 
In the nine states that we reviewed in this report, program officials described their site visits to projects and the use of the EPA inspection checklist (or state equivalent), in accordance with EPA’s oversight plan. State officials told us that they visit their Recovery Act projects at least once during construction and sometimes more frequently depending on the complexity of the project. We consider these agency actions to have addressed our recommendation. To oversee the extent to which grantees are meeting the program goal of providing services to children and families and to better track the initiation of services under the Recovery Act, we recommended that the Director of the Office of Head Start (OHS) should collect data on the extent to which children and pregnant women actually receive services from Head Start and Early Head Start grantees. The Department of Health and Human Services (HHS) disagreed with our recommendation. OHS officials stated that attendance data are adequately examined in triennial or yearly on-site reviews and in periodic risk management meetings. Because these reviews and meetings do not collect or report data on service provision, we continue to believe that tracking services to children and families is an important measure of the work undertaken by Head Start and Early Head Start service providers. To help ensure that grantees report consistent enrollment figures, we recommended that the Director of OHS should better communicate a consistent definition of “enrollment” to grantees for monthly and yearly reporting and begin verifying grantees’ definition of “enrollment” during triennial reviews. OHS issued informal guidance on its Web site clarifying monthly reporting requirements to make them consistent with annual enrollment reporting. 
While this guidance directs grantees to include in enrollment counts all children and pregnant women who have received a specified minimum of services, it could be further clarified by specifying that counts should include only those children and pregnant women. According to HHS officials, OHS is considering further regulatory clarification. To provide grantees consistent information on how and when they will be expected to obligate and expend federal funds, we recommended that the Director of OHS should clearly communicate its policy to grantees for carrying over or extending the use of Recovery Act funds from one fiscal year into the next. HHS indicated that OHS will issue guidance to grantees on obligation and expenditure requirements, as well as improve efforts to effectively communicate the mechanisms in place for grantees to meet the requirements for obligation and expenditure of funds. To better consider known risks in scoping and staffing required reviews of Recovery Act grantees, we recommended that the Director of OHS should direct OHS regional offices to consistently perform and document Risk Management Meetings and incorporate known risks, including financial management risks, into the process for staffing and conducting reviews. HHS reported that OHS is reviewing the risk management process to ensure it is consistently performed and documented in its centralized data system and that it has taken related steps, such as requiring the Grant Officer to identify known or suspected risks prior to an on-site review. To facilitate understanding of whether regional decisions regarding waivers of the program’s matching requirement are consistent with Recovery Act grantees’ needs across regions, we recommended that the Director of OHS should regularly review waivers of the nonfederal matching requirement and associated justifications. HHS reports that it has taken actions to address our recommendation. 
For example, HHS reports that OHS has conducted a review of waivers of the nonfederal matching requirement and tracked all waivers in the Web-based data system. HHS further reports that OHS has determined that the waivers are reasonably consistent across regions. Because the absence of third-party investors reduces the amount of overall scrutiny Tax Credit Assistance Program (TCAP) projects would receive and the Department of Housing and Urban Development (HUD) is currently not aware of how many projects lacked third-party investors, we recommended that HUD should develop a risk-based plan for its role in overseeing TCAP projects that recognizes the level of oversight provided by others. HUD responded to our recommendation by saying it will identify projects that are not funded by HOME Investment Partnerships Program (HOME) funds and projects that have a nominal tax credit award. However, HUD said it will not be able to identify these projects until it can access the data needed to perform the analysis, and it does not receive access to those data until after projects have been completed. HUD has not yet taken any action on this recommendation because it only has data on the small percentage of projects completed to date. It is too early in the process to be able to identify projects that lack third-party investors. The agency will take action once it is able to collect the necessary information from the project owners and the state housing finance agencies. 
To enhance the Department of Labor’s (Labor) ability to manage its Recovery Act and regular Workforce Investment Act (WIA) formula grants and to build on its efforts to improve the accuracy and consistency of financial reporting, we recommended that the Secretary of Labor take the following actions: To determine the extent and nature of reporting inconsistencies across the states and better target technical assistance, conduct a one-time assessment of financial reports that examines whether each state’s reported data on obligations meet Labor’s requirements. To enhance state accountability and to facilitate their progress in making reporting improvements, routinely review states’ reporting on obligations during regular state comprehensive reviews. Labor agreed with both of our recommendations and has begun to take some actions to implement them. To determine the extent of reporting inconsistencies, Labor awarded a contract in September 2010 to perform an assessment of state financial reports to determine if the data reported are accurate and reflect Labor’s guidance on reporting of obligations and expenditures. Since then, Labor has completed interviews with all states and is preparing a report of the findings. To enhance states’ accountability and facilitate their progress in making improvements in reporting, Labor has drafted guidance on the definitions of key financial terms such as “obligations,” which is currently in final clearance. After the guidance is issued, Labor plans to conduct a systemwide webinar and interactive training on this topic to reinforce how accrued expenditures and obligations are to be reported. 
Our September 2009 bimonthly report identified a need for additional federal guidance in defining green jobs, and we made the following recommendation to the Secretary of Labor: To better support state and local efforts to provide youth with employment and training in green jobs, provide additional guidance about the nature of these jobs and the strategies that could be used to prepare youth for careers in green industries. Labor agreed with our recommendation and has begun to take several actions to implement it. Labor’s Bureau of Labor Statistics has developed a definition of green jobs, which was finalized and published in the Federal Register on September 21, 2010. In addition, Labor continues to host a Green Jobs Community of Practice, an online virtual community available to all interested parties. As part of this effort, in December 2010, Labor hosted its first Recovery Act Grantee Technical Assistance Institute, which focused on critical success factors for achieving the goals of the grants and sustaining the impact into the future. The department also hosted a symposium on April 28-29, 2011, with the green jobs state Labor Market Information Improvement grantees. Symposium participants shared recent research findings, including efforts to measure green jobs, occupations, and training in their states. In addition, the department released a new career exploration tool called “mynextmove” (www.mynextmove.gov) in February 2011. This Web site includes the Occupational Information Network (O*NET) green leaf symbol to highlight green occupations. Furthermore, Labor’s implementation study of the Recovery Act-funded green jobs training grants is still ongoing. The interim report is expected in late 2011. To leverage Single Audits as an effective oversight tool for Recovery Act programs, we recommended that the Director of the Office of Management and Budget (OMB) 1. 
provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance; 2. take additional efforts to provide more timely reporting on internal controls for Recovery Act programs for 2010 and beyond; 3. evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act; 4. issue Single Audit guidance in a timely manner so that auditors can efficiently plan their audit work; 5. issue the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year; 6. explore alternatives to help ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner; and 7. shorten the timeframes required for issuing management decisions by federal agencies to grant recipients. (1) To provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance, the OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations 2010 Compliance Supplement (Compliance Supplement) required all federal programs with expenditures of Recovery Act awards to be considered as programs with higher risk when performing standard risk-based tests for selecting programs to be audited. The auditor’s determination of the programs to be audited is based upon an evaluation of the risks of noncompliance occurring that could be material to an individual major program. The Compliance Supplement has been the primary mechanism that OMB has used to provide Recovery Act requirements and guidance to auditors. One presumption underlying the guidance is that smaller programs with Recovery Act expenditures could be audited as major programs when using a risk-based audit approach. 
The most significant risks are associated with newer programs that may not yet have the internal controls and accounting systems in place to help ensure that Recovery Act funds are distributed and used in accordance with program regulations and objectives. Since Recovery Act spending is projected to continue through 2016, we believe that it is essential that OMB provide direction in Single Audit guidance to help to ensure that smaller programs with higher risk are not automatically excluded from receiving audit coverage based on their size and standard Single Audit Act requirements. In May 2011, we spoke with OMB officials and reemphasized our concern that future Single Audit guidance provide instruction that helps to ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. OMB officials agreed and stated that such guidance is included in the 2011 Compliance Supplement, which was to be issued by March 31, 2011. On June 1, 2011, OMB issued the 2011 Compliance Supplement, which contains language regarding the higher-risk status of Recovery Act programs, requirements for separate reporting of findings, and a list of Recovery Act programs to aid the auditors. We will continue to monitor OMB’s efforts to provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. (2) To address the recommendation for taking additional efforts to encourage more timely reporting on internal controls for Recovery Act programs for 2010 and beyond, OMB commenced a second voluntary Single Audit Internal Control Project (project) in August 2010 for states that received Recovery Act funds in fiscal year 2010. Fourteen states volunteered to participate in the second project. 
One of the project’s goals is to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs so that corrective action can be taken more quickly. Specifically, the project encourages participating auditors to identify and communicate deficiencies in internal control to program management 3 months sooner than the 9-month time frame currently required under OMB Circular No. A-133. Auditors were to communicate these deficiencies through interim internal control reports by December 31, 2010. The project also requires that program management provide the federal awarding agency a corrective action plan aimed at correcting any deficiencies 2 months earlier than required under statute. Upon receiving the corrective action plan, the federal awarding agency has 90 days to provide a written decision to the cognizant federal agency for audit detailing any concerns it may have with the plan. Each participating state was to select a minimum of four Recovery Act programs for inclusion in the project. We assessed the results of the first OMB Single Audit Internal Control Project for fiscal year 2009 and found that it was helpful in communicating internal control deficiencies earlier than required under statute. We reported that 16 states participated in the first project and that the states selected at least two Recovery Act programs for the project. We also reported that the project’s dependence on voluntary participation limited its scope and coverage and that voluntary participation may also bias the project’s results by excluding from analysis states or auditors with practices that cannot accommodate the project’s requirement for early reporting of control deficiencies. 
Overall, we concluded that although the project’s coverage could have been more comprehensive, the analysis of the project’s results provided meaningful information to OMB for better oversight of the Recovery Act programs selected and information for making future improvements to the Single Audit guidance. OMB’s second Single Audit Internal Control Project is in progress, and its planned completion date is June 2011. OMB plans to assess the project’s results after its completion date. The 14 participating states have met the milestones for submitting interim internal control reports by December 31, 2010, and their corrective action plans by January 31, 2011. By April 30, 2011, the federal awarding agencies were to provide their interim management decisions to the cognizant agency for audit. We discussed the preliminary status of these interim management decisions with OMB officials and, as of May 24, 2011, only 1 of the 10 federal awarding agencies had submitted some management decisions on the auditees’ corrective action plans as required by the project’s guidelines. On May 24, 2011, officials from the cognizant agency for audit, HHS, reemphasized to the federal awarding agencies their responsibilities for providing management decisions in accordance with the project’s due dates. In our review of the 2009 project, we noted similar concerns that federal awarding agencies submitted management decisions on proposed corrective actions in an untimely manner and made recommendations in this area, which are discussed later in this report. We will continue to monitor the status of OMB’s efforts to implement this recommendation and believe that OMB needs to continue taking steps to encourage timelier reporting on internal controls through Single Audits for Recovery Act programs. 
(3) We previously recommended that OMB evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. OMB officials have stated that they are aware of the increase in workload for state auditors who perform Single Audits due to the additional funding to Recovery Act programs and corresponding increases in programs being subject to audit requirements. OMB officials stated that they solicited suggestions from state auditors to gain further insights to develop measures for providing audit relief. However, OMB has not yet put in place a viable alternative that would provide relief to all state auditors that conduct Single Audits. For state auditors that are participating in the second OMB Single Audit Internal Control Project, OMB has provided some audit relief by modifying the requirements under Circular No. A-133 to reduce the number of low-risk programs to be included in some project participants’ risk assessment requirements. OMB is taking initiatives to examine the Single Audit process. OMB officials have stated that they have created a workgroup which combines the Executive Order 13520—Reducing Improper Payments Section 4(b) Single Audit Recommendations Workgroup (Single Audit Workgroup) and the Circular No. A-87—Cost Principles for State, Local, and Indian Tribal Governments Workgroup (Circular No. A-87 Workgroup). The Single Audit Workgroup is composed of representatives from the federal audit community; federal agency management officials involved in overseeing the Single Audit process and programs subject to that process; representatives from the state audit community; and staff from OMB. OMB officials tasked the Single Audit Workgroup with developing recommendations to improve the effectiveness of Single Audits of nonfederal entities that expend federal funds in order to help identify and reduce improper payments. 
In June 2010, the Single Audit Workgroup developed recommendations, some of which are targeted toward providing audit relief to auditors who conduct audits of grantees and grants that are under the requirements of the Single Audit Act. OMB officials stated that the recommendations warrant further study and that the workgroup is continuing its work on the recommendations. OMB officials also stated that the Circular No. A-87 Workgroup has also made recommendations which could impact Single Audits and that the workgroups have been collaborating to ensure that the recommendations relating to Single Audit improvements are compatible and could improve the Single Audit process. The combined workgroups plan to issue a report to OMB by August 29, 2011. We will continue to monitor OMB’s progress to achieve this objective. (4) (5) With regard to issuing Single Audit guidance in a timely manner, and specifically the OMB Circular No. A-133 Compliance Supplement, we previously reported that OMB officials intended to issue the 2011 Compliance Supplement by March 31, 2011. In December 2010, OMB provided to the American Institute of Certified Public Accountants (AICPA) a draft of the 2011 Compliance Supplement, which the AICPA published on its Web site. In January 2011, OMB officials reported that the production of the 2011 Compliance Supplement was on schedule for issuance by March 31, 2011. OMB issued the 2011 Compliance Supplement on June 1, 2011. We spoke with OMB officials regarding the reasons for the delay of this important guidance to auditors. OMB officials stated that the agency’s efforts were refocused toward priorities relating to the expiration of several continuing resolutions that temporarily funded the federal government for fiscal year 2011, and the Department of Defense and Full-Year Continuing Appropriations Act, 2011, which was passed by the Congress in April 2011, averting a governmentwide shutdown. 
OMB officials stated that, as a result, although they had taken steps to issue the 2011 Compliance Supplement by the end of March, such as starting the process earlier in 2010 and giving agencies strict deadlines for program submissions, they were only able to issue it on June 1, 2011. We will continue to monitor OMB’s progress to achieve this objective. (6) (7) In October 2010, OMB officials stated that, based on their assessment of the results of the project, they had discussed alternatives for helping to ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner, including possibly shortening the time frames required for federal agencies to provide their management decisions to grant recipients. However, OMB officials have yet to decide on the course of action that they will pursue to implement this recommendation. OMB officials acknowledged that the results of the 2009 OMB Single Audit Internal Control Project confirmed that this issue continues to be a challenge. They stated that they have met individually with several federal awarding agencies that were late in providing their management decisions in the 2009 project to discuss the measures that the agencies will take to improve the timeliness of their management decisions. Earlier in this report, we discussed that preliminary observations of the results of the second project have identified that several federal awarding agencies’ management decisions on the corrective actions that were due April 30, 2011, have also not been issued in a timely manner. 
In March 2010, OMB issued guidance under memo M-10-14, item 7 (http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m1014.pdf), that called for federal awarding agencies to review reports prepared by the Federal Audit Clearinghouse regarding Single Audit findings and submit summaries of the highest-risk audit findings by major Recovery Act program, as well as other relevant information on the federal awarding agency’s actions regarding these areas. In May 2011, we reviewed selected reports prepared by federal awarding agencies that were titled Use of Single Audit to Oversee Recipient’s Recovery Act Funding. These reports were required by memo M-10-14 and based on reports from the Federal Audit Clearinghouse for fiscal year 2009. The reports were developed for entities where the auditor issued a qualified, adverse, or disclaimer audit opinion. The reports identified items such as (1) significant risks to the respective program that was audited; (2) material weaknesses, instances of noncompliance, and audit findings that put the program at risk; (3) actions taken by the agency; and (4) actions planned by the agency. OMB officials have stated that they plan to use this information to identify trends that may require clarification or additional guidance in the Compliance Supplement. OMB officials also stated that they are working on a metrics project with the Recovery Accountability and Transparency Board to develop metrics for determining how federal awarding agencies are to use information available in the Single Audit and which can serve as performance measures. In May 2011, we attended a presentation of the OMB Workgroup that is working with the Recovery Accountability and Transparency Board in developing the metrics project, and we note that it is making progress. OMB officials have stated that the metrics could be applied at the agency level, by program, to allow for analysis of Single Audit findings, along with other uses to be determined. 
One goal of the metrics project is to increase the effectiveness and timeliness of federal awarding agencies’ actions to resolve Single Audit findings. We will continue to monitor the progress of these efforts to determine the extent to which they improve the timeliness of federal agencies’ actions to resolve audit findings so that risks to Recovery Act funds are reduced and internal controls in Recovery Act programs are strengthened. To ensure that Congress and the public have accurate information on the extent to which the goals of the Recovery Act are being met, we recommended that the Secretary of Transportation direct the Federal Highway Administration (FHWA) to take the following two actions: Develop additional rules and data checks in the Recovery Act Data System, so that these data will accurately identify contract milestones such as award dates and amounts, and provide guidance to states to revise existing contract data. Make publicly available—within 60 days after the September 30, 2010, obligation deadline—an accurate accounting and analysis of the extent to which states directed funds to economically distressed areas, including corrections to the data initially provided to Congress in December 2009. In its response, DOT stated that it implemented measures to further improve data quality in the Recovery Act Data System, including additional data quality checks, as well as providing states with additional training and guidance to improve the quality of data entered into the system. DOT also stated that as part of its efforts to respond to our draft September 2010 report in which we made this recommendation on economically distressed areas, it completed a comprehensive review of projects in these areas, which it provided to GAO for that report. DOT recently posted an accounting of the extent to which states directed Recovery Act transportation funds to projects located in economically distressed areas on its Web site, and we are in the process of assessing these data. 
To better understand the impact of Recovery Act investments in transportation, we believe that the Secretary of Transportation should ensure that the results of these projects are assessed and a determination made about whether these investments produced long-term benefits. Specifically, in the near term, we recommended that the Secretary direct FHWA and the Federal Transit Administration (FTA) to determine the types of data and performance measures they would need to assess the impact of the Recovery Act and the specific authority they may need to collect data and report on these measures. In its response, DOT noted that it expected to be able to report on Recovery Act outputs, such as the miles of road paved, bridges repaired, and transit vehicles purchased, but not on outcomes, such as reductions in travel time, nor did it commit to assessing whether transportation investments produced long-term benefits. DOT further explained that limitations in its data systems, coupled with the magnitude of Recovery Act funds relative to overall annual federal investment in transportation, would make assessing the benefits of Recovery Act funds difficult. DOT indicated that, with these limitations in mind, it is examining its existing data availability and, as necessary, would seek additional data collection authority from Congress if it became apparent that such authority was needed. DOT plans to take some steps to assess its data needs, but it has not committed to assessing the long-term benefits of Recovery Act investments in transportation infrastructure. We are therefore keeping our recommendation on this matter open. To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. 
We continue to believe that Congress should consider changes related to the Single Audit process. To the extent that additional coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. We continue to believe that Congress should consider changes related to the Single Audit process. To provide housing finance agencies (HFAs) with greater tools for enforcing program compliance, in the event the Section 1602 Program is extended for another year, Congress may want to consider directing the Department of the Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. We continue to believe that Congress should consider directing the Department of the Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. In addition to the contact named above, Thomas Beall, Jonathan Carver, Andrew Ching, John Healey, Sharon Hogan, Thomas James, Bert Japikse, Delwen Jones, Heather MacLeod, SaraAnn Moessbauer, Josh Ormond, Carol Patey, Beverly Ross, Jonathan Stehle, and Pamela Vines made key contributions to this report.
|
This report responds to two GAO mandates under the American Recovery and Reinvestment Act of 2009 (Recovery Act). It is the latest report on the uses of and accountability for Recovery Act funds in selected states and localities, focusing on the $48.1 billion provided to the Department of Transportation (DOT) to invest in transportation infrastructure. This report also examines the quality of recipients' reports about the jobs created and retained with Recovery Act transportation funds. This report addresses the (1) status, use, and outcomes of Recovery Act transportation funding nationwide and in selected states; (2) actions taken by federal, state, and other agencies to monitor and ensure accountability for those funds; (3) changes in the quality of jobs data reported by Recovery Act recipients of transportation funds over time; and (4) challenges faced and lessons learned from DOT and recipients. GAO analyzed DOT and recipient-reported data; reviewed federal legislation, guidance, and reports; reviewed prior work and other studies; and interviewed DOT, state, and local officials. As of May 31, 2011, nearly $45 billion (about 95 percent) of Recovery Act transportation funds had been obligated for over 15,000 projects nationwide, and more than $28 billion had been expended. Recipients continue to report using Recovery Act funds to improve the nation's transportation infrastructure. Highway funds have been primarily used for pavement improvement projects, and transit funds have been primarily used to upgrade transit facilities and purchase buses. Recovery Act funds have also been used to rehabilitate airport runways and improve Amtrak's infrastructure. The Recovery Act helped fund transportation jobs, but long-term benefits are unclear. For example, according to recipient-reported data, transportation projects supported between approximately 31,460 and 65,110 full-time equivalents (FTEs) quarterly from October 2009 through March 2011. 
Officials reported other benefits, including improved coordination among federal, state, and local officials. However, the impact of Recovery Act investments in transportation is unknown, and GAO has recommended that DOT determine the data needed to assess the impact of these investments. Federal, state, and local oversight entities continue their efforts to ensure the appropriate use of Recovery Act transportation funds, and recent reviews revealed no major concerns. The DOT Inspector General found that DOT generally complied with Recovery Act aviation, highway, and rail program requirements. Similarly, state and local oversight entities' performance reviews and audits generally did not find problems with the use of Recovery Act transportation funds. GAO's analysis of Recovery.gov data reported by transportation grant recipients showed that the number of FTEs reported, the number of recipients filing reports, and the portion of recipients reporting any FTEs decreased over the past two reporting quarters as an increasing number of projects approached completion or were awaiting financial closeout. The Federal Highway Administration performs automated checks to help ensure the validity of recipient-reported data and observed fewer data quality issues than in previous quarters but does not plan to use the data internally. Certain Recovery Act provisions proved challenging. For example, DOT and states faced numerous challenges in implementing the maintenance-of-effort requirement, which required states to maintain their planned level of spending or be ineligible to participate in the August 2011 redistribution of obligation authority under the Federal-Aid Highway Program. In January 2011, DOT reported that 29 states met the requirement while 21 states did not because of reductions in dedicated revenues for transportation, among other reasons. The economically distressed area provision also proved difficult to implement because of changing economic conditions. 
With regard to the high speed intercity passenger rail and Transportation Investment Generating Economic Recovery (TIGER) grant programs, GAO found that while DOT generally followed recommended grant-making practices, DOT could have better documented its award decisions. GAO updates the status of agencies' efforts to implement its previous recommendations but is making no new recommendations in this report. DOT officials generally agreed with GAO's findings and provided technical comments, which were incorporated as appropriate.
The Department of Energy (DOE) is responsible for some of the nation’s largest and most impressive scientific facilities. The agency’s nine national multiprogram laboratories employ more than 50,000 people and have annual operating budgets that exceed $6 billion. DOE estimates that more than $100 billion has been invested in the laboratories over the past 20 years. The laboratories’ work covers many scientific areas—from high-energy physics to advanced computing—at facilities located throughout the nation. Although DOE owns the laboratories, it contracts with universities and private-sector organizations for their management and operation—a practice that has made the laboratories more attractive to scientists and engineers. The laboratory contractors and DOE form unique partnerships at each site, but the Department remains responsible for providing the laboratories with their missions and overall direction, as well as for giving them specific direction to meet both program and administrative goals. The laboratories provide the nation with unique research and development (R&D) capabilities. Specifically, the laboratories enable researchers to work on complex, interdisciplinary problems that dominate current science and technology; permit the study of large-scale, high-risk problems that would be difficult for industry or universities to undertake; and provide unique research facilities for universities and industry to use while serving as focal points for research consortia. DOE’s laboratories have made wide-ranging contributions to defense and civilian technologies. For example, the laboratories have long produced and applied nuclear isotopes now used in thousands of diagnostic medical procedures daily. Safer cars and planes have evolved using computer crash simulation software developed at one laboratory. 
In 1994, the laboratories’ technological achievements received 25 of the 100 prestigious “R&D 100 Awards” given annually by R&D Magazine for the year’s most technologically significant products. Appendix I contains information on the staffing and funding, as well as the contractor and programmatic emphases, at each laboratory. When DOE was created in 1977, it inherited the national laboratories with a management structure that had evolved from the World War II “Manhattan Project,” whose mission was to design and build the world’s first atomic bombs. From this national security mission, the laboratories generated expertise that initially developed nuclear power as an energy source. The laboratories’ missions broadened in 1967, when the Congress recognized their role in conducting environmental as well as public health and safety-related research and development. In 1971, the Congress again expanded the laboratories’ role, permitting them to conduct nonnuclear energy research and development. During the 1980s, the Congress enacted laws to stimulate the transfer of technology from the laboratories to U.S. industry. DOE estimates that over the past 20 years, the nation has invested more than $100 billion in the laboratories. The 1990s have brought the most dramatic changes affecting the multiprogram laboratories, including the following. The Soviet Union’s collapse has reduced the nuclear arms race, raising questions about the need to maintain three separate weapons laboratories. The weapons laboratories, facing reduced funding in nuclear weapons research, have diversified their work in order to maintain their preeminent talent and facilities. Expectations are growing that all laboratories can and should help improve the nation’s economic competitiveness by working with industry to develop commercial technologies. As the laboratories have aged, concerns have arisen about their ability to maintain their skills in weapons programs. 
Major investments will be needed to provide up-to-date facilities and attract younger scientists. In light of the general budget austerity facing the federal government, a stable funding environment is no longer guaranteed, and the laboratories will increasingly need to show useful results. These and other forces have accelerated the laboratories’ diversification from defense and nuclear research. For example, the nuclear weapons laboratories—Los Alamos, Sandia, and Lawrence Livermore—although created to design, develop, and test nuclear weapons, now devote less than half of their budgets to work on nuclear weapons. While these laboratories have been affected most dramatically by recent geopolitical changes, all DOE laboratories have been influenced by recent events and are redirecting their priorities. The federal government owns the facilities and grounds of the laboratories and funds the work but has relied on contractors to manage and operate them. These contracts generally run for 5 years; however, some of the laboratories have been run by the same contractor for decades, even since their inception in the early 1940s. The laboratories’ history of relative autonomy in daily research and operational management has led to concerns about their business practices as well as their attention to environmental, safety, and health issues. The objective of this report was to identify and examine the principal issues affecting the laboratories’ missions and DOE’s approach to laboratory management. The Congress has expressed considerable interest in these topics over the years, and our prior work at the laboratories, as well as other studies, has demonstrated that the laboratories’ missions and management are key concerns. This work was carried out as part of our general management review of the Department of Energy. Our work focused on DOE’s nine multiprogram laboratories because of their size and importance as national science and technology resources. 
We selected laboratory staff to interview by asking each laboratory to identify five programs that best represented its current contributions and future capabilities. (App. II contains the list of programs the laboratories identified.) From these programs, we selected three for assessment. This approach allowed us to examine both the strengths as well as the weaknesses of the laboratories. When collecting information, we strove to identify and assess mission and management issues from the experience of the laboratory managers responsible for directing the programs we had selected. Our work also focused on each laboratory’s technology transfer activities because of the increased national emphasis on using the laboratories to enhance U.S. technological competitiveness. We collected information about the laboratories’ missions and management from multiple sources with direct knowledge of these issues. At the laboratories, we interviewed managers who were responsible for the research programs we had chosen. We also held discussions with laboratory directors, senior officials responsible for technology transfer activities, and contractor representatives. At DOE, we interviewed program managers—Washington-based executives responsible for the research programs we had selected at the laboratories—and DOE field office managers, who oversee the Department’s contractors at the laboratories. To validate and refine our findings, we conducted two focus groups. The first group, which met with our staff in Chicago, consisted of one program manager from each of the nine laboratories. A second group, comprising program managers from DOE headquarters, met with our staff in Washington, D.C. To obtain independent views about the laboratories’ missions and management, we interviewed experts and industry representatives who were not associated with the laboratories. 
In addition, the National Academy of Public Administration assisted us in convening a panel of experts with backgrounds in (1) managing research in government and industry and (2) science and technology policy. Table 1 lists the panelists and their relevant professional experience. We also reviewed information and analyses from the laboratories, DOE, the Congress, industry, and independent experts, as well as legislative proposals and testimony, DOE documents, budget materials, and previous studies conducted by government and private organizations. In analyzing information, we compared and contrasted views about laboratory mission and management issues. We found considerable agreement among all types of respondents on both topics. To give the reader concrete illustrations of how mission and management issues were viewed, we have used quotations from sources we interviewed throughout this report. We obtained written comments on a draft of this report from DOE. The agency’s comments and our evaluation are presented in appendix III and at the end of chapter 5. We conducted our work from July 1992 through December 1994 in accordance with generally accepted government auditing standards. As the manager of the laboratories, DOE has not clarified how the laboratory system can and should meet national priorities. Although research programs set laboratory priorities to meet their own goals, DOE has not used the laboratories as a coordinated network of talent and facilities to meet missions that cut across programs. This approach not only inhibits the development of clear and coordinated missions for the multiprogram laboratories but also fails to draw upon the laboratories’ expertise in multiple disciplines to solve complex, cross-cutting problems in science and technology. These concerns are not new. In the past, many advisory groups emphasized the need to clarify and redefine the laboratories’ missions. 
Although DOE recently developed a Strategic Plan and processes intended to integrate the Department’s missions and programs in five major areas, questions remain about DOE’s overall capacity to lead the national laboratories into new mission areas. Each national laboratory must have clearly defined, specific missions which support the over-arching missions of DOE to ensure the best technical and management performance and the greatest value to the nation. Only with clear missions, experts believe, can implementation strategies or “road maps” be developed that describe how each mission will be accomplished and guide each organization’s day-to-day operations. [We] have not seen crisp, specific mission statements from individual laboratories, nor specific mission statements that would cover all DOE’s laboratories. Furthermore, DOE has not been able to describe the mission of the laboratories, nor are the laboratories’ missions defined in any piece of legislation. . . . It is not possible to run a $6 billion organization without specific mission statements. Laboratory managers we spoke with were also concerned that the Congress, DOE, and the laboratories do not share a “common vision” of the laboratories’ missions. Such a common vision among the key “stakeholders” is crucial if the laboratories are to use their resources most effectively to support departmental programs and national goals—the main purpose of the laboratories’ existence. Developing clear and more coordinated missions is particularly important, given the growing expectation that the laboratories will work together toward achieving national security, energy, environmental, and commercial technology goals. (Ch. 4 contains the opinions of our panel of experts on suitable missions for the national laboratories.) The responsibility for developing the common vision rests with DOE.
However, laboratory managers believed that DOE headquarters and operations offices have divergent views of the laboratories and their goals, and DOE has not been able to develop a consensus with the Congress on the future of the laboratories. Without a coordinated set of laboratory missions, DOE is unable to address issues that require cooperation and coordination across its many mission areas. This not only inhibits cooperation among research programs but also keeps DOE from using its laboratories to achieve departmental missions. Laboratory and DOE managers are concerned that DOE has not built on its individual programs to encourage valuable cross-program and cross-laboratory interactions, which are essential to meeting both current and future missions. Both laboratory and DOE program managers describe DOE’s management as “fractured” and not particularly adept at combining the expertise of various program areas to tackle cross-disciplinary problems. Laboratory managers cited difficulties DOE has in establishing bridges between its basic science programs and applied science groups. Developing clear and coordinated missions—and strategies to implement them—would provide the necessary bridges between and among the laboratories on cross-cutting projects, according to many laboratory and independent experts. Many laboratory managers believe that DOE and its laboratories lack effective coordinating mechanisms—among the most serious challenges facing the Department as an organization. One manager described as a “horrible problem” the limited emphasis on cross-program coordination. To illustrate the difficulties in combining expertise from different programs to achieve core missions, several laboratory managers cited the fragmented research on preventing the proliferation of nuclear weapons. 
Although solutions to proliferation problems require expertise in identifying the effects of weapons, the nonproliferation and weapons missions are carried out in different laboratories and are managed by different assistant secretaries. Laboratory managers also cited weak links among the energy conservation, fossil fuel, and nuclear energy research programs as having limited DOE’s progress in commercializing energy technologies. When DOE and the laboratories have successfully combined their multidisciplinary resources, impressive results have occurred. For example, laboratory managers attributed the rapid progress toward a coordinated understanding of global environmental change in DOE’s Global Studies Program to the use of nine laboratories’ diverse capabilities. According to another laboratory manager, cross-laboratory cooperation in the fusion energy program is leading to a long-range strategy to guide research. These examples illustrate the potential for greater collaboration on technical issues that require multidisciplinary talent. Concerns about the need to update and clarify the laboratories’ missions are long-standing. Past studies and reviews of the laboratories have all reached the same conclusion, as the following examples show: In 1983, the White House Science Council Federal Laboratory Review Panel issued a report (commonly known as the Packard Report) addressing all federal laboratories. The report found that while some of the laboratories, particularly DOE’s, had clearly defined missions for parts of their work, most activities were fragmented and unrelated to the laboratories’ main responsibilities. This report recommended that all parent agencies review and redefine the missions of their laboratories. In 1992, a DOE Secretary of Energy Advisory Board found that the broad missions the laboratories were addressing, coupled with rapidly changing world events, “. . . 
ha[s] caused a loss of coherence and focus at the laboratories, thereby reducing their overall effectiveness in responding to their traditional missions as well as new national initiatives. . . .” The Board identified the most important cause of the stress between DOE and its laboratories as “. . . the lack of a common vision as to the missions of the laboratories. . . .” A 1993 report of an internal DOE task force on laboratory missions reported that the missions “must be updated to support DOE’s new directions and to respond to new national imperatives. . . .” None of these past studies and reviews has resulted in overall consensus about the future missions of the multiprogram laboratory system, raising questions about DOE’s capacity to provide a vision for this system. A 1982 DOE Energy Research Advisory Board task force provided some insights into this question. The Advisory Board acknowledged the impressive nature of the research and development conducted throughout the system but noted that certain weaknesses prevented the laboratories from achieving their full potential. The Advisory Board found, for example, that structural problems and fragmented programs required the laboratories to interact with DOE on an excessive number of levels. The Advisory Board recommended that DOE designate a high-level official to focus solely on the laboratories. DOE did not follow the Advisory Board’s recommendations. In early 1993, however, DOE created an Office of Laboratory Management whose purpose was, in part, to coordinate the interests of the various DOE program offices that interact with the laboratories on a program-by-program basis. However, according to DOE officials, the plan was not implemented, and the existing office does not coordinate laboratory activities for all program offices and does not report directly to the Secretary.
We called attention to the limitations of DOE’s program-by-program approach to directing its laboratories as early as 1978, after reviewing the laboratories’ contributions to nonnuclear energy, a critical policy issue at that time. The laboratories’ activities in this area were limited by several factors. First, DOE’s organizational alignment created obstacles; specifically, the laboratories reported to three different senior officials. This arrangement focused the efforts of the laboratories on particular programs and eroded their abilities to pursue research on topics cutting across several areas, such as nonnuclear energy. Second, the roles of the laboratories were determined in a piecemeal way so that each laboratory was given small, fragmented responsibilities. We recommended that DOE align the laboratories under a separate high-level office that was not responsible for specific programs. Most of what we do is determined from the bottom-up . . . in other words, the program level in DOE—and DOE program managers don’t [care] about what the missions are. They want to know where the talent is, and they want to know where the capability is, and that’s where they put their work. A DOE operations office manager said that the Department’s program-oriented approach toward the laboratories fails to recognize DOE’s “corporate” responsibility for them. Another manager cited the need for DOE to develop a strategic approach to the laboratories. Laboratory managers pointed out that DOE’s approach to the laboratories through individual research programs has not effectively linked the laboratory system’s collective resources to DOE’s missions. A laboratory manager described DOE as increasingly focused on individual programs; its management is concentrated at the assistant secretary level, even though many projects do not fall within any one assistant secretary’s program responsibilities.
How best to develop missions for the laboratories—and how best to manage them—is the subject of growing debate in the scientific community and was discussed by our panel of experts. For example, proposals suggested or debated during our review included the following. Convert some laboratories, particularly those working closely with the private sector, into independent entities. Transfer the responsibility for one or more laboratories to another agency, whose responsibilities and mission are closely aligned with a particular DOE laboratory. Create a “lead lab” arrangement, under which one laboratory is given a leadership role in a mission or technology area and other laboratories are selected to work in that area. Consolidate the responsibility for research, development, and testing on nuclear weapons within a single laboratory. While we have not analyzed these alternatives, each has advantages and disadvantages and needs to be evaluated in light of the laboratories’ capabilities for designing nuclear weapons and pursuing other missions of national and strategic importance. Furthermore, the government may still need facilities dedicated to national and defense missions, a factor that would heavily influence any future organizational decisions. Important budgetary considerations also accompany each alternative. An expert panelist advised caution in restructuring the laboratories, expressing concern that decades of national investment in these facilities have produced important assets that, if dispersed, could take many years and billions of dollars to reassemble. The previous Congress was also active in the debate on the laboratories’ missions. For example, a House bill introduced in 1993 defined future missions for DOE’s laboratories and suggested methods for measuring progress toward goals, along with incentives for improving the overall quality of research at the laboratories. 
This proposed legislation also sought to require more rigorous evaluation of the laboratories, articulated several missions for them (such as advancing nuclear science and technology for national security purposes), and advocated that they work with private industry to develop environmental technology and technology transfer activities. A bill passed by the Senate in 1993 contained similar provisions and was designed to sharpen the laboratories’ focus on technology transfer and cooperative research agreements. This bill would have required the laboratories to allocate 20 percent of their budgets to partnerships with industry and academia. Recognizing the important role that the multiprogram laboratories should play in accomplishing departmental goals and national priorities, DOE is making another attempt to define the laboratories’ missions. In February 1994, the Secretary commissioned an independent task force to address the appropriate roles of DOE’s laboratories. Chaired by the former chief executive officer of Motorola Corporation, this task force—the Secretary of Energy Advisory Board Task Force on Alternative Futures for the Department of Energy National Laboratories—is charged with, among other things, examining “alternative scenarios for future utilization of these laboratories for meeting national missions.” The task force’s charter encompasses examining the future roles and responsibilities of the national laboratories, including questions about their accountability and consolidation. The task force’s report to the Secretary is expected by February 1995. DOE has also initiated a strategic planning process that it believes will form a framework for coordinating the laboratories’ missions with the agency’s goals and objectives. 
DOE’s Strategic Plan will focus the agency’s efforts on five main areas: preserving national security, conserving energy resources, promoting environmental protection, applying science and technology to national needs, and encouraging industrial competitiveness. Strategic plans have also been developed for each of these areas. In addition, DOE has begun a major reorganization effort, which is designed to follow the structure of its new Strategic Plan. Reorienting existing programs and the laboratories to best address these areas remains the Department’s challenge. Laboratory managers see DOE’s management of the multiprogram laboratories as costly and inefficient, creating tensions that impede the development of clear and coordinated missions for the laboratories and action steps that lead toward achieving these missions. According to laboratory managers, DOE micromanages the laboratories, particularly in overseeing their compliance with growing numbers of administrative requirements. Laboratory managers fault DOE for failing to set priorities or provide guidance about how to satisfy both research goals and administrative requirements. Experts we consulted, as well as many laboratory and DOE managers, expressed concern that without a more effective management relationship between DOE and the laboratories, rising research costs may price the laboratories out of collaborative research with industry—a new mission area in which the laboratories are expected to make major contributions. In addition to meeting their research and technology objectives, laboratory managers are responsible for satisfying a wide variety of administrative requirements in areas such as procurement; travel; human resources; and environment, safety, and health. Prompted by criticism of its business practices and past inattention to environment, safety, and health issues, DOE has greatly increased its oversight of the laboratories during recent years. 
Coping with the new requirements that have accompanied DOE’s expanded oversight is, according to a consensus of laboratory managers, a major burden that not only increases research costs but also diverts attention from basic research. Although laboratory managers recognize the importance of meeting administrative goals—particularly in the area of environment, safety, and health—they want DOE to set priorities for their administrative activities and help them balance research and administration. Administrative requirements increased under the former Secretary of Energy, largely in response to the well-publicized call for greater attention to the environment, safety, and health throughout the nuclear weapons complex. Thus, over 70 percent of the requirements listed in DOE’s 1993 Directives Checklist are new or have been revised since 1989. A DOE operations office manager estimated that DOE has about 8,400 environment, safety, and health requirements. Directives define required actions to meet certain objectives; these actions range from preparing reports to conducting inspections. Both laboratory and DOE operations office managers who administer directives told us they were “numb” from the proliferation of requirements. According to a consensus of both laboratory and DOE managers, the laboratories have been overwhelmed, not only by the volume of new requirements but also by their detail and by inconsistent guidance for implementing them. Closely related to the proliferation of administrative requirements has been the equally aggressive expansion of oversight activity. Oversight—or the assessment of how well managers handle the programs and requirements for which they are accountable—is critical to the operation of federal programs and is a key management responsibility. Despite its vital role, laboratory managers, the experts on our panel, and DOE managers agreed that sharply increased oversight in recent years has not been an effective management approach for DOE. 
DOE and other agencies conduct as many as 400 reviews annually at each laboratory. One laboratory manager calculated that his program was reviewed more than once a day in 1992. Laboratory managers deplored the enormous amount of time required to prepare for oversight reviews, adding that the impact of losing the best researchers’ time during reviews is difficult to quantify. Many scientists have become discouraged by administrative chores. One manager complained that administrative oversight consumed as much as 40 percent of his working time, and many managers questioned whether DOE’s expanded oversight has produced benefits commensurate with its costs. There are myriad rules and regulations that require a substantial amount of interpretation. In the absence of a single environment, safety, and health oversight organization, every laboratory will have a different level of compliance because each field office has a different interpretation of environment, safety, and health rules. We end up treating very simple chemical experiments as if people were working with commercial nuclear reactors. . . our costs have gone right through the roof and our staff’s ability to turn out the volume has decreased dramatically. If lab A screws up—say environmental health or quality assurance—[DOE] headquarters decides that everybody’s guilty and we’re then overrun with sieges and inspections. Instead of going back to that laboratory and trying to understand why that went bad, we’re all condemned by the same punishment. Meeting all of these responsibilities presents a significant challenge, especially as budgets decline. Yet laboratory managers maintained that DOE has provided little guidance or assistance in setting priorities to help them balance their responsibilities. There is a split in DOE between the people who run programs and those who issue regulations. . . .Funds tend to come in at the bottom to scientists, while regulations tend to come in at the top of the organization . 
. . often the scientists do not understand the rationale for regulations. Managers at the laboratories, in DOE programs, and at DOE operations offices were troubled by the costs associated with achieving the Department’s administrative goals. Although little information or analysis has been completed on this issue, DOE’s administrative compliance approach has had two results, according to both DOE and laboratory managers. First, it has been costly. Second, it has raised research costs and reduced the laboratories’ ability to compete with universities for research sponsored by industry and other government agencies. For example, a laboratory manager told us that operating a reactor costs significantly more under DOE’s safety regulations than under the Nuclear Regulatory Commission’s regulations for non-DOE reactors. A DOE operations office manager added that it would cost billions more than is currently spent to be in full compliance with all rules and regulations at several laboratories, even though these laboratories have lower-priority problems. Laboratory and DOE managers agreed that DOE has not provided the funding required to achieve compliance, particularly with environment, safety, and health regulations. A DOE operations office manager noted that no additional funds had been received at one laboratory where expenditures of more than $1 billion would be required to correct environment, safety, and health problems. There is a trend toward imposing the full range of government procurement requirements on the laboratories, and this could kill government-industry cooperation. . . .For industry to find cooperative research agreements with laboratories a viable option, laboratory costs must be fully competitive. Laboratory and DOE managers and an expert on our panel believe that administrative programs should be cost-effective and have priorities for compliance so that resources can be concentrated on the most significant risks. 
However, DOE has not systematically set priorities for its administrative requirements, and cost-benefit analyses have not been used to assess risks. DOE has begun to streamline the directives system and correct other oversight problems. Also, the Department is now seeking to avoid duplicative or unnecessary oversight reviews and is more careful about overloading laboratories with such reviews. In addition, DOE has begun to implement “total quality management” and is developing performance measures to guide its evaluation of the laboratories’ management. DOE believes that these efforts should help both it and the laboratories balance their research and administrative goals more effectively in the future. DOE’s “management and operating” contracts with the academic institutions that operate most of the multiprogram laboratories pose a further stumbling block both to a more favorable relationship between the Department and the laboratories and to a reduction in DOE’s oversight. Under these contracts, a contractor assumes responsibility for managing and operating a facility but incurs only limited liability. DOE pays virtually all of the contractor’s costs except those resulting from willful misconduct or bad faith by top management or those designated as unallowable. Furthermore, under its contracts with the laboratory contractors, most of which are nonprofit or academic institutions, DOE has limited financial incentives for influencing the contractor’s actions: It cannot adjust the fee that it pays to these contractors because it has historically negotiated a fixed fee with them that is not tied to their performance. In contrast, DOE pays its for-profit contractors a fee, called an “award fee,” that is based on its assessment of their performance. The tensions created by the arrangements between DOE and its nonprofit contractors have raised questions about whether DOE’s current contracting approach is effective for managing the laboratories. 
DOE and various oversight groups, including GAO, have expressed concerns about the laboratories’ past business practices and have called for changes in contracts that better reflect the needs of the laboratories and the requirements of DOE. DOE is changing its relationship with contractors. Under its contract reform initiative, contractors will be evaluated on the basis of performance measures—a process that DOE believes will better enable it to hold contractors accountable for results. In addition, according to DOE staff, the use of performance measures will lead to a more rational, risk-based approach toward compliance with the increased number of requirements placed on the laboratories in recent years. We support DOE’s contract reform efforts and believe that, once implemented, they offer opportunities for substantially improving the way the agency does business with its contractors, including its laboratory contractors. We are concerned, however, that the scope of DOE’s current contract reform may not address all the major management problems that characterize the agency’s relationship with the laboratories. For example, it is uncertain how contract reform will resolve the proliferation of laboratory oversight activities, which poses a major problem for laboratory managers. Furthermore, it could take many years for contract reform to take effect, given the multiyear time frame for existing contracts. Our panel of experts and other experts believe that, with proper mission focus and management direction, the multiprogram laboratories can make vital contributions in many areas important to DOE and the nation. According to the panel, the highest-priority missions for the laboratories are national defense, energy, the environment, and commercial technology. 
While the laboratories have already made contributions in these areas—such as effective weapons systems, energy conservation programs, environmental cleanup techniques, and commercialized technologies—our panel concluded that clarifying and, in some cases, redefining the current missions for the laboratory system as a whole would enhance the value of the laboratories. Our panel of experts agreed that the laboratories’ national security work will continue to be important. The defense roles of Los Alamos, Sandia, and Lawrence Livermore will remain unclear, however, until the Department of Defense decides whether to support defense work at the laboratories and DOE clarifies its missions. Nonetheless, several panelists anticipated a defense mission with new and continuing objectives that would use these three laboratories’ nuclear weapons competence and other laboratories’ experience. In nuclear weapons technologies, several of the experts on our panel predicted that the laboratories’ missions would continue to shift from designing weapons to overseeing and dismantling the nuclear stockpile, verifying international nuclear treaties, and conducting research on nonproliferation. Because the Department is substantially responsible for overseeing the weapons stockpile, it will require the laboratories’ unique competencies. Ensuring that stockpiled nuclear weapons are reliable and safe is a major responsibility that will persist as long as the nation needs to sustain a nuclear stockpile, a panelist pointed out. The defense mission also makes the laboratories responsible for overseeing the dismantling of nuclear weapons in accordance with the nation’s international treaty obligations—a task that will take decades to complete at the current pace, a laboratory director pointed out. According to a laboratory director, the United States and Russia each estimate that they can dismantle only 2,000 weapons a year. The current U.S. stockpile contains many thousands of weapons. 
The proliferation of nuclear technology and materials will be an increasingly important national concern. As a DOE manager noted, a growing number of nations are now able to make nuclear weapons, and more have the political will to develop them. Our panel of experts concurred that the laboratories have unique knowledge to address these issues. For example, the laboratories already have experience in detecting clandestine nuclear weapons programs, locating terrorists’ weapons, responding to nuclear weapons emergencies, and identifying the origin of nuclear materials and weapons. Energy and the environment are areas in which the laboratories have already made useful contributions. However, our panel of experts suggested that the laboratories could enhance their contributions by linking their missions in these areas to focus on energy-related environmental problems—an increasingly important issue, according to a DOE secretarial advisory board. This linkage would demonstrate the effect of research in one area on work in another, an important consideration because energy development and use underlie most of the nation’s serious environmental problems. For example, the use of electric vehicles would reduce emissions of hydrocarbons but create problems in disposing of batteries. Similarly, the production of commercial nuclear power reduces some air quality problems but creates a need for technologies to dispose of radioactive wastes. As a panelist pointed out, linking energy and environmental research would draw upon the laboratories’ ability to address cross-disciplinary problems. This linkage would benefit research in both areas and enhance the ability of DOE and the laboratories to set research and policy priorities. Our panel of experts agreed that the laboratories have an important energy research mission. One panelist described it as perhaps their principal mission because developing energy sources and efficient uses of energy is vital to the nation’s economy. 
However, another panelist maintained that although the laboratories’ energy mission is broad, it has become fuzzy. Panelists also noted that despite substantial investment, the laboratories’ energy research has been disappointing. One panelist noted that the nation has been unable to decide on an energy policy to guide the laboratories’ work. DOE has produced several different national energy strategies over the years, each with different priorities, making long-term planning for the laboratories difficult. Despite these conditions, however, panelists agreed that a redirected energy mission would serve the United States very well and provide opportunities for large-scale interactions between industry and government. One panelist urged DOE to consider the laboratories’ experience, encourage closer laboratory-industry interactions to define priorities, and focus on path-breaking, high-risk, cross-industry research with the potential for major payback in 10 to 20 years. The laboratories’ environmental mission has been more implicit than explicit, according to one panelist. Although the laboratories have been developing environmental technologies, the scope of their environmental mission has not been clear. However, several of the panelists envisioned that the laboratories could make unique contributions, particularly in environmental technology—an area where other federal agencies have limited experience—and in nuclear waste disposal. Significant contributions may also stem from the laboratories’ ability to model environmental impacts with their advanced computing facilities. Several panelists believed that greater coordination between DOE and the Environmental Protection Agency would be needed to maximize the value of this type of laboratory work. A laboratory director emphasized to us that through their basic research competencies the laboratories can make a major contribution to solving environmental challenges, but their strengths have been underutilized. 
According to the director, “Waste remediation cannot continue on its present course without ‘bankrupting the country’ because it is being done without a knowledge base.” As a laboratory manager noted, developing a basic understanding of underlying problems before developing waste cleanup technologies is important. If the basic science is not understood, environmental remediation problems may elude solution, just as efforts to cure cancer during the 1970s were unsuccessful because not enough was then known about basic cancer virology. Our panel of experts agreed that a commercial technology mission for the laboratories is legitimate and important. However, several panelists and other experts we consulted maintained that this mission should be broadly conceived—that is, it should emphasize research and development that can benefit all U.S. industries and should be integrated with other laboratory missions rather than become a central mission. According to panelists, the principal reason for enlisting the laboratories in improving the nation’s global competitive position is that they are building the intellectual foundation that allows the nation’s economy to prosper. A laboratory director pointed out that U.S. industry has sometimes been at a disadvantage because public-private research is better coordinated in other countries. There was considerable agreement among both the experts on our panel and other experts we consulted about the need to change the laboratories’ current focus on transferring existing technology to industry on a project-by-project basis. 
Industry, expert, and government sources concurred that the technology mission would be more productive if it supported:

- nonproprietary research that could help all industries compete;
- technology research as an integral part of the national security, energy, and environmental missions;
- long-term cooperative research relationships between the laboratories and industry; and
- training in science for future progress in technology.

According to a panelist, nonproprietary research that can benefit all industries is important but has been underfunded and conducted without focus. The panelist emphasized that the government can usefully and appropriately support research that underpins a broad array of specific technology applications in many different industries, stopping short of supporting proprietary technology that companies themselves should fund. For example, experts noted that laboratory research to improve the U.S. transportation system could enhance U.S. manufacturers’ ability to compete. Similarly, a panelist noted that laboratory work on advanced computer-aided design tools could improve productivity throughout the U.S. manufacturing sector. Although a commercial technology mission is important, laboratory managers, industry representatives, and experts cautioned that developing technology should not become the laboratories’ primary mission or reason for existence. These observers described the challenge as defining a broad technology mission that supports long-term relationships between the laboratories and industry while sustaining the laboratories’ other missions and abilities. For example, the laboratories develop technology through other missions that have technological needs of their own. Sustaining the laboratories’ basic research is also important. Laboratory managers observed that a balance is needed between basic and applied research in order to avoid “eating the seed corn” that leads to new technologies. 
In addition, not all programs—such as high-energy physics—lend themselves to cooperation with industry. A laboratory manager said that with only a technology transfer mission, the laboratories would be out of business in 5 years. Several of the experts on our panel encouraged laboratories and industry to develop long-term cooperative research relationships that can allow each party to better understand the other’s needs and increase the potential for results. Panelists and other experts we consulted agreed that training in science and mathematics is essential to the nation’s future competitiveness in high-technology products and services and that helping train students is important to a commercial technology mission. Several panelists also urged that, to enhance industry’s ability to produce marketable innovations, the laboratories expand their training programs to include mid-career technical retraining for industry personnel. Working with industry on a commercial technology mission at the laboratories presents special challenges for DOE and laboratory management. Although some laboratories have considerable experience in working with industry, broad-scale cooperation represents a new venture for the laboratories. DOE has begun to work with the laboratories and industry to develop a strategic plan for technology partnerships. However, successful implementation of this new mission requires clearly defined roles for the laboratories and DOE, realistic expectations about the laboratories’ potential to improve U.S. competitiveness, encouragement to experiment, well-defined mission objectives, and closer links between the laboratories and industry to ensure that the laboratories’ work reflects the market’s needs. U.S. taxpayers have a significant investment in the national laboratory network. 
DOE has a major responsibility to ensure that work at the laboratories is properly focused and intelligently managed so that the laboratories can make maximum contributions to national priorities. Achieving these goals requires two efforts: First, senior leadership needs to develop clear missions and implementation strategies that treat the laboratories as a coordinated set of facilities; second, DOE needs to adopt a management approach that supports the laboratories’ achievement of their research missions and administrative responsibilities. DOE has not been able to develop a consensus among laboratory and government leaders on appropriate missions for the national laboratories, even though past studies and special task forces have called for such action. Furthermore, the Department’s management approach impedes progress toward current goals, raising questions about DOE’s overall capacity to achieve these important objectives. The results from the Secretary’s Advisory Board Task Force on Alternative Futures for the National Laboratories could set the foundation for developing clear and coordinated missions for the national laboratory network. The success of these results can best be measured by the extent to which they help shape a consensus among key stakeholders: the Congress, DOE, and the laboratories. Such a consensus on the future missions for the national laboratories has not resulted from past advisory board recommendations. DOE’s ongoing contract reform efforts—especially the planned use of performance measures to guide and evaluate the laboratories’ activities—could form a solid basis for an improved management approach that supports the laboratories’ mission goals and administrative requirements. These goals will be difficult to achieve, however, given current management practices and the contracting constraints under which both DOE and the laboratories operate. 
For these and other reasons, experts are beginning to question whether alternative forms of laboratory management should be considered. As public debate on the future of the laboratories grows—for example, the Congress, in a previous session, proposed legislation setting specific missions for the laboratories—DOE’s leaders cannot afford to delay efforts to define clear and coordinated missions and to implement a management approach that supports these missions. Indeed, if the laboratories do not begin to function more as a system, it may be necessary to consider alternatives to the present DOE-laboratory relationship. Above all, strong DOE leadership is needed to establish a shared vision about the laboratories’ expected contributions. DOE leadership is especially important to implementing the new commercial technology mission. There are encouraging signs that DOE is committed to involving industry in this implementation and improving its access to the laboratories. We recommend that the Secretary of Energy evaluate alternatives for managing the laboratories that more fully support clear missions, achieve results by linking the laboratories’ activities to DOE’s missions, and maximize the laboratories’ resources. Such a strategy could start by addressing the many management issues raised in this report and should be consistent with DOE’s major efforts to reform contract management. The strategy must also support goals for DOE and the laboratories to comply with environment, safety, and health initiatives. To help achieve this goal, the Secretary should strengthen the Office of Laboratory Management by providing it with sufficient resources and authority to facilitate cooperation with the laboratories and resolution of management issues across all DOE program areas. 
If DOE is unable to refocus the laboratories’ missions and develop a management approach consistent with these new missions, the Congress may wish to consider alternatives to the present DOE-laboratory relationship. Such alternatives might include placing the laboratories under the control of different agencies or creating a separate structure for the sole purpose of developing a consensus on the laboratories’ missions. DOE officials believe that they are taking a number of actions that address our concern about DOE’s leadership in providing mission focus for the national laboratories. Specifically, in its letter to GAO, and in discussions with us, DOE cited its new strategic planning process, which resulted in a Strategic Plan that, in turn, is supported by five separate plans covering each of the Department’s core “business lines.” DOE anticipates that this process, together with the upcoming report expected from the Secretary’s Energy Advisory Board Task Force on Alternative Futures for the Laboratories, will provide the means through which the Department will exercise new leadership for its national laboratories. GAO is encouraged by these initiatives. Coupled with the Department’s contract reform efforts, they should, once fully in place, strengthen DOE’s ability to improve its own management as well as provide a foundation for refocusing the laboratories’ missions. The outcome of these efforts bears close monitoring by the Congress. Our optimism is tempered, however, by DOE’s having reorganized before and having had planning efforts in the past. Furthermore, DOE has not used the recommendations of past advisory groups to refocus the laboratories or improve its management of them. DOE expressed concern that our report would force “tight mission-driven parameters” for the laboratories, which would inhibit the laboratories’ flexibility in conducting fundamental research. We are not suggesting that DOE narrow the laboratories’ missions. 
Instead, we believe that DOE should clarify mission-focused research and development within its laboratories and coordinate these activities among them. The need to clarify and focus the laboratories’ missions reflected a widespread consensus among the laboratory and DOE managers, as well as among the experts, with whom we spoke.
|
GAO reviewed the Department of Energy's nine multiprogram national laboratories, focusing on the: (1) laboratories' current and future missions; and (2) DOE approach to laboratory management. GAO found that: (1) the DOE laboratories do not have clearly defined missions and laboratory managers believe that the lack of DOE direction is compromising their ability to achieve national priorities; (2) DOE manages the laboratories on a program-by-program basis and has underutilized the laboratories' special multidisciplinary abilities to solve complex, cross-cutting scientific and technology problems; (3) although DOE has developed a strategic plan to integrate its missions and programs in five main areas, it still may not be able to effectively manage the laboratories in the future; (4) the costly and inefficient day-to-day management of the laboratories inhibits a productive working relationship between the laboratories and DOE; (5) DOE does not balance laboratory research and administrative objectives; (6) the laboratories fear that rising research costs due to costly administrative requirements will limit their ability to compete for research projects, which in turn will hamper their commercial technology mission; (7) DOE has instituted contract reforms which it believes will lead to a more productive management approach; and (8) the laboratories can make vital contributions in many important areas such as weapons systems, energy conservation, environmental cleanup, and commercialized technologies with proper mission focus and management direction.
|
The Bureau’s mission is to provide comprehensive data about the nation’s people and economy. The 2010 census enumerates the number and location of people on Census Day, which is April 1, 2010. However, census operations begin long before Census Day and continue afterward. For example, address canvassing for the 2010 census will begin in April 2009, while the Secretary of Commerce must report tabulated census data to the President by December 31, 2010, and to state governors and legislatures by March 31, 2011. The decennial census is a major undertaking for the Bureau that includes the following major activities:

- Establishing where to count. This includes identifying and correcting addresses for all known living quarters in the United States (address canvassing) and validating addresses identified as potential group quarters, such as college residence halls and group homes (group quarters validation).
- Collecting and integrating respondent information. This includes delivering questionnaires to housing units by mail and other methods, processing the returned questionnaires, and following up with nonrespondents through personal interviews (nonresponse follow-up). It also includes enumerating residents of group quarters (group quarters enumeration) and occupied transitional living quarters (enumeration of transitory locations), such as recreational vehicle parks, campgrounds, and hotels. It also includes a final check of housing unit status (field verification), in which Bureau workers verify potential duplicate housing units identified during response processing.
- Providing census results. This includes tabulating and summarizing census data and disseminating the results to the public.

Automation and IT are to play a critical role in the success of the 2010 census by supporting data collection, analysis, and dissemination. Several systems will play a key role in the 2010 census. 
For example, enumeration “universes,” which serve as the basis for enumeration operations and response data collection, are organized by the Universe Control and Management (UC&M) system, and response data are received and edited to help eliminate duplicate responses using the Response Processing System (RPS). Both UC&M and RPS are legacy systems that are collectively called the Headquarters Processing System. Geographic information and support to aid the Bureau in establishing where to count U.S. citizens are provided by the Master Address File/Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) system. The Decennial Response Integration System (DRIS) is to provide a system for collecting and integrating census responses from all sources, including forms and telephone interviews. The Field Data Collection Automation (FDCA) program includes the development of handheld computers for the address canvassing operation and the systems, equipment, and infrastructure that field staff will use to collect data. Paper-Based Operations (PBO) was established in August 2008 primarily to handle certain operations that were originally part of FDCA. PBO includes IT systems and infrastructure needed to support the use of paper forms for operations such as group quarters enumeration activities, nonresponse follow-up activities, enumeration at transitory locations activities, and field verification activities. These activities were originally to be conducted using IT systems and infrastructure developed by the FDCA program. Finally, the Data Access and Dissemination System II (DADS II) is to replace legacy systems for tabulating and publicly disseminating data. As stated in our testing guide and the Institute of Electrical and Electronics Engineers (IEEE) standards, complete and thorough testing is essential for providing reasonable assurance that new or modified IT systems will perform as intended. 
To be effective, testing should be planned, scheduled, and conducted in a structured and disciplined fashion that includes processes to control each incremental level of testing, including testing of individual systems, the integration of those systems, and testing to address all interrelated systems and functionality in an operational environment. Comprehensive testing that is effectively planned and scheduled can provide the basis for identifying key tasks and requirements and better ensure that a system meets these specified requirements and functions as intended in an operational environment. In preparation for the 2010 census, the Bureau planned what it refers to as the Dress Rehearsal. The Dress Rehearsal includes systems and integration testing, as well as end-to-end testing of key operations in a census-like environment. During the Dress Rehearsal period, running from February 2006 through June 2009, the Bureau is developing and testing systems and operations, and it held a mock Census Day on May 1, 2008. The Dress Rehearsal activities, which are still under way, are a subset of the activities planned for the actual 2010 census and include testing of both IT and non-IT related functions, such as opening offices and hiring staff. The Dress Rehearsal identified significant technical problems during the address canvassing and group quarters validation operations. For example, during the Dress Rehearsal address canvassing operation, the Bureau encountered problems with the handheld computers, including slow and inconsistent data transmissions, the devices freezing up, and difficulties collecting mapping coordinates. 
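To make the incremental levels of testing concrete, the sketch below illustrates the distinction between testing individual components and testing their integration. It is a minimal illustration only: the component names (`parse_response`, `drop_duplicates`) and the record format are invented for this example and do not correspond to actual Bureau systems.

```python
# Hypothetical illustration of incremental test levels: unit tests exercise
# each component alone; an integration test exercises the components chained
# together, which is where interface mismatches surface.

def parse_response(raw: str) -> dict:
    """Parse a 'housing_id,status' record into a dict (invented format)."""
    housing_id, status = raw.strip().split(",")
    return {"housing_id": housing_id, "status": status}

def drop_duplicates(records: list[dict]) -> list[dict]:
    """Keep only the first record seen for each housing unit."""
    seen, unique = set(), []
    for rec in records:
        if rec["housing_id"] not in seen:
            seen.add(rec["housing_id"])
            unique.append(rec)
    return unique

def process(raw_lines: list[str]) -> list[dict]:
    """Integrated pipeline: parse every line, then deduplicate."""
    return drop_duplicates([parse_response(line) for line in raw_lines])

# Unit level: each component verified in isolation.
assert parse_response("H001,occupied") == {"housing_id": "H001", "status": "occupied"}
assert len(drop_duplicates([{"housing_id": "H001", "status": "occupied"}] * 2)) == 1

# Integration level: the components verified working together.
result = process(["H001,occupied", "H002,vacant", "H001,occupied"])
assert [r["housing_id"] for r in result] == ["H001", "H002"]
```

End-to-end testing would then exercise the same pipeline with the full set of interrelated systems in a census-like operational environment rather than with in-memory stand-ins.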
As a result of the problems observed during the Dress Rehearsal, cost overruns and schedule slippage in the FDCA program, and other issues, the Bureau removed the planned testing of several key operations from the Dress Rehearsal and switched key operations, such as nonresponse follow-up, to paper-based processes instead of using the handheld computers as originally planned. Through the Dress Rehearsal and other testing activities, the Bureau has completed key system tests, but significant testing has yet to be done, and planning for this is not complete. Table 1 summarizes the status and plans for system testing. Effective integration testing ensures that external interfaces work correctly and that the integrated systems meet specified requirements. This testing should be planned and scheduled in a disciplined fashion according to defined priorities. For the 2010 census, each program office is responsible for and has made progress in defining system interfaces and conducting integration testing, which includes testing of these interfaces. However, significant activities remain to be completed. For example, for systems such as PBO, interfaces have not been fully defined, and other interfaces have been defined but have not been tested. In addition, the Bureau has not established a master list of interfaces between key systems, or plans and schedules for integration testing of these interfaces. A master list of system interfaces is an important tool for ensuring that all interfaces are tested appropriately and that the priorities for testing are set correctly. As of October 2008, the Bureau had begun efforts to update a master list it had developed in 2007, but it has not provided a date when this list will be completed. Without a completed master list, the Bureau cannot develop comprehensive plans and schedules for conducting systems integration testing that indicate how the testing of these interfaces will be prioritized. 
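A master interface list of the kind described above can be a very simple artifact. The sketch below shows one possible representation, assuming each entry records the two systems, a test priority, and a tested status; the system names are drawn from this statement, but the entries, priorities, and statuses are invented for illustration and do not reflect the Bureau's actual inventory.

```python
# A minimal sketch of a master interface inventory. Sorting the untested
# entries by priority yields the gaps from which an integration-test
# schedule could be built.
from dataclasses import dataclass

@dataclass
class Interface:
    source: str
    target: str
    priority: int          # 1 = highest priority for testing
    tested: bool = False

# Illustrative entries only; not the Bureau's actual interface list.
inventory = [
    Interface("DRIS", "RPS", priority=1, tested=True),
    Interface("FDCA", "MAF/TIGER", priority=1),
    Interface("PBO", "UC&M", priority=2),
]

def untested_by_priority(interfaces: list[Interface]) -> list[Interface]:
    """Return untested interfaces, highest priority first."""
    return sorted((i for i in interfaces if not i.tested),
                  key=lambda i: i.priority)

gaps = untested_by_priority(inventory)
assert [(g.source, g.target) for g in gaps] == [("FDCA", "MAF/TIGER"), ("PBO", "UC&M")]
```

Even a flat list like this makes two questions answerable at a glance: which interfaces exist, and which high-priority ones remain untested.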
With the limited amount of time remaining before systems are needed for 2010 operations, the lack of comprehensive plans and schedules increases the risk that the Bureau may not be able to adequately test system interfaces, and that interfaced systems may not work together as intended. Although several critical operations underwent end-to-end testing in the Dress Rehearsal, others did not. As of December 2008, the Bureau had not established testing plans or schedules for end-to-end testing of the key operations that were removed from the Dress Rehearsal, nor has it determined when these plans will be completed. These operations include enumeration of transitory locations, group quarters enumeration, and field verification. The decreasing time available for completing end-to-end testing increases the risk that testing of key operations will not take place before the required deadline. Bureau officials have acknowledged this risk in briefings to the Office of Management and Budget. However, as of January 2009, the Bureau had not completed mitigation plans for this risk. According to the Bureau, the plans are still being reviewed by senior management. Without plans to mitigate the risks associated with limited end-to-end testing, the Bureau may not be able to respond effectively if systems do not perform as intended. As stated in our testing guide and IEEE standards, oversight of testing activities includes both planning and ongoing monitoring of testing activities. Ongoing monitoring entails collecting and assessing status and progress reports to determine, for example, whether specific test activities are on schedule. In addition, comprehensive guidance should describe each level of testing and the types of test products expected. In response to prior recommendations, the Bureau took initial steps to enhance its programwide oversight; however, these steps have not been sufficient. 
For example, in June 2008, the Bureau established an inventory of all testing activities specific to all key decennial operations. However, the inventory has not been updated since May 2008, and officials have no plans for further updates. In another effort to improve executive-level oversight, the Decennial Management Division began producing (as of July 2008) a weekly executive alert report and has established (as of October 2008) a dashboard and monthly reporting indicators. However, these products do not provide comprehensive status information on the progress of testing key systems and interfaces. Further, the assessment of testing progress has not been based on quantitative and specific metrics. The lack of quantitative and specific metrics to track progress limits the Bureau’s ability to accurately assess the status and progress of testing activities. In commenting on our draft report, the Bureau provided selected examples where it had begun to use more detailed metrics to track the progress of end-to-end testing activities. The Bureau also has weaknesses in its testing guidance. According to the Associate Director for the 2010 census, the Bureau did establish a policy strongly encouraging offices responsible for decennial systems to use best practices in software development and testing, as specified in level 2 of Carnegie Mellon’s Capability Maturity Model® Integration. However, beyond this general guidance, there is no mandatory or specific guidance on key testing activities, such as the criteria for each level of testing or the types of test products expected. The lack of guidance has led to an ad hoc and, at times, less-than-desirable approach to testing. In our report, we are making ten recommendations for improvements to the Bureau’s testing activities. 
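The quantitative, specific metrics discussed above need not be elaborate. The sketch below shows one simple form such tracking could take: planned versus executed versus passed test cases per operation, rolled up into percentages a dashboard could report. The operation names come from this statement; the counts and the `progress` helper are invented for illustration.

```python
# A sketch of quantitative test-progress metrics: for each operation,
# compute how much of the planned test suite has been executed and how
# much of the executed suite has passed.

def progress(planned: int, executed: int, passed: int) -> dict:
    """Compute execution and pass rates for one operation's test suite."""
    return {
        "executed_pct": round(100 * executed / planned, 1),
        "pass_pct": round(100 * passed / executed, 1) if executed else 0.0,
    }

# Counts are invented for illustration only.
status = {
    "group quarters enumeration": progress(planned=120, executed=90, passed=81),
    "field verification": progress(planned=60, executed=0, passed=0),
}
assert status["group quarters enumeration"]["executed_pct"] == 75.0
assert status["group quarters enumeration"]["pass_pct"] == 90.0
assert status["field verification"]["executed_pct"] == 0.0
```

Metrics of this kind, reported consistently across operations, would let executives see not just whether testing is "under way" but how far along it actually is.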
Our recommendations include finalizing system requirements and completing development of test plans and schedules, establishing a master list of system interfaces, prioritizing and developing plans to test these interfaces, and establishing plans to test operations removed from the Dress Rehearsal. In addition, we are recommending that the Bureau improve its monitoring of testing progress and improve executive-level oversight of testing activities. In written comments on the report, the department had no significant disagreements with our recommendations. The department stated that its focus is on testing new software and systems, not legacy systems and operations used in previous censuses. However, the systems in place to conduct these operations have changed substantially and have not yet been fully tested in a census-like environment. Consistent with our recommendations, finalizing test plans and schedules and testing all systems as thoroughly as possible will help to ensure that decennial systems will work as intended. In summary, while the Bureau’s program offices have made progress in testing key decennial systems, much work remains to ensure that systems operate as intended for conducting an accurate and timely 2010 census. This work includes system, integration, and end-to-end testing activities. Given the rapidly approaching deadlines of the 2010 census, completing testing and establishing stronger executive-level oversight are critical to ensuring that systems perform as intended when they are needed. Mr. Chairman and members of the subcommittee, this concludes our statement. We would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. If you have any questions about matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or pownerd@gao.gov or Robert Goldenkoff at (202) 512-2757 or goldenkoffr@gao.gov. 
Other key contributors to this testimony include Sher'rie Bacon, Barbara Collier, Neil Doherty, Vijay D’Souza, Elizabeth Fan, Nancy Glover, Signora May, Lee McCracken, Ty Mitchell, Lisa Pearson, Crystal Robinson, Melissa Schermerhorn, Cynthia Scott, Karl Seifert, Jonathan Ticehurst, Timothy Wexler, and Katherine Wulff. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Decennial Census is mandated by the U.S. Constitution and provides vital data that are used, among other things, to reapportion and redistrict congressional seats and allocate federal financial assistance. In March 2008, GAO designated the 2010 Decennial Census a high-risk area, citing a number of long-standing and emerging challenges, including weaknesses in the U.S. Census Bureau's (Bureau) management of its information technology (IT) systems and operations. In conducting the 2010 census, the Bureau is relying on both the acquisition of new IT systems and the enhancement of existing systems. Thoroughly testing these systems before their actual use is critical to the success of the census. GAO was asked to testify on its report, being released today, on the status of and plans for testing key 2010 decennial IT systems. Although the Bureau has made progress in testing key decennial systems, critical testing activities remain to be performed before systems will be ready to support the 2010 census. Bureau program offices have completed some testing of individual systems, but significant work still remains to be done, and many plans have not yet been developed (see table below). In its testing of system integration, the Bureau has not completed critical activities; it also lacks a master list of interfaces between systems and has not developed testing plans and schedules. Although the Bureau had originally planned what it refers to as a Dress Rehearsal, starting in 2006, to serve as a comprehensive end-to-end test of key operations and systems, significant problems were identified during testing. As a result, several key operations were removed from the Dress Rehearsal and did not undergo end-to-end testing. The Bureau has neither developed testing plans for these key operations, nor has it determined when such plans will be completed. 
Weaknesses in the Bureau's testing progress and plans can be attributed in part to a lack of sufficient executive-level oversight and guidance. Bureau management does provide oversight of system testing activities, but the oversight activities are not sufficient. For example, Bureau reports do not provide comprehensive status information on progress in testing key systems and interfaces, and assessments of the overall status of testing for key operations are not based on quantitative metrics. Further, although the Bureau has issued general testing guidance, it is neither mandatory nor specific enough to ensure consistency in conducting system testing. Without adequate oversight and more comprehensive guidance, the Bureau cannot ensure that it is thoroughly testing its systems and properly prioritizing testing activities before the 2010 Decennial Census, posing the risk that these systems may not perform as planned.
Since the 1970s, a variety of work requirements have been tied to the receipt of food stamp benefits, including participation in the Food Stamp E&T Program. Funding for the program has been provided through a combination of federal grants to states, state funds, and federal matching funds. Under the Workforce Investment Act (WIA) of 1998, services for many other federally funded employment and training programs were coordinated through a single system—called the one-stop center system—but the Food Stamp E&T Program was not required to be part of this system. The Food Stamp Program, administered at the federal level by USDA, helps low-income individuals and families obtain a more nutritious diet by supplementing their income with food stamp benefits. The states and FNS jointly administer the Food Stamp Program. The federal government pays the cost of food stamp benefits and 50 percent of the states’ administrative costs. The states administer the program by determining whether households meet the program’s income and asset requirements, calculating monthly benefits for qualified households, and issuing benefits to participants. In fiscal year 2001, the Food Stamp Program served an average of 17.3 million people per month and provided an average monthly benefit of $75 per person. Throughout the history of the Food Stamp Program, a variety of employment and training requirements have been tied to the receipt of food stamp benefits. The Food Stamp Program requires all recipients, unless exempted by law, to register for work at the appropriate employment office, participate in an employment and training program if assigned by a state agency, and accept an offer of suitable employment. Food stamp recipients are exempted from registering for work and engaging in employment and training activities if they are under age 16 or over age 59 or physically or mentally unfit for employment. 
In addition, they are exempted if they are caring for a child under the age of 6, employed 30 hours a week, or subject to and complying with work requirements for other programs, such as those required by TANF. Still others are exempted because they are receiving unemployment insurance compensation, participating in a drug or alcohol treatment and rehabilitation program, or enrolled as students at least half time. The Food Security Act of 1985 created the Food Stamp E&T Program to help participants gain skills, training, or experience that will increase their ability to obtain regular employment. The act requires each state to operate a Food Stamp E&T Program with one or more of the following employment and training activities: job search, job search training, education, vocational training, or work experience. While the act mandates that all nonexempt food stamp recipients register for work, states have the flexibility to determine which local areas will operate a Food Stamp E&T Program and, based on their own criteria, whether or not it is appropriate to refer these individuals to the Food Stamp E&T Program. Since passage of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) in 1996, food stamp recipients aged 18-49 who are “able-bodied” and not responsible for a dependent child—termed able-bodied adults without dependents, or ABAWDs—have a time limit for the receipt of food stamp benefits and specific work requirements. PRWORA marked the first time that federal legislation imposed a time limit on the receipt of benefits for any category of food stamp recipients. Under PRWORA, ABAWDs are limited to 3 months of food stamp benefits in a 36-month period unless they meet one of the following ABAWD work requirements: participate in a qualifying work activity 20 hours per week, work 20 hours per week, engage in any combination of qualifying activities for a total of 20 hours per week, or participate in a work experience program. 
Qualifying activities include education, vocational training, or work experience. ABAWDs may engage in job search or job search training activities within the first month of participation in a work experience program. In addition, ABAWDs can engage in job search activities as part of their work requirements as long as job search does not account for more than half of the time they spend engaged in qualified activities. At the request of states, FNS may waive the 3- out of 36-month requirement and the ABAWD work requirement for ABAWDs who live in an area where the unemployment rate is over 10 percent or where the state can document that there are not a sufficient number of jobs to provide employment for these individuals. The Balanced Budget Act (BBA) of 1997 allowed states to exempt an additional 15 percent of ABAWDs, also from the time limit and ABAWD work requirements, based on criteria developed by the state, such as participants in remote counties. However, ABAWDs are still required to comply with Food Stamp Program requirements, such as registering for work at an appropriate employment office. Food Stamp E&T participants other than ABAWDs—including 16- or 17-year-old heads of households, individuals age 50-60, and individuals age 18-49 who are responsible for a dependent age 6-17—must comply with any Food Stamp E&T work requirement established by the state where they reside. Some states maintain the same work requirements for these participants as they do for ABAWDs. Other states may impose less rigorous requirements, such as engaging in job search activities a few hours a week. (See table 1.) Funding for the Food Stamp E&T Program has been provided through a combination of federal grants to states, state funds, and federal matching funds. USDA provides matching funds by reimbursing states 50 percent for their program administrative costs. 
The agency also reimbursed states for 50 percent of support services—such as participant transportation—up to $12.50 per participant per month. While this basic funding structure is still in place, several changes have been made since the late 1990s. In response to concerns over the ability of ABAWDs to meet the work requirements imposed by PRWORA, the Balanced Budget Act authorized additional federal grant funding each year between 1998 and 2002 for the Food Stamp E&T Program. The additional funding varied from year to year, ranging from $31 million in 1999 to $131 million in 1998 and 2001. In order to access this additional funding, the legislation required that states spend the same amount of state funds on their Food Stamp E&T Program that they did in 1996—referred to as a state’s maintenance-of-effort. In addition, the legislation required that states spend at least 80 percent of their total federal grant funds on work activities for ABAWDs. States had the option to expend only 20 percent of their federal funds if they chose not to focus services on ABAWDs. Between 1998 and 2001, states spent 40 percent or less of the federal allocation. In 2001, over half of the states spent 25 percent or less of their federal grant allocation, while only eight states spent more than three-fourths of their allocation. (See fig. 1.) These low spending rates may reflect both the rapid decline in the number of ABAWDs participating in the Food Stamp Program and states’ decisions about how to structure their programs. The 2002 Farm Bill repealed some of the funding provisions enacted by the Balanced Budget Act. The bill eliminated the additional BBA funds for 2002 and provided $90 million for each year from 2002 through 2007. In addition, the bill provided an additional $20 million in each of these years for states that provide a work activity to every ABAWD who would otherwise be subject to the 3- out of 36-month time limit. 
Fiscal year 2001 funds and unspent prior-year funds were rescinded unless states had already obligated them. The Farm Bill also repealed the states’ maintenance-of-effort requirement. In addition, states no longer have to spend 80 percent of federal grant funds on work activities for ABAWDs. However, the Farm Bill did not eliminate the 3- out of 36-month time limit for benefits or alter the work requirements for ABAWDs. States continue to receive the 50-percent matching federal funds for program administrative costs, and the Farm Bill eliminated the cap on reimbursements to states for support services, such as transportation, allowing states to be reimbursed for 50 percent of all support service expenses. (See fig. 2.) The Workforce Investment Act, which was passed in 1998, requires states and localities to coordinate many federally funded employment and training services through a single system, called the one-stop center system. Through one-stop centers, individuals can access a range of services, including job search and other employment-related activities. WIA mandated that 17 categories of federal employment and training programs across four federal agencies be coordinated through the one-stop system, including three WIA-funded programs—WIA Adult, WIA Dislocated Worker, and WIA Youth. These programs provide three tiers, or levels, of service for adults and dislocated workers: core, intensive, and training. Core services include basic services such as job searches and labor market information and are available to anyone coming into a one-stop center. These activities may be self-service or require some staff assistance. Intensive services include such activities as comprehensive assessment and case management—activities that require greater staff involvement. Training services include such activities as occupational skills or on-the-job training. 
Coordination between the 17 programs generally takes one of two forms: colocation, whereby clients access employment and training services at a local one-stop, or referrals and electronic linkages to off-site programs. While other employment and training programs, such as TANF and the Food Stamp E&T Program, are not required to be a part of the one-stop system, some states have required localities to include these programs in the one-stop system. The Food Stamp E&T Program serves a small proportion of the food stamp population who do not usually receive assistance from other programs and who, according to state and local program officials, have characteristics that make them hard to employ. While USDA collects some nationwide data on the food stamp population for quality control purposes, it does not collect the information in a way that allows the agency to distinguish food stamp recipients participating in the Food Stamp E&T Program from recipients who are participating in other employment and training programs, such as TANF or WIA. However, because most food stamp recipients are exempt from food stamp work requirements due to their age or health, the proportion of food stamp recipients potentially served by the Food Stamp E&T Program is small. While nationwide data on the number and characteristics of Food Stamp E&T participants are not available, state and local officials in the 15 states we reviewed described the population as generally hard to employ because they have little education and limited work histories and are prone to substance abuse problems and homelessness. The officials also noted that many of these characteristics are more prevalent among ABAWDs and that this group is the most difficult to serve and employ. 
Food Stamp E&T participants comprise less than 9 percent of the food stamp population because most food stamp recipients are exempted from work requirements, such as registering for work or participating in the Food Stamp E&T Program. In fiscal year 2001, 91 percent of food stamp recipients were not required to meet work requirements. Over 60 percent were exempted due to their age—most were under 18 or over 59 (see fig. 3). Another 30 percent of food stamp recipients—working age adults—were exempted, over 40 percent of whom were disabled. Other working age adults were exempted because they were caring for a dependent child under age 6 or because they were working at least 30 hours per week. Working age adults may also have been exempted because they were already complying with work requirements of other programs, such as TANF, or because they were enrolled at least part time in school or a training program. Food stamp recipients who participate in key federal cash assistance programs—such as TANF, Supplemental Security Income, or the Unemployment Insurance Program—are exempt from the Food Stamp E&T Program. As a result, those who participate in the Food Stamp E&T Program generally do not receive any federal public cash assistance other than food stamps. (Figure 3 shows that 1,556,000 food stamp recipients were subject to work requirements, while 15,713,000 were exempted.) Not all food stamp recipients subject to work requirements participate in the Food Stamp E&T Program. States have the flexibility to establish their own criteria for selecting which food stamp recipients are referred to the program. 
As a result of this flexibility, in 17 of the 50 states, according to USDA data, over 80 percent of food stamp recipients who were subject to work requirements—including ABAWDs and other mandatory work registrants—were required to participate in the program. However, 8 states required 20 percent or less to participate. (See fig. 4.) While USDA collects nationwide data on the food stamp population for quality control purposes, the agency does not collect the information in a way that identifies the specific employment and training program in which food stamp recipients are participating. Although data from the fiscal year 2001 quality control survey indicate that 8 percent of food stamp recipients are participating as mandatory participants in an employment and training program, USDA officials said questions in this survey regarding program participation do not specify a particular program. Rather, questions are general and could refer to the Food Stamp E&T Program or other employment and training programs such as TANF and WIA-funded programs. As a result, the agency is unable to distinguish food stamp recipients active in the Food Stamp E&T Program from those active in other employment and training programs. This prevents the agency from using the quality control survey to estimate the number or describe the characteristics of Food Stamp E&T participants. While there are no nationwide data on the characteristics of Food Stamp E&T participants, state and local officials we spoke with in all 15 states said their Food Stamp E&T participants have multiple characteristics that make them hard to employ. Officials noted that Food Stamp E&T participants generally have limited education; often they have not completed high school. They also said that program participants frequently have a limited work history and few work skills. 
They noted that Food Stamp E&T participants often depend on seasonal employment such as tourism-related jobs, and at least one official said that many of their participants rarely hold a job for more than 3 months. Program officials also told us that participants, particularly those in rural settings, often lack transportation, making their continued employment difficult. Finally, officials identified mental health issues, substance abuse, and homelessness as additional characteristics making participants hard to employ. Officials from Colorado estimated, for example, that at least 40 percent of their Food Stamp E&T participants had substance abuse problems and 40 percent were homeless. In addition to providing anecdotal information on Food Stamp E&T participants, some states were able to provide quantitative data on a limited number of participant characteristics. While not required to collect or report these data to USDA, 8 of 15 states we contacted collected data on the gender, age, or income of Food Stamp E&T participants. In 6 of the 8 states, Food Stamp E&T participants were predominantly women, as were the majority of food stamp recipients (see fig. 5), and data from 5 states show that most of their participants are between the ages of 18 and 40. (See app. I for a comparison of food stamp recipients and Food Stamp E&T participants by age.) Similar to all food stamp recipients, Food Stamp E&T participants generally have very low incomes. Three states provided us with data on participant incomes. Officials from California said the majority of their participants had incomes less than $800 per month, and officials from Colorado and Illinois said most participants have incomes less than $200 per month. According to officials from 8 of the 15 states we contacted, ABAWDs—who comprised 4 percent of the food stamp population nationwide in fiscal year 2001—have characteristics that make them the most difficult to serve and employ of all Food Stamp E&T participants. 
While a nationwide estimate of the number of ABAWDs participating in the Food Stamp E&T Program is not available, 8 states were able to provide data on the proportion of participants who were ABAWDs. The proportion varied greatly, from 1 percent in New Mexico to 100 percent in Florida and Illinois. (See fig. 6.) Program officials said that ABAWDs—who are most often men—are more likely to lack basic skills such as reading, writing, and basic mathematics than other food stamp participants. In addition, officials said mental health issues, substance abuse, and homelessness are more prevalent among ABAWDs than other participants. A recent report cites these three characteristics as among the most common barriers to serving ABAWDs. The report also concludes that ABAWDs have less income—earned and unearned—than other food stamp recipients age 18 to 49. While the characteristics that make Food Stamp E&T participants hard to employ are more pronounced among ABAWDs, this group also presents unique challenges that add to the difficulties of serving them. First, ABAWDs are usually transient and, as a result, often only participate in the program for short durations. Moreover, officials said ABAWDs are often unwilling to participate and frequently fail to show up for appointments. Some officials suggested that this unwillingness to participate stems partly from ABAWDs’ perception that their benefit level—an average of $118 of food stamp benefits per month—is too low to warrant participation in the program. Officials we spoke with and a recent report note that monitoring the activities of ABAWDs has been difficult due to the complexities of program requirements. For example, in order to determine whether ABAWDs may continue to receive food stamp benefits, states track ABAWDs to ensure that they are engaged in a qualifying work activity. ABAWDs may only receive benefits for 3 out of 36 months if they are not engaged in a qualifying work activity. 
Program officials said these requirements, in combination with ABAWDs’ sporadic participation in the program and reluctance to participate, discourage states from using their Food Stamp E&T resources to serve these individuals. In 2001, 25 states spent 20 percent or less of their federal grant allocation. Eight of the 25 states chose not to serve ABAWDs and, as a result, were limited to spending only 20 percent of their federal grant funds. The other 17 states also spent 20 percent or less but may have served ABAWDs as well as other mandatory participants. While the 2002 Farm Bill removed the requirement that states spend 80 percent of federal grant funds on work activities for ABAWDs, states must still track ABAWD compliance with the 3- out of 36-month time limit. States provide Food Stamp E&T participants with case management services and offer some support services, such as transportation assistance. While states may provide participants with a range of employment and training activities, in 2001, states most often placed participants in job search and work experience. Other programs that serve low-income populations, such as TANF and the WIA Adult Program, provide similar activities. Legislative changes in the 2002 Farm Bill, however, may affect services that states provide to Food Stamp E&T participants. According to USDA officials, most states provide Food Stamp E&T participants with case management services. Case management services may include assessing a participant’s needs, developing an employment plan, or helping participants access services provided by other programs. For example, one state official told us that case managers work with participants and local housing organizations to help find shelter for the participants or get mental health services so they are ready to go to work. 
Case managers also work with Food Stamp E&T participants to help them access support services—services that provide assistance with transportation and work or education-related expenses. USDA data show that in fiscal year 2001, 45 states provided transportation funds to Food Stamp E&T participants. In addition to basic transportation and other services paid for in part with federal grant funds, program officials told us some local Food Stamp E&T Programs provide participants with additional support services. Some local programs use state funds or coordinate with community-based organizations to obtain other services for participants. For example, one local Food Stamp E&T Program provides bicycles donated by a community-based organization to some participants who need transportation to get to work, while another provides basic hygiene products, such as soap and shampoo, because food stamp recipients may not use food stamp benefits to buy these products. While most Food Stamp E&T participants receive case management services, they also may engage in a range of employment and training activities to qualify for food stamp benefits. These include job search, job search training, work experience, education, and vocational training. Participants may also enroll in WIA or a Trade Adjustment Act-funded program. Job search activities may include self-directed or staff-assisted activities. Job search training activities include job skills assessment and participation in job clubs, wherein participants meet with other job seekers and local employers to obtain information on the jobs available in the area and assistance in marketing their skills. Participants engaged in work experience activities are required to work without pay in exchange for food stamp benefits. Education activities may include literacy training, high school equivalency programs, or postsecondary education, while vocational training provides skill-related training. 
While USDA does not require states to report individual participant activities, it does collect data on the number of participants placed in each activity. In fiscal year 2001, 40 of the 50 states provided data to USDA for participant employment and training activities. The data show that case managers most frequently assigned Food Stamp E&T participants to job search activities, including job search and job search training. (See fig. 7.) However, while job search accounted for about 49 percent of participant activities, the extent to which states provided job search activities varied. (See fig. 8.) For example, 2 states did not report offering any job search activities to participants, while in 11 of the 40 states, job search activities accounted for almost all of participant activities. (See app. II for a complete listing of the percent of program activities provided to participants.) Legislative changes enacted by the 2002 Farm Bill may affect the services that states provide to program participants by reducing the total amount of Food Stamp E&T federal funds available to states to $110 million—or $274 million lower than funds they had available in fiscal year 2001. As a result, most states will receive a smaller allocation in 2003 than they received in 2001, although 4 states will receive a greater allocation, in part due to changes in USDA’s funding formula. However, this funding decrease may have a greater impact on some states than others because not all states have been spending a large proportion of their federal grant allocation. For example, in 2001, more than half of the states spent less than 25 percent of their allocation, while only 8 states spent more than 75 percent. As a result of the funding decrease and states’ varied spending rates, about one-third of the states will receive a smaller allocation in 2003 than they spent in 2001. (See app. 
III for a comparison of what states spent in fiscal year 2001 and their allocations in fiscal years 2001 and 2003.) However, because the Farm Bill also eliminated the requirement that states reserve 80 percent of federal grant funds for activities for ABAWDs, states may choose to spend as much of their federal allocation as they did before the requirement became effective in 1998. For example, in 1997, 46 states spent more than 75 percent of their allocation, with states spending 94 percent of the total federal allocation. In 13 of the 15 states we contacted, the agency that administers the TANF block grant also oversees the Food Stamp E&T Program; in the 2 other states, the Food Stamp E&T Program is administered by the workforce development system. However, services are provided through a variety of local entities, including welfare offices and one-stop centers. While all but 1 of the states we contacted delivered at least some of their Food Stamp E&T services at the one-stops, the extent to which states use the one-stops to deliver these services varies considerably. For example, in Virginia, only two Food Stamp E&T Programs are colocated at the one-stops. In other counties, services are delivered at welfare offices. In Colorado, about one-third of the counties that provide Food Stamp E&T services—primarily the larger counties—deliver their Food Stamp E&T services through the one-stops. Other counties in Colorado deliver services through local welfare agencies or community-based organizations, such as Goodwill Industries. In Texas, the state’s workforce commission administers the Food Stamp E&T Program, and all program services statewide are delivered through the one-stop system. Food Stamp E&T participants may receive job search services through the one-stop centers, but according to many local program officials, few participants receive other services from employment and training programs available at the centers, such as the WIA Adult Program. 
In Pennsylvania, Food Stamp E&T participants are referred to the one-stops for job search activities, and in Vermont, almost all participants receive WIA-funded core services through the one-stop system. These services may include job search activities but may also include a preliminary assessment of skills and needs. Most state officials told us that they did not collect data on how many Food Stamp E&T participants were referred to or received services from other employment and training programs at the one-stops. However, local officials in 10 of the 15 states told us that few, if any, Food Stamp E&T participants actually receive services from other employment and training programs at the one-stops, and a few provided estimates. For example, a local official in New Mexico estimated that his office referred about one-fourth of its Food Stamp E&T participants to the WIA Adult Program in any given year, but fewer than half of these are actually enrolled in the program. Local officials in Idaho, by comparison, said that while about one-third of their Food Stamp E&T participants are referred in any given year, only about 2 percent are enrolled in WIA-funded intensive or training services. A Food Stamp E&T administrator in Michigan told us that, even though the Food Stamp E&T Program is colocated at a one-stop center in his county, the center served only three or four clients a year. Program officials cited several reasons that Food Stamp E&T participants may not receive services from other employment and training programs. Officials from eight of the states we spoke with suggested that local WIA staff might be reluctant to provide WIA-funded intensive and training services to a population less likely to get and keep a job—such as those in the Food Stamp E&T Program—out of concern that they would adversely affect their performance as measured under WIA. 
While job seekers who receive core services that are self-service in nature are not included in these performance measures, participants enrolled in WIA-funded intensive or training programs are tracked in areas such as job placement, retention, and earnings change. WIA established these performance measures, and states are held accountable by the U.S. Department of Labor for their performance in these areas. If states fail to meet their expected performance levels, they may suffer financial sanctions; if states meet or exceed their levels, they may be eligible to receive additional funds. While employment and training programs at the one-stops offer some of the activities that Food Stamp E&T participants need, officials from 12 of the 15 states we contacted told us that most participants are not ready for these activities, in part, because they lack basic skills (such as reading and computer literacy) that would allow them to successfully participate. Officials from 5 states also noted that mental health problems often prevent Food Stamp E&T participants from participating in other more intensive employment and training programs at the one-stops. Program officials told us participants often need specialized case management services that might not be available from other program staff. Despite concerns about performance measures and the skill level of Food Stamp E&T participants, program officials from all 15 states we contacted cited advantages to colocating the Food Stamp E&T Program at the one-stops. The most frequently cited advantage was that Food Stamp E&T participants would benefit from having access to a broader array of employment and training services. In addition, officials from 9 of the states noted that colocation would provide a better use of program resources and staff, and program officials from 8 states said that the one-stops offer a more positive environment—one focused more on work and training than might be found in local welfare offices.
Finally, officials from 7 states said that for those who may lack transportation, colocation of services would be advantageous. Little information is available about whether the Food Stamp E&T Program is effective in helping participants get and keep a job. Although USDA does not require the reporting of outcome data, 7 of the 15 states we contacted collected data in fiscal year 2001 on job placements, and 2 of these states also collected data on wages. Their job placement rates ranged from 15 percent in one state to 62 percent in another, and the average starting wage reported by the 2 states was about $7.00 per hour, or about $1.91 above the federal minimum wage. In the late 1980s, USDA developed outcome measures for the Food Stamp E&T Program, but these measures were not implemented because of concerns among state and federal officials regarding the feasibility of collecting outcome data. In 1988, the Hunger Prevention Act directed the Secretary of Agriculture to work with states and other federal agencies to develop outcome-based performance standards for the program. The proposed measures included a targeted job placement rate (25 percent of those completing Food Stamp E&T activities) and a targeted average starting wage of $4.45—about the same as the minimum wage in the early 1990s. FNS published the proposed performance standards in 1991. According to USDA officials, reaction to implementing the proposed standards was overwhelmingly negative, with a consensus among state and federal officials that data collection would impose an unreasonable burden on state agencies and that the costs associated with collecting the data would be disproportionate relative to the program’s funding. The mandate to collect outcome data was subsequently removed from the legislation in 1996. Outcome measures became a much greater factor in how agencies assess the effectiveness of their programs with the passage of the 1993 Government Performance and Results Act (GPRA).
GPRA shifted the focus of accountability for federal programs from inputs, such as staffing and activity levels, to outcomes. GPRA requires that each federal agency develop a multiyear strategic plan identifying the agency’s mission and long-term goals and connecting these goals to program activities. In addition, the President’s 2004 Budget contains increased emphasis on performance and management assessments, including a focus on short-term and long-term performance goals and the need to track performance data in order to assess a program’s achievements. For example, the Office of Management and Budget expects agencies to submit performance-based budgets in 2005 and is requiring that 25 adult employment and training programs collect performance data in four areas: job placements, job retention, earnings gained, and program cost per job placement. This focus may lend new urgency for programs to collect outcome data. While outcome measures are an important component of program management in that they assess whether a participant is achieving an intended outcome—such as obtaining employment—they cannot measure whether the outcome is a direct result of program participation. Other influences, such as the state of the local economy, may affect an individual’s ability to find a job as much as or more than participation in an employment and training program. Many researchers consider impact evaluations to be the best method for determining the effectiveness of a program—that is, whether the program itself rather than other factors leads to participant outcomes. In 1988, USDA commissioned an impact study to determine the effectiveness of the Food Stamp E&T Program and found that those required to enroll in the program did not fare any better, in terms of employment or wages, than those excluded from participating.
While the study found that those required to enroll in the program increased their employment and earnings during the 12 months after certification for food stamp benefits, it found no difference between that group and those not required to participate. The study notes, however, that only 43 percent of those required to participate actually received employment and training activities in 1988 and that the services received by the program participants consisted primarily of referrals to job search activities. According to USDA officials, the agency has no plans to conduct another effectiveness evaluation of the Food Stamp E&T Program. They noted that the program is not a research priority for the agency’s food and nutrition area, and no mention of the program is noted in FNS’s strategic plan. They also noted that the cost of an evaluation might not be warranted, given the limited funding for the program. Federal funding for the program (including reimbursements for administrative costs) is small compared with other programs—averaging about $172 million per year between 1994 and 2001—compared to about $3.8 billion for WIA programs in fiscal year 2001. However, the federal government and the states have spent over $2 billion since 1994 on the Food Stamp E&T Program without any nationwide data documenting whether the program is helping its participants. While impact evaluations may be expensive and complex to administer, they are being used to assess the effectiveness of some federal programs. For example, the Department of Health and Human Services (HHS) is conducting evaluation studies on early childhood programs, and the Department of Labor recently evaluated the impact of the Job Corps program on student employment outcomes. In addition, both of these agencies are conducting research over the next 5 years that focuses on strategies to assist the hardest-to-serve, but they do not include the Food Stamp E&T population. 
HHS is commissioning an evaluation of programs that serve hard-to-employ, low-income parents, in part, to determine the effects of such programs on employment and earnings. And Labor plans to examine the most effective strategies for addressing employment barriers such as substance abuse and homelessness. The Food Stamp E&T Program was established to help some food stamp recipients get a job and reduce their dependence on food stamps. For many Food Stamp E&T participants—who often lack the skills to be successful in other employment and training programs and who usually are not eligible for most other federal assistance programs—this program is the only one focused on helping them enter the workforce. But little is known at any level—federal, state, or local—about whether the program is achieving this goal. Little nationwide data exist to tell us who is participating or if they are getting a job. Even less is known about whether the services provided by the program make a difference in program outcomes. With limited knowledge of whom the program is serving, what outcomes the program is achieving, or whether program services are making a difference, it is difficult to make informed decisions about where to place limited employment and training resources. Given recent legislative changes that reduce most states’ funds while allowing more discretion as to whom they serve, it may be even more essential to understand what works and what does not. While the Food Stamp E&T Program is small relative to other federal employment and training programs, wise investment of these resources could help reduce long-term spending on food stamp benefits.
To help USDA better understand who the Food Stamp E&T Program is serving, what the program is achieving, and whether the program is effective, we recommend that USDA do the following:

- Use its quality control survey to collect nationwide estimates on the number of food stamp recipients participating in the Food Stamp E&T Program and their characteristics, such as age and gender. To do so, USDA should clarify its instructions for reporting the data so that states clearly identify which food stamp recipients are in the Food Stamp E&T Program.
- Establish uniform outcome measures for the Food Stamp E&T Program and require states to collect and report them.
- Work with the Department of Labor and/or the Department of Health and Human Services on a research agenda that will allow for an evaluation of the effectiveness of the Food Stamp E&T Program.

We provided a draft of this report to USDA for comment. While FNS did not provide written comments, FNS officials provided us with oral comments on the draft, including technical changes, which we incorporated where appropriate. FNS generally agreed with the benefit of collecting more data on the Food Stamp E&T Program; however, the agency had concerns that the potential benefits of more data may not be worth the effort or cost. Regarding our recommendation for more data on whom the program is serving, FNS said that because the Food Stamp Quality Control survey collects information from only a sample of food stamp households—and that individuals participating in the Food Stamp E&T Program would comprise a small percentage of those included in the sample—the data collected would be of limited use at the state level. While we agree that characteristic data gathered from the survey may not be useful at the state level, the survey could provide a cost-effective means to obtain nationwide data that are currently not available and would allow FNS to better understand the population that the program is serving.
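The trade-off FNS raises—a household sample yields usable nationwide estimates but thin state-level ones—can be illustrated with a simple projection. All figures below (sample size, participant count, total caseload) are invented for the sketch and are not drawn from the actual quality control survey.

```python
# Hypothetical illustration of projecting Food Stamp E&T participation
# from a quality-control-style sample. Every number here is invented.
import math

def estimate_participation(sample_size, sample_participants, total_caseload,
                           z=1.96):
    """Project a sample proportion onto the full caseload, with a
    95 percent margin of error (normal approximation)."""
    p = sample_participants / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return {
        "estimated_rate": p,
        "estimated_participants": round(p * total_caseload),
        "participants_low": round(max(p - margin, 0.0) * total_caseload),
        "participants_high": round(min(p + margin, 1.0) * total_caseload),
    }

# Example: 50,000 sampled households, 1,500 flagged as E&T participants,
# projected onto a hypothetical caseload of 17 million recipients.
result = estimate_participation(50_000, 1_500, 17_000_000)
print(result["estimated_participants"])  # 510000
```

The same arithmetic run on one state's share of the sample would yield only a handful of flagged cases and a very wide interval, which is the limitation FNS describes.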
While FNS agreed with the need to assess what the Food Stamp E&T Program is achieving, agency officials expressed concerns regarding the cost of implementing our recommendation related to outcome data. Specifically, the officials are concerned that states will find it overly burdensome to collect outcome data given the limited funding for this program and that costs associated with collecting these data might reduce funding available for program participants. The officials noted that other employment and training programs that collect outcome data, such as WIA-funded programs, are funded at much higher levels than the Food Stamp E&T Program and that costs associated with collecting data for these programs might not be as onerous as for the Food Stamp E&T Program. We considered the costs associated with collecting outcome data and while we agree that collecting data will entail additional administrative costs for the states, we believe that the benefits of collecting uniform outcome measures outweigh the costs to states. Having some measures of what the program is achieving is necessary for FNS and state administrators as they strive to improve program services—about half of the states we contacted already collect some data on program performance. In addition, outcome data provide the Congress with key information necessary to evaluate the effectiveness of federal employment and training programs. Many federal employment and training programs, including ones that have funding levels similar to the Food Stamp E&T Program, have integrated outcome measures into the administration of their programs. The emphasis on performance evaluation is reflected in the President’s 2004 Budget and the Office of Management and Budget’s requirement that agencies submit performance-based budgets and that employment and training programs collect uniform performance data. 
Finally, FNS reiterated that given its limited research funds and other high-priority research areas, evaluation of the Food Stamp E&T Program is not a research priority for the agency at this time. However, regarding our recommendation concerning the feasibility of an effectiveness evaluation, FNS acknowledged the usefulness and cost-effectiveness of working with other agencies that are evaluating employment and training services for hard-to-serve populations. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix IV. Appendix II: Percent of Food Stamp E&T Activities Provided to Program Participants, Fiscal Year 2001. [State-by-state table of activity percentages, with columns including work experience and workfare; the individual figures are not recoverable in this copy.] Note: Data not provided by state to USDA for some entries.
Appendix III: Food Stamp E&T federal grant allocations by state for fiscal years 2001 and 2003 and the difference between the two years’ allocations. [The state-by-state dollar figures are not recoverable in this copy.] Elspeth Grindstaff and Angela Miles made significant contributions to this report. In addition, Jessica Botsford provided legal support, Marc Molino provided graphic design assistance, and Susan Bernstein provided writing assistance. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. Department of Agriculture, Food and Nutrition Service: Food Stamp Program: Work Provisions of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 and Food Stamp Provisions of the Balanced Budget Act of 1997. GAO-02-874R. Washington, D.C.: July 17, 2002. Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002.
Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Food Stamp Program: Implementation of the Employment and Training Program for Able-Bodied Adults Without Dependents. GAO-01-391R. Washington, D.C.: February 27, 2001. Department of Agriculture, Food and Nutrition Service: Food Stamp Program—Food Stamp Provisions of the Balanced Budget Act of 1997. GAO/OGC-99-66. Washington, D.C.: September 17, 1999. Food Stamp Program: Information on Employment and Training Activities. GAO/RCED-99-40. Washington, D.C.: December 14, 1998.
Since the late 1990s, many funding changes have been made to the Food Stamp E&T Program. In 1997, legislation required states to spend 80 percent of their funds on participants who lose their food stamp benefits if they do not meet work requirements within a limited time frame. The legislation also increased funds by $131 million to help states serve these participants. But spending rates for the program declined until, in 2001, states spent only about 30 percent of the federal allocation. In 2002, the Congress reduced federal funds to $110 million a year. While it is too soon to know the impact of these changes, GAO was asked to determine whom the program serves, what services are provided, and what is known about program outcomes and effectiveness. Food Stamp Employment and Training (E&T) participants are a small proportion of the food stamp population and do not usually receive cash assistance from other programs. While the U.S. Department of Agriculture (USDA) does not collect nationwide data on the number and characteristics of Food Stamp E&T participants, program officials in the 15 states GAO contacted described the population as generally hard to employ because they have little education and a limited work history. States may provide program participants with a range of employment and training activities that qualify them for food stamp benefits. USDA data show that, in fiscal year 2001, job search accounted for about half of all participant activities. Work experience--whereby participants receive food stamp benefits in exchange for work--accounted for about 25 percent. Food Stamp E&T services are delivered through a variety of local entities, such as welfare offices or one-stop centers--sites designed to streamline the services of many federal employment and training programs. 
While all but 1 of the 15 states delivered at least some of their Food Stamp E&T services at the one-stops, Food Stamp E&T participants do not usually engage in intensive services provided by other programs at the one-stops. Program officials from most of the 15 states noted that Food Stamp E&T participants generally lack basic skills that allow them to use other program services successfully. No nationwide data exist on whether the Food Stamp E&T Program helps participants get a job. While some outcome data exist at the state level, it is not clear the outcomes were the result of program participation. USDA has no plans to evaluate the effectiveness of the program nor have the Departments of Labor or Health and Human Services included Food Stamp E&T participants in their studies of the hardest-to-employ.
The transportation of air cargo between global trading partners provides the world economy with critical goods and components. Air cargo valued at almost $400 billion entered the United States in fiscal year 2004. According to TSA, approximately 200 U.S. and foreign air carriers currently transport cargo into the United States from foreign countries. During calendar year 2005, almost 9.4 billion pounds of cargo was shipped by air into the United States. About 40 percent of this amount, or 4 billion pounds, traveled onboard passenger aircraft. Typically, about one-half of the hull of each passenger aircraft transporting cargo is filled with cargo. Air cargo includes freight and express packages that range in size from small to very large, and in type from perishables to machinery, and can include items such as electronic equipment, automobile parts, clothing, medical supplies, other dry goods, fresh cut flowers, fresh seafood, fresh produce, tropical fish, and human remains. Cargo can be shipped in various forms, including large containers known as unit loading devices that allow many packages to be consolidated into one container that can be loaded on an aircraft, wooden crates, assembled pallets, or individually wrapped/boxed pieces, known as break bulk cargo. Participants in the international air cargo shipping process include shippers, such as individuals and manufacturers; freight forwarders or regulated agents, who consolidate shipments and deliver them to air carriers; air cargo handling agents, who process and load cargo onto aircraft on behalf of air carriers; and passenger and all-cargo carriers that store, load, and transport air cargo. International air cargo may have been transported via ship, train, or truck prior to its loading onboard an aircraft. Shippers typically send cargo by air in one of two ways. Figure 1 depicts the two primary ways in which a shipper may send cargo by air to the United States.
A shipper may take its packages to a freight forwarder, or regulated agent, which consolidates cargo from many shippers and delivers it to air carriers. The freight forwarder usually has cargo facilities at or near airports and uses trucks to deliver bulk freight to air carriers—either to a cargo facility or to a small-package receiving area at the ticket counter. A shipper may also send freight by directly packaging and delivering it to an air carrier’s ticket counter or sorting center, where either the air carrier or a cargo handling agent will sort and load cargo onto the aircraft. The shipper may also have cargo picked up and delivered by an all-cargo carrier, or choose to take cargo directly to a carrier’s retail facility for delivery. As noted in figure 1, the inspections of air cargo can take place at several different points throughout the supply chain. For example, inspections can take place at a freight forwarder’s or regulated agent’s consolidation facility, or at the air carrier’s sorting center. The Aviation and Transportation Security Act (ATSA) charged TSA with the responsibility for ensuring the security of the nation’s transportation systems, including the transportation of cargo by air into the United States. In fulfilling this responsibility, TSA (1) enforces security requirements established by law and implemented through regulations, security directives, TSA-approved security programs, and emergency amendments, covering domestic and foreign passenger and all-cargo carriers that transport cargo into the United States; (2) conducts inspections to assess air carriers’ compliance with established requirements and procedures; (3) conducts assessments at foreign airports to assess compliance with international aviation security standards, including those related to air cargo; and (4) conducts research and development of air cargo security technologies.
Air carriers (passenger and all-cargo) are responsible for implementing TSA security requirements, predominantly through a TSA-approved security program that describes the security policies, procedures, and systems the air carrier will implement and maintain in order to comply with TSA security requirements. These requirements include measures related to the acceptance, handling, and inspection of cargo; training of employees in security and cargo inspection procedures; testing employee proficiency in cargo inspection; and access to cargo areas and aircraft. If threat information or events indicate that additional security measures are needed to secure the aviation sector, TSA may issue revised or new security requirements in the form of security directives or emergency amendments applicable to domestic or foreign air carriers. The air carriers must implement the requirements set forth in the security directives or emergency amendments in addition to those requirements already imposed and enforced by TSA. Under TSA regulations, the responsibility for inspecting air cargo is assigned to air carriers. TSA requirements, described in air carrier security programs, security directives, and emergency amendments, allow air carriers to use several methods and technologies to inspect domestic and inbound air cargo. These include manual physical searches and comparisons between airway bills and cargo contents to ensure that the contents of the cargo shipment match the cargo identified in documents filed by the shipper, as well as using approved technology, such as X-ray systems, explosive trace detection systems, decompression chambers, explosive detection systems, and TSA explosives detection canine teams. (For an example of X-ray technology used by air carriers to inspect air cargo prior to its transportation to the United States, see fig. 2).
TSA currently requires passenger air carriers to randomly inspect a specific percentage of nonexempt air cargo pieces listed on each airway bill. Under TSA’s inbound air cargo inspection requirements, passenger air carriers can exempt certain cargo from inspection. TSA does not regulate foreign freight forwarders, or individuals or businesses that have their cargo shipped by air to the United States. To assess whether air carriers properly implement TSA inbound air cargo security regulations, the agency conducts regulatory compliance inspections of foreign and domestic air carriers at foreign airports. Currently, TSA conducts compliance inspections of domestic and foreign passenger carriers transporting cargo into the United States, but does not perform such inspections of all air carriers transporting inbound air cargo. TSA inspects air cargo procedures as part of its broader international aviation security inspections program, which also covers other security areas, such as aircraft and passenger security. Compliance inspections can include reviews of documentation, interviews of air carrier personnel, and direct observations of air cargo operations. Air carriers are subject to inspection in several areas of cargo security, including accepting cargo from unknown shippers, access to cargo, and security training and testing. Appendix II contains a detailed description of TSA’s efforts to assess air carrier compliance with inbound air cargo security requirements. In addition, TSA assesses the effectiveness of the security measures maintained at foreign airports that serve U.S. air carriers, from which foreign air carriers serve the United States, or that pose a high risk of introducing danger to international air travel. To conduct its assessments, TSA must consult with appropriate foreign officials to establish a schedule to visit each of these foreign airports.
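The per-airway-bill random selection described above can be sketched as follows. The 10 percent rate, the piece identifiers, and the helper name are all hypothetical: the actual TSA-mandated inspection percentage is sensitive security information and is not reproduced here.

```python
# Hypothetical sketch of random piece selection on one airway bill.
# The rate and identifiers are invented; real TSA percentages are
# sensitive security information.
import math
import random

def select_pieces_for_inspection(piece_ids, percentage, rng=None):
    """Randomly pick at least `percentage` of the nonexempt pieces
    on one airway bill, always selecting at least one piece."""
    rng = rng or random.Random()
    count = max(1, math.ceil(len(piece_ids) * percentage / 100))
    return sorted(rng.sample(piece_ids, count))

rng = random.Random(7)  # fixed seed so the example is repeatable
pieces = [f"AWB123-{i:03d}" for i in range(1, 21)]  # 20 pieces on one bill
chosen = select_pieces_for_inspection(pieces, 10, rng)
print(len(chosen))  # 2 pieces (10 percent of 20)
```

Rounding up and enforcing a minimum of one piece ensures that even a bill listing a single piece gets some inspection coverage.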
TSA assessments evaluate the security policies and procedures in place at a foreign airport to ensure that the procedures meet baseline international aviation security standards, including air cargo security standards. For further information on TSA’s foreign airport assessments including the results of its assessment conducted during fiscal year 2005, see appendix III. CBP determines the admissibility of cargo entering the United States and is authorized to inspect inbound air cargo for security purposes. Specifically, CBP requires air carriers to submit cargo manifest information prior to the aircraft’s arrival in the United States. CBP also has authority to negotiate with foreign nations to place CBP officers abroad to inspect persons and merchandise prior to their arrival in, or subsequent to their exit from, the United States, but has not yet negotiated arrangements with foreign host nations to station CBP officers overseas for the purpose of inspecting high-risk air cargo shipments. At U.S. airports, CBP officers may conduct searches of persons, vehicles, baggage, cargo, and merchandise entering or departing the United States. Since September 11, 2001, CBP’s priority mission has focused on keeping terrorists and their weapons from entering the United States. To carry out this responsibility, CBP employs several systems and programs. CBP’s Automated Targeting System (ATS) is a model that combines manifest and entry declaration information into shipment transactions and uses historical, specific enforcement, and other data to help target cargo shipments for inspection. ATS also has targeting rules that assign a risk score to each arriving shipment based in part on manifest information, as well as other shipment information, and potential threat or vulnerability information, which CBP staff use to make decisions on the extent of inspection to be conducted once the cargo enters the United States. 
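A rule-based targeting model of the kind ATS applies can be sketched in miniature. The rules, weights, and inspection threshold below are invented purely for illustration; the real ATS rule set is sensitive and not public.

```python
# Hypothetical illustration of rule-based cargo risk scoring.
# Rules, weights, and threshold are invented for this sketch and
# bear no relation to the actual ATS rule set.
RULES = [
    ("first_time_shipper", 30),  # no shipment history on file
    ("vague_description", 25),   # manifest description is generic
    ("high_risk_route", 20),     # origin flagged by threat data
    ("late_manifest", 15),       # manifest filed close to arrival
]
INSPECTION_THRESHOLD = 50

def risk_score(shipment):
    """Sum the weights of every rule the shipment trips."""
    return sum(weight for flag, weight in RULES if shipment.get(flag))

def disposition(shipment):
    """High-scoring shipments undergo mandatory security inspection."""
    if risk_score(shipment) >= INSPECTION_THRESHOLD:
        return "inspect"
    return "release"

shipment = {"first_time_shipper": True, "vague_description": True}
print(risk_score(shipment))   # 55
print(disposition(shipment))  # inspect
```

As the report notes, the score only informs the extent of inspection; officers may still inspect any shipment they judge suspicious regardless of its score.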
To support its targeting system, CBP requires air carriers to submit cargo manifest information prior to the flight arriving in the United States. CBP officers use the ATS risk scores to help them make decisions regarding the extent of inspection to be conducted once the cargo arrives in the United States. Shipments identified by CBP as high risk through its ATS targeting system are to undergo mandatory security inspections. CBP officers may also inspect air cargo if they determine that a particular shipment is suspicious or somehow poses a threat. CBP uses a variety of nonintrusive technologies and methods to inspect some air cargo once it arrives in the United States. For example, CBP officers carry personal handheld radiation detectors, as well as handheld radioactive isotope identification devices, which can distinguish radiological material used in medicine or industry from weapons-grade material. Other technologies and methods CBP uses to inspect inbound air cargo include mobile X-ray machines contained in vans, pallet X-ray systems, mobile vehicle and cargo inspection systems (VACIS), and canine teams. The results of the nonintrusive inspections determine the need for additional measures, which could include physical inspections conducted by CBP officers. Figure 3 shows an example of CBP officers using nonintrusive technology to inspect inbound air cargo upon its arrival in the United States. To strengthen the security of the inbound cargo supply chain, the U.S. Customs Service (now CBP) initiated the voluntary Customs-Trade Partnership Against Terrorism (C-TPAT) program in November 2001. This program provides companies that implement CBP-defined security practices a reduced likelihood that their cargo will be inspected once it arrives in the United States. To become a member of C-TPAT, companies must first submit signed C-TPAT agreements affirming their desire to participate in the voluntary program.
Companies must also provide CBP with security profiles that describe the current security procedures they have in place, such as pre-employment screening, periodic background reviews, and employee training on security awareness and procedures. CBP reviews a company’s application to identify any weaknesses in the company’s security procedures and works with the company to resolve these weaknesses. Once any weaknesses are addressed, CBP signs an agreement stating that the company is considered to be a certified C-TPAT member, eligible for program benefits. After certification, CBP has a process for validating that C-TPAT members have implemented security measures. During the validation process, CBP staff meet with company representatives to verify supply chain security measures. The validation process includes visits to the company’s U.S. and foreign sites, if any. Upon completion of the validation process, CBP reports back to the company on any identified areas that need improvement and suggested corrective actions, as well as a determination of whether program benefits are still warranted for the company. According to CBP officials, they use a risk-based approach to determine the order in which C-TPAT participants should be validated.
These standards are aimed at preventing suspicious objects, weapons, explosives, or other dangerous devices from being placed on board passenger aircraft either through concealment in otherwise legitimate shipments or through gaining access to air cargo shipments via cargo-handling areas. The standards call for member nations to implement measures to ensure the protection of air cargo being moved within an airport and intended for transport on an aircraft, and to ensure that aircraft operators do not accept cargo on passenger flights unless application of security controls has been confirmed and accounted for by a regulated agent or that such cargo has been subjected to appropriate security controls. ICAO standards also provide that except for reasons of aviation security, member states should not require the physical inspection of all air cargo that is imported or exported. In general, member states should apply risk management principles (such as targeting higher-risk cargo) to determine which goods should be examined and the extent of that examination. While compliance with these standards is voluntary, all 180 ICAO members, including the United States, have committed to incorporating these standards into their national air cargo security programs. The International Air Transport Association (IATA) represents about 260 air carriers constituting 94 percent of international scheduled air traffic. Building upon ICAO’s standards, IATA issued voluntary recommended practices and guidelines to help ensure that global air cargo security measures are uniform and operationally manageable.
For example, IATA published a manual that, among other things, encourages air carriers to implement measures and procedures to prevent explosives or other dangerous devices from being accepted for transport by air, conduct pre-employment checks on individuals involved in the handling or inspection of air cargo, and ensure the security of all shipments accepted from persons other than known shippers or regulated agents through physical inspection or some type of screening process. IATA also developed guidelines to assist air carriers in developing security policies by providing detailed suggestions for accepting, handling, inspecting, storing, and transporting air cargo. The World Customs Organization (WCO) consists of 166 member nations, representing 99 percent of global trade, including cargo transported by air. In June 2005, WCO established its Framework of Standards to Secure and Facilitate Global Trade that, among other things, sets forth principles and voluntary minimum security standards to be adopted by its members. The framework provides guidance for developing methods to target and inspect high-risk cargo, establishes time frames for the submission of information on cargo shipments, and identifies inspection technology that could be used to inspect high-risk cargo. Risk management is a tool for informing policy makers’ decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty. In recent years, the President, through Homeland Security Presidential Directives (HSPD), and Congress, more recently through the Intelligence Reform and Terrorism Prevention Act of 2004, required federal agencies with homeland security responsibilities to apply risk-based principles to inform their decision making regarding allocating limited resources and prioritizing security activities. The National Commission on Terrorist Attacks Upon the United States (also known as the 9/11 Commission) recommended that the U.S.
government identify and evaluate the transportation assets that need to be protected, set risk-based priorities for defending them, select the most practical and cost-effective ways of doing so, and then develop a plan, budget, and funding to implement the effort. In addition, DHS issued the National Strategy for Transportation Security in 2005 that describes the policies DHS will apply when managing risks to the security of the U.S. transportation system. We have previously reported that a risk management approach can help to prioritize and focus the programs designed to combat terrorism. As applied in the homeland security context, risk management can help officials make decisions about resource allocations and associated trade-offs in preparing defenses against acts of terrorism and other threats. We have recommended that TSA apply a comprehensive risk-based approach for securing the domestic air cargo transportation system. The Homeland Security Act of 2002 also directed the department’s Directorate of Information Analysis and Infrastructure Protection to use risk management principles in coordinating the nation’s critical infrastructure protection efforts. This includes integrating relevant information, analyses, and vulnerability assessments to identify priorities for protective and support measures by the department, other federal agencies, state and local government agencies and authorities, the private sector, and other entities. Homeland Security Presidential Directive 7 and the Intelligence Reform and Terrorism Prevention Act of 2004 further define and establish critical infrastructure protection responsibilities for DHS and those federal agencies given responsibility for particular industry sectors, such as transportation.
In June 2006, DHS issued the National Infrastructure Protection Plan (NIPP), which named TSA as the primary federal agency responsible for coordinating critical infrastructure protection efforts within the transportation sector, which includes all modes of transportation. The NIPP requires federal agencies to work with the private sector to develop plans that, among other things, identify and prioritize critical assets for their respective sectors. In accordance with the NIPP, TSA must conduct and facilitate risk assessments in order to identify, prioritize, and coordinate the protection of critical transportation systems infrastructure, as well as develop risk-based priorities for the transportation sector. TSA officials reported that work is now under way on specific plans for each mode of transportation, but as of January 2007, they were not completed. To provide guidance to agency decision makers, we have created a risk management framework, which is intended to be a starting point for applying risk-based principles. Our risk management framework entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. DHS’s NIPP describes a risk management process that closely mirrors our risk management framework. Setting strategic goals, objectives, and constraints is a key first step in applying risk management principles and helps to ensure that management decisions are focused on achieving a purpose. These decisions should take place in the context of an agency’s strategic plan that includes goals and objectives that are clear and concise. These goals and objectives should identify resource issues and other factors that affect achieving the goals. Further, the goals and objectives of an agency should link to a department’s overall strategic plan.
The ability to achieve strategic goals depends, in part, on how well an agency manages risk. The agency’s strategic plan should address risk-related issues that are central to the agency’s overall mission. Risk assessment, an important element of a risk-based approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. Risk assessment is a qualitative and/or quantitative determination of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk assessment in a homeland security application often involves assessing three key elements—threat, vulnerability, and criticality or consequence. A threat assessment identifies and evaluates potential threats on the basis of factors such as capabilities, intentions, and past activities. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses. A criticality or consequence assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy, as a basis for identifying which structures or processes are relatively more important to protect from attack. Information from these three assessments contributes to an overall risk assessment that may characterize risks on a scale such as high, medium, or low and provides input for evaluating alternatives and management prioritization of security initiatives. The risk assessment element in the overall risk management cycle may be the largest change from standard management steps and can be important to informing the remaining steps of the cycle. For further details on our risk management framework, see appendix IV. 
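The three-part assessment described above lends itself to a simple scoring illustration. The sketch below is hypothetical: the 1-5 rating scales, the multiplicative combination, and the high/medium/low cutoffs are assumptions chosen for illustration, not an actual DHS or TSA risk assessment methodology.

```python
# Hypothetical sketch of combining threat, vulnerability, and consequence
# assessments into an overall risk characterization. The 1-5 scales, the
# multiplicative model, and the cutoffs are illustrative assumptions only.

def assess_risk(threat: int, vulnerability: int, consequence: int) -> str:
    """Combine three 1-5 ratings into a high/medium/low characterization."""
    for name, score in (("threat", threat),
                        ("vulnerability", vulnerability),
                        ("consequence", consequence)):
        if not 1 <= score <= 5:
            raise ValueError(f"{name} rating must be between 1 and 5")
    combined = threat * vulnerability * consequence  # ranges from 1 to 125
    if combined >= 60:
        return "high"
    if combined >= 20:
        return "medium"
    return "low"

# Example: high threat, moderate vulnerability, severe consequence
print(assess_risk(threat=5, vulnerability=3, consequence=5))  # prints "high"
```

A characterization like this is only an input to the later steps of the cycle (evaluating alternatives and prioritizing initiatives); the choice of scales and cutoffs would itself need to be justified in any real assessment.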
The two components within DHS responsible for air cargo security, TSA and CBP, have initiated efforts to better secure inbound air cargo, but these efforts are in the early stages and could be enhanced. While TSA and CBP have taken some preliminary steps to use risk management principles to guide their decisions related to inbound air cargo security, most of TSA’s and CBP’s efforts to enhance inbound air cargo security are still largely in the planning stages. For instance, TSA has completed a strategic plan to address domestic air cargo security and has identified the primary threats associated with inbound air cargo. However, the agency has not identified goals and objectives for addressing inbound air cargo security, such as how it will coordinate with CBP to ensure that all relevant areas of inbound air cargo security are addressed. Further, TSA has not assessed which areas of inbound air cargo are most vulnerable to attack and which assets are deemed most critical to protect. Another action TSA has taken is the publication of its final air cargo security rule in May 2006 that included a number of provisions aimed at enhancing the security of inbound air cargo. However, TSA’s inbound air cargo inspection requirements continue to allow for a number of exemptions for cargo transported on passenger air carriers, which could be exploited to transport an explosive device. In addition, TSA conducts compliance inspections of domestic and foreign passenger air carriers transporting cargo into the United States, but the agency has not developed an inspection plan that would establish goals and measures for its inspection program to evaluate air carriers’ performance against expected results. Also within DHS, CBP has recently initiated efforts to mitigate the threat of a WMD entering the United States by targeting inbound air cargo transported on passenger and all-cargo aircraft that may pose a security risk and inspecting such cargo once it arrives in the United States. 
CBP also manages the C-TPAT program, which encourages those businesses involved in the transportation of cargo into the United States to enhance their security practices. However, CBP is still in the early stages of developing specific security criteria for air carriers participating in the program. In addition, DHS is in the early stages of researching, developing, and testing technologies to enhance the security of air cargo, but has not yet assessed the results or determined whether these technologies will be deployed abroad. Finally, TSA and CBP have taken steps to coordinate their responsibilities to safeguard air cargo transported into the United States, but the two agencies do not have a systematic process in place to share information that could be used to strengthen their efforts to secure inbound air cargo. Within DHS, TSA and CBP have begun incorporating risk management principles into their inbound air cargo security programs, but these efforts are in the early stages and more work remains to be done. Applying a risk management framework to decision making is one tool to help provide assurance that programs designed to combat terrorism are properly prioritized and focused. Thus, risk management, as applied in the homeland security context, can help decision makers to more effectively and efficiently prepare defenses against acts of terrorism and other threats. Risk management principles can be incorporated on a number of different levels within an agency’s operations. For example, CBP’s ATS system uses information from various sources to assign risk scores to cargo, as part of its risk-managed approach to cargo security. Another example of a risk management activity is considering risk when allocating resources. TSA has underscored the importance of implementing a risk-based approach that protects against known threats, but that is also sufficiently flexible to direct resources to mitigate new and emerging threats.
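As a rough illustration of the risk-scored targeting concept, the sketch below flags shipments whose cumulative rule score crosses an inspection threshold. The rules, weights, and threshold are invented for illustration only; CBP's actual ATS rules and scoring are not public and are not reflected here.

```python
# Illustrative sketch of risk-scored cargo targeting. The rules, additive
# weights, and inspection threshold are invented examples, not CBP's
# actual ATS logic or scores.

INSPECTION_THRESHOLD = 50  # hypothetical cutoff for mandatory inspection

# Each rule: (description, predicate over a manifest record, weight)
RULES = [
    ("shipper not previously seen",   lambda m: not m["known_shipper"], 30),
    ("vague cargo description",       lambda m: m["description"] in {"general cargo", "misc"}, 25),
    ("routed through flagged origin", lambda m: m["origin"] in {"XX", "YY"}, 40),
]

def score_shipment(manifest: dict) -> int:
    """Sum the weights of all rules the manifest record triggers."""
    return sum(weight for _, predicate, weight in RULES if predicate(manifest))

def triage(manifest: dict) -> str:
    """Return 'mandatory inspection' for high-scoring shipments, else 'routine'."""
    return ("mandatory inspection"
            if score_shipment(manifest) >= INSPECTION_THRESHOLD
            else "routine")

shipment = {"known_shipper": False, "description": "general cargo", "origin": "DE"}
print(score_shipment(shipment), triage(shipment))  # prints: 55 mandatory inspection
```

The value of such a scheme rests entirely on advance manifest data being complete and timely, which is why the report later emphasizes CBP's objective of increasing the percentage of goods for which it receives advance information.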
According to TSA, the ideal risk model would be one that could be used throughout the transportation sector and applicable to different threat scenarios. As part of TSA’s risk-based approach, the agency issued an Air Cargo Strategic Plan in November 2003 that focused on securing the domestic air cargo supply chain and transportation system. However, this plan does not describe how the agency plans to secure inbound air cargo. TSA’s Air Cargo Strategic Plan describes an approach for screening or reviewing information on all domestic air cargo shipments to determine their level of relative risk, ensuring that 100 percent of cargo identified as posing an elevated risk is physically inspected, and pursuing technological solutions to physically inspect air cargo. This approach to target elevated risk domestic air cargo for inspection, however, is not yet in place. In developing its Air Cargo Strategic Plan, TSA coordinated with air cargo industry stakeholders representing passenger and all-cargo carriers, as well as with CBP to assist in developing a system for targeting domestic air cargo. TSA’s Air Cargo Strategic Plan, however, does not include goals and objectives for addressing inbound air cargo security, which presents different security challenges than domestic air cargo. According to CBP, the agency has begun a comprehensive review of its current air cargo security strategy, including how C-TPAT as well as relevant TSA programs can be incorporated into this strategy. As part of its risk management efforts, CBP developed a strategic plan covering fiscal years 2007-2011 focusing on securing the nation’s borders at ports of entry, including airports. This plan includes a discussion on how CBP will use risk-based principles to guide decisions related to securing inbound air cargo. 
For example, to achieve CBP’s strategic objective of screening all goods entering the United States by air, CBP plans to develop an approach to increase the percentage of goods for which it receives advance information. By increasing the amount of information available, CBP can better identify low-risk goods and move them quickly through the port of entry, while focusing its resources on inspecting cargo that represents higher risks. As TSA develops a strategy for inbound air cargo, it will be important to work with CBP to ensure that the two agencies coordinate their respective responsibilities for securing inbound air cargo and leverage available information to ensure vulnerabilities are addressed. For example, during discussions with TSA and CBP officials, we determined that, due in part to a lack of coordination between the two agencies, neither agency was addressing an area that both considered a potential threat to air cargo security. Although TSA and CBP have not stated whether this issue results in a vulnerability to the cargo’s transport to the United States, some air cargo industry stakeholders with whom we spoke told us it represents a security vulnerability. TSA officials acknowledged that it is important to partner with CBP, foreign governments, and international air cargo stakeholders in developing a strategy for securing inbound air cargo. TSA officials stated that they plan to revise their existing domestic air cargo strategic plan and will consider incorporating a strategy for addressing inbound air cargo security at that time. However, as of January 2007, agency officials had not set a time frame for when TSA will complete this revision, and the extent to which this plan will address inbound air cargo is unclear. CBP officials stated that their input could contribute to any strategy developed by TSA, and that CBP is in the initial stages of developing its own air cargo strategic plan, scheduled for completion by the end of 2007. 
In addition to developing a strategic plan, a risk management framework in the homeland security context should include risk assessments, which typically involve three key elements—threats, vulnerabilities, and criticality or consequence (for more information on our risk management framework, see app. IV). Information from these three assessments provides input for setting priorities, evaluating alternatives, allocating resources, and monitoring security initiatives. TSA has completed an assessment of air cargo threats, but has not assessed air cargo vulnerabilities or critical assets. In September 2005, TSA’s Transportation Security Intelligence Service (TSIS) completed an overall threat assessment for air cargo, which identified general and specific threats related to both domestic and inbound air cargo. According to TSA, the primary threats to inbound air cargo focus on the introduction of an explosive device in cargo loaded on a passenger aircraft, and the hijacking of an all-cargo aircraft resulting in its use as a weapon to inflict mass destruction. As stated previously, TSA, CBP, and industry stakeholders have also identified the introduction and transport of a WMD or its component parts as a potential threat. TSA has characterized the threats to inbound air cargo as high and has identified air cargo as a primary aviation target for terrorists in the short term. However, TSA has not evaluated the relative security risk presented by inbound air cargo compared to other areas of aviation security, such as passengers and checked baggage. While TSA has acknowledged that the vulnerabilities to inbound air cargo would likely be similar to those of domestic air cargo, TSA has not conducted a vulnerability assessment, nor has it identified vulnerabilities specific to inbound air cargo. 
TSA officials stated that the agency is first planning to conduct an assessment of domestic air cargo vulnerabilities before initiating an assessment of inbound air cargo vulnerabilities. TSA does not plan to complete its assessment of domestic air cargo vulnerabilities until late in 2007, thus potentially delaying the start of an assessment of the inbound air cargo vulnerabilities until 2008. According to TSA officials, limited resources and competing priorities have delayed agency efforts to conduct an assessment of inbound air cargo security vulnerabilities. Nevertheless, TSA officials acknowledge that vulnerabilities to inbound air cargo exist and that these vulnerabilities are in some cases similar to those facing the domestic air cargo supply chain. TSA officials stated that conducting vulnerability assessments for inbound air cargo will be difficult because these assessments require an understanding of the inbound air cargo supply chain, and while the agency has some information on the supply chains of several foreign countries, it does not have access to that information for many others. Although agency officials reported that they have taken initial steps toward developing a methodology for assessing inbound air cargo security vulnerabilities, they have not established a time frame for completing the methodology or determined when the vulnerability assessments will be conducted. TSA officials acknowledged that conducting assessments to identify vulnerabilities associated with inbound air cargo, and analyzing the results of such assessments, could help to strengthen the agency’s efforts to secure inbound air cargo by providing information that could be used to develop measures to address identified vulnerabilities. 
Air cargo industry stakeholders we spoke with, including those representing domestic and foreign air carriers, agreed that TSA-led vulnerability assessments could help to identify air cargo security weaknesses and develop measures to mitigate these weaknesses. TSA also has not developed a methodology or schedule for completing an assessment to identify those inbound air cargo assets deemed most critical to protect, or whose destruction would cause the most severe damage to the United States. TSA officials stated that inbound air cargo assets mirror domestic air cargo assets, and could include workers, facilities, and aircraft. According to TSA, factors that could be used to define critical inbound air cargo assets include the number of fatalities resulting from a terrorist attack on a domestic or foreign cargo facility or aircraft; the economic or political importance of the asset; and consequences that an attack would have on the public’s confidence in the U.S. government’s ability to maintain order, among other things. According to TSA officials, the agency will conduct an assessment of critical inbound air cargo assets once it has completed its vulnerability and criticality assessments for domestic air cargo expected in 2007. The need for an assessment of critical transportation infrastructure, which could include inbound air cargo assets, has been identified by various sources, including DHS’s NIPP and National Strategy for Transportation Security, and a number of Presidential Directives. The 9/11 Commission also recommended that the U.S. government identify and evaluate the transportation assets that need to be protected, set risk-based priorities for defending them, select the most practical and cost-effective ways of doing so, and develop a plan, budget, and funding to implement the effort. 
TSA officials we spoke with acknowledged that such assessments could better enable the agency to prioritize its efforts by focusing on high-priority or high-value inbound air cargo assets, and by targeting resources to address the most critical inbound air cargo security risks. Moreover, TSA officials agreed that analyzing the results of a criticality assessment could provide the basis for taking immediate protective actions depending on the threat environment, and guiding future agency decisions related to securing the inbound air cargo transportation system. In May 2006, TSA issued a final rule that revised some of the requirements air carriers need to follow to ensure air cargo security. While TSA’s air cargo security rule is focused primarily on domestic air cargo, it also includes more stringent security requirements for passenger and all-cargo carriers transporting cargo into the United States. For example, TSA created a new mandatory security regime for domestic and foreign all-cargo air carrier operations. The final rule also acknowledges that TSA amended its security directives and programs to triple the percentage of cargo inspected on domestic and foreign passenger aircraft. TSA currently requires foreign and domestic all-cargo carriers to inspect a different percentage of nonexempt items prior to the cargo’s loading. While the air cargo security rule establishes general requirements air carriers must follow to secure inbound air cargo, TSA is currently drafting and revising security programs to incorporate applicable elements of the rule and with which air carriers will need to comply. These security programs will address inbound, outbound, and domestic air cargo operations. TSA regulations require that each air carrier, foreign or domestic, adopt a security program that incorporates applicable security requirements and that is approved by TSA.
Once TSA finalizes revisions to the security programs—which for domestic passenger air carriers is known as the Aircraft Operator Standard Security Program (AOSSP) and for foreign passenger air carriers is known as the Model Security Program (MSP)—TSA will require air carriers to amend their security programs to reflect TSA’s new requirements. TSA also drafted new security programs for domestic all-cargo carriers, referred to as the Full All-Cargo Aircraft Operator Standard Security Program (FACAOSSP), and for foreign all-cargo carriers, referred to as the All-Cargo International Security Program (ACISP). As of January 2007, TSA had yet to issue the final security programs. Air carriers will be required to be in full compliance with the revised and new security programs on a date to be established by the agency. However, TSA officials could not provide a time frame for when these programs would be finalized, nor has the date that air carriers will be required to be in compliance with the new and revised security programs been announced. After TSA issued its final air cargo security rule and released its draft security programs for comment, the agency held eight listening sessions in five cities to provide industry an opportunity to share its views on the proposed requirements before the final security programs are issued. At these listening sessions, some air carriers were pleased that TSA had taken action to strengthen air cargo security. Other air carriers, however, expressed concerns regarding the cost and feasibility of implementing TSA’s air cargo security requirements contained in the agency’s draft security programs. Air carriers present at these listening sessions also stated that given the operational changes they would need to make to implement TSA’s new air cargo security requirements, TSA should provide air carriers sufficient time to fully comply with the new and revised security programs. 
Although passenger air carriers expressed concern regarding the implementation of measures contained in the final rule and draft security programs, most of their comments relate to domestic air cargo security. Domestic and foreign all-cargo carriers cited several challenges related to TSA’s draft security programs for all-cargo carriers. These included new requirements for inspecting 100 percent of certain nonexempt inbound air cargo viewed as unnecessary, burdensome to implement, and costly; proposed revisions to existing inspection exemptions based on weight and packaging viewed as negatively affecting delivery of specific cargo shipments; application of new inspection and other requirements viewed as not consistent with identified threats to the air cargo industry; difficulty determining which TSA requirements apply to all-cargo carriers versus which apply to cargo transferred from an all-cargo aircraft to a passenger aircraft; and a proposed requirement to train carrier personnel to screen individuals and their property transported on an all-cargo flight viewed as unwarranted because very few individuals other than crew members fly on these aircraft. Among other things, the draft security programs for foreign and domestic passenger carriers would require the physical inspection of air cargo shipments, including manual searches and the use of technology, in addition to other methods currently in use. The primary concern expressed by all-cargo carriers about the draft security programs focuses on air cargo inspection requirements. Specifically, some all-cargo carriers did not understand TSA’s rationale for requiring them to inspect 100 percent of certain types of nonexempt cargo and noted that this would require them to inspect three times more cargo than passenger carriers are required to inspect. According to some all-cargo carriers, TSA has not adequately explained any additional risk to all-cargo carriers that would justify the new inspection requirements.
TSA officials stated that the agency will review the comments submitted by industry stakeholders regarding the new and revised security programs prior to issuing the final security programs. In our October 2005 report, we noted that TSA’s inspection requirements allowed carriers to exempt certain types of air cargo from inspection. These exemptions may leave the air cargo transportation system vulnerable to terrorist attack. We reported that a terrorist could place an explosive device in an exempt piece of cargo, which would not be detected prior to its loading onto aircraft because such cargo is not subject to inspection. We recommended that TSA assess the rationale for the exemptions, determine whether these exemptions pose vulnerabilities, and determine whether adjustments were needed. According to TSA officials, the agency originally chose to exempt certain cargo from the inspection requirements because it did not view the exempted cargo as posing a significant security risk and because the time required to inspect certain cargo could adversely affect the flow of commerce. TSA recognized, however, that some of the inspection exemptions could pose a potential vulnerability, and convened an internal cargo policy working group in February 2006 to examine air cargo policies and regulations that apply to inbound, outbound, and domestic air cargo, including inspection exemptions, to identify requirements that may allow for unacceptable security gaps. In March 2006, the working group made several recommendations to TSA related to the inspection exemptions for cargo transported on passenger aircraft. The working group’s recommendations included more stringent inspection requirements for passenger carriers. In October 2006, TSA issued a security directive and emergency amendment to domestic and foreign passenger air carriers operating within and from the United States that implemented elements of the recommendations of the internal working group. 
However, these new requirements do not cover all air carriers. In addition to the actions TSA took to address the working group’s recommendations, the agency is also considering limiting some of the inspection exemptions for all-cargo carriers, and has drafted security programs for foreign and domestic all-cargo carriers aimed at strengthening the security of inbound, outbound, and domestic air cargo. The draft programs for all-cargo carriers would require all-cargo carriers to inspect 100 percent of certain nonexempt air cargo. TSA officials stated that prior to issuing the final security programs, the agency will consider comments by all-cargo carriers on this proposed requirement. Under TSA’s revisions to the inspection exemptions for passenger air carriers transporting cargo from and within the United States, and TSA’s proposed changes to the inspection exemptions contained in the draft security programs for all-cargo carriers, certain types of air cargo will remain exempt from inspection. These remaining exemptions for both all-cargo and passenger air carriers transporting cargo into the United States continue to represent potential vulnerabilities to the air cargo transportation system. According to TSA officials, the agency has not established a time frame for completing its assessment of whether existing inspection exemptions pose an unacceptable security vulnerability. Some all-cargo carriers expressed concern over TSA’s proposal to eliminate the inspection exemption for certain types of cargo, and recommended that this proposal be reconsidered. TSA officials stated that the proposed revisions to the inspection requirements are aimed at increasing the overall security of air cargo transported on all-cargo aircraft. According to TSA officials, the agency is still evaluating industry’s comments to the proposed security programs, including those related to removing the inspection exemption for certain types of cargo transported on all-cargo carriers. 
TSA officials noted that the agency is also holding discussions with the air cargo industry to determine whether or not the current inspection exemptions leave the air cargo transportation system vulnerable to attack and what impact further revisions to the inspection exemptions would have on air carriers’ operations. According to TSA officials, while ongoing discussions with industry are focused on the domestic air cargo transportation system, any decisions made as a result of these discussions could affect inbound air cargo. TSA officials added that while industry stakeholder concerns are considered, decisions regarding what requirements will be issued will be based on the agency’s assessment of air cargo risks and security needs. TSA currently inspects domestic and foreign passenger air carriers transporting cargo into the United States to assess their compliance with TSA inbound air cargo security requirements. The agency, however, does not perform compliance inspections of all air carriers transporting cargo into the United States. Between July 2003 and February 2006, TSA conducted about 1,000 inspections of domestic and foreign passenger air carriers that included a review of air cargo security procedures. TSA’s inbound air cargo security inspections differ from its domestic air cargo security inspections in that the agency does not have an inspection plan that focuses solely on air cargo security regulations. Instead, TSA inspectors evaluate inbound cargo security procedures as a part of its international aviation security inspection program, which also includes reviews of areas such as aircraft, passenger, and baggage security. TSA’s five international field offices are responsible for scheduling and conducting the international air carrier inspections. 
TSA inspections may cover areas of cargo security such as cargo acceptance procedures, security testing and training, and whether foreign air carriers implement a cargo security plan consistent with TSA standards. According to TSA records, inspectors have found instances where passenger air carriers were not complying with inbound air cargo security procedures. For example, TSA found that some passenger air carriers were accepting cargo from unknown shippers, not physically screening cargo in accordance with TSA regulations, and failing to search empty cargo holds on an aircraft to prevent unauthorized access prior to loading and unloading. If not corrected, these problems could create vulnerabilities in the security of inbound air cargo. For information on the inspections TSA conducted, including inspection results from July 2003 to February 2006, see appendix II. TSA has a domestic aviation security inspection plan that, among other things, describes how the agency will ensure that air carriers that use domestic airports are complying with TSA security requirements, including those that apply to passengers, baggage, and air cargo. However, TSA has not developed a similar inspection plan for international aviation security. As a result, there is no inspection plan that would establish performance goals and measures that provide a clear picture of the intended objectives and performance of its inspections of passenger and all-cargo carriers that transport cargo into the United States. The Government Performance and Results Act of 1993 (GPRA), among other things, requires agencies to prepare an annual performance plan for their programs and directs executive agencies to articulate goals and strategies for achieving those goals. These plans should include performance goals and measures to determine the extent to which agencies are achieving their intended results. 
TSA’s annual domestic inspection plan describes how the agency will ensure air carrier compliance with federal aviation security requirements, including those related to air cargo security. The domestic inspection plan includes goals, such as the number of air cargo inspections of air carriers each inspector is to conduct for the year. TSA officials stated that the agency applied risk management principles that considered threat factors, local security issues, and input from law enforcement to target key vulnerabilities and critical assets to develop its domestic inspection plan goals. According to TSA, its plan for conducting domestic cargo inspections also takes into account how to use the agency’s limited inspection resources most effectively. Within the context of TSA’s international inspections program, an inspection plan should describe the agency’s approach for conducting compliance inspections of air carriers that transport cargo into the United States. This plan should include performance goals and measures to gauge air carriers’ compliance with inbound air cargo security requirements. Developing such indicators is also recommended by our standards for internal control in order for agencies to compare and analyze actual performance data against established goals. For example, we reported that successful organizations try to link performance goals and measures to the organization’s strategic goals and, to the extent possible, have performance goals that will show annual progress toward achieving their long-term strategic goals. With regard to TSA’s inspection plan, a goal could be to ensure that passenger and all-cargo air carriers transporting cargo to the United States are meeting an acceptable level of compliance with air cargo security requirements. Another goal could be to assess all-cargo carriers transporting inbound air cargo within a specified time frame based on the identified risk posed by these carriers to the United States. 
In addition, we reported that a successful agency focuses its goals on the results it expects the program to achieve. For example, TSA could measure the achievement of a compliance inspection goal by establishing the number and type of inspections the agency wants to conduct, and determining appropriate measures to gauge air carrier compliance with air cargo security requirements. TSA officials stated that the agency uses its foreign airport assessment schedule as its plan for determining where it will conduct compliance inspections of passenger air carriers during each fiscal year. Officials added that they select passenger air carriers for inspection based on factors such as the results of previous inspections, when the air carrier was last inspected, and the availability of inspection resources. While TSA’s schedule for completing airport assessments is an important step in focusing TSA’s international compliance inspection efforts, this schedule does not include goals or measures for evaluating passenger carrier compliance with TSA’s inbound air cargo security requirements. Further, the schedule does not include inspections of all-cargo carriers. Without an inspection plan, TSA may not be able to clearly show the relationship between its inspection efforts and its longer-term goals to secure inbound air cargo. Moreover, without establishing performance goals and measures, TSA is limited in its ability to assess the agency’s performance and the performance of the air carriers it regulates against expected outcomes. While we understand that TSA has competing demands and must address numerous areas of aviation security with limited resources, developing a risk-based plan would help the agency better plan for and articulate how it intends to address inbound air cargo security inspections using its limited resources. 
Further, developing goals and measures to benchmark its performance would demonstrate the effectiveness of its inbound air cargo security efforts and help TSA determine the extent to which the inspections are contributing to the agency’s overall aviation security goals and objectives. TSA is authorized by U.S. law to assess the effectiveness of security measures maintained at foreign airports that serve U.S. air carriers or from which foreign air carriers serve the United States, or that pose a high risk of introducing danger to international air travel. TSA staff located at five international field offices conduct these assessments. During an assessment, TSA inspectors are to evaluate the security policies and procedures in place at a foreign airport to determine whether procedures meet ICAO aviation security standards and recommended practices. TSA consults with foreign government officials to schedule these assessments. According to TSA officials, however, some foreign governments are sensitive to permitting the United States to come into their country and assess their airport security and may put conditions on the assessments, such as limiting the number of days that TSA has to conduct its assessments. TSA supplements its limited international inspection resources by using inspectors that are assigned to conduct aviation security inspections inside the United States to help international aviation security inspectors conduct foreign airport assessments. In October 2006, TSA implemented a risk-based methodology to prioritize which foreign airports to assess based on an analysis of the risk of an attack at an airport as determined by credible threat information, the vulnerability of the airport’s security based on previous airport assessments, and the number of flights coming to the United States from a foreign airport. 
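The risk-based prioritization described above combines three factors: credible threat information, vulnerability based on previous assessments, and the volume of U.S.-bound flights. As a rough sketch of how such a composite ranking might work, consider the following. The weights, scoring scale, and airport data are hypothetical assumptions for illustration only; they do not represent TSA's actual methodology.

```python
# Hypothetical sketch of a risk-based ranking of foreign airports.
# Factor weights and inputs are illustrative assumptions, not TSA's model.

def airport_risk_score(threat, vulnerability, us_flights, max_flights):
    """Combine three factors (each normalized to 0.0-1.0) into one score."""
    exposure = us_flights / max_flights  # normalize flight volume
    return 0.4 * threat + 0.4 * vulnerability + 0.2 * exposure

airports = [
    {"name": "Airport A", "threat": 0.8, "vulnerability": 0.6, "us_flights": 120},
    {"name": "Airport B", "threat": 0.3, "vulnerability": 0.9, "us_flights": 40},
    {"name": "Airport C", "threat": 0.5, "vulnerability": 0.2, "us_flights": 300},
]
max_flights = max(a["us_flights"] for a in airports)

# Rank airports from highest to lowest composite risk score.
ranked = sorted(
    airports,
    key=lambda a: airport_risk_score(a["threat"], a["vulnerability"],
                                     a["us_flights"], max_flights),
    reverse=True,
)
for a in ranked:
    score = airport_risk_score(a["threat"], a["vulnerability"],
                               a["us_flights"], max_flights)
    print(f'{a["name"]}: {score:.2f}')
```

Under this sketch, an airport with moderate threat but very high flight volume can outrank one with higher vulnerability, which is why the relative weights themselves would be a central policy choice in any real methodology.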
TSA officials stated that this approach will allow the agency to focus its limited resources on airports that pose the most significant risk to the United States and aviation security. TSA officials stated that the agency has not performed assessments of all foreign airports with service to the United States, in part because of political sensitivities associated with foreign airport assessments and because limited international oversight resources may affect whether TSA assesses additional airports. Therefore, TSA cannot determine whether cargo transported from foreign airports at which it has not performed an airport assessment poses a security risk. To prevent WMD and other elements of terrorism from unlawfully entering the United States, CBP uses its automated targeting system, referred to as ATS, and other information to identify cargo that may pose a relatively high security risk, so it can undergo inspection once the cargo arrives in the United States. In July 2006, CBP began using ATS to target inbound air cargo on passenger and all-cargo aircraft that may pose a security risk. As discussed previously, ATS uses weighted rules or criteria that assign a risk score to each arriving shipment based on a variety of factors. This includes the submission of cargo manifest information required by CBP either at an aircraft’s time of departure for the United States or no later than 4 hours prior to arrival, as specified in regulation. Inbound air cargo transported by passenger and all-cargo air carriers that is targeted for security reasons by ATS is inspected by CBP personnel stationed at airports in the United States. CBP officials stated that the extent to which a cargo shipment is inspected depends on the risk score it receives, as well as the type of commodity that is shipped. 
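The weighted-rule scoring that ATS applies to arriving shipments can be sketched in generic terms. The rules, weights, manifest fields, and inspection threshold below are invented solely for illustration; they are not CBP's actual ATS criteria, which are sensitive and not publicly described.

```python
# Illustrative sketch of weighted-rule cargo scoring (not CBP's actual rules).
# Each rule inspects manifest data; matching rules add their weight to the score.

RULES = [
    # (description, predicate, weight) -- all values are hypothetical
    ("incomplete shipper address", lambda m: not m.get("shipper_address"), 30),
    ("vague commodity description",
     lambda m: m.get("commodity") in ("general cargo", ""), 25),
    ("late manifest filing", lambda m: m.get("hours_before_arrival", 0) < 4, 20),
]

INSPECTION_THRESHOLD = 40  # hypothetical cutoff for referral to inspection

def score_shipment(manifest):
    """Sum the weights of all rules triggered by this manifest."""
    return sum(weight for _, predicate, weight in RULES if predicate(manifest))

def needs_inspection(manifest):
    """Refer the shipment for inspection if its score meets the threshold."""
    return score_shipment(manifest) >= INSPECTION_THRESHOLD

shipment = {"shipper_address": "", "commodity": "general cargo",
            "hours_before_arrival": 12}
print(score_shipment(shipment), needs_inspection(shipment))
```

A design consequence of this kind of system, noted later in this report, is that the scores are only as good as the manifest data feeding the rules: vague or incomplete manifests degrade the targeting regardless of how well the rules are tuned.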
CBP’s targeting policy describes the roles and responsibilities of CBP personnel involved in targeting air cargo transported on passenger and all-cargo air carriers that may pose a security risk and inspecting such cargo once it enters the United States. CBP’s targeting policy also includes details on the risk scores given to shipments that require inspection by CBP personnel. The policy also describes what an inspection of high-risk air cargo should include, such as the use of X-rays; inspection with radiation detection technology, such as personal handheld radiation detectors; and physical inspection. CBP has also established performance goals related to its efforts to target and inspect air cargo transported into the United States on passenger and all-cargo aircraft. Specifically, these performance goals relate to (1) targeting, controlling, inspecting, and interdicting high-risk air cargo shipments that may pose a threat to the national security of the United States, including instruments of terror or any commodity with a link to terrorism, narcotics, and other contraband, and agriculture risks, and (2) the accountability and reconciliation of all identified high-risk air cargo shipments. To gauge its effectiveness in meeting these goals, CBP recently drafted performance measures in conjunction with its targeting policy. According to CBP, many of the measures are new and will first be tested at selected airports to assess their feasibility, utility, and relevancy. These performance measures include the number of shipments identified by CBP as having direct ties to terrorism, the number of shipments that have been identified for further examination based on an anomaly in a nonintrusive inspection, the number of shipments that CBP holds, and the type of inspection findings. CBP did not provide us with a time frame for when these performance measures would be fully implemented. 
Our previous reports identified challenges that CBP faced when targeting oceangoing cargo shipped in containers for inspection. Specifically, we reported that CBP did not have a comprehensive, integrated process for analyzing inspection results of oceangoing cargo and incorporating these results into its targeting system. We also identified limitations with the information CBP used to target oceangoing cargo, such as vague or incomplete cargo manifests. We concluded that without complete and accurate information on shipments, it was difficult for CBP’s targeting system to accurately assess the risk of shipments and to conduct thorough targeting. We also found that CBP did not yet have a system in place to report sufficient details of the results of security inspections nationwide that could allow management to analyze those inspections and systematically adjust its targeting system. We noted that without a more comprehensive feedback system, the effectiveness of CBP’s targeting system could be limited. CBP officials acknowledged that the problems identified with ATS’s effectiveness in targeting oceangoing cargo would also apply to CBP’s efforts to target inbound air cargo. For example, CBP uses cargo manifests as a data source to identify high-risk cargo shipments, but according to some air carrier representatives, the information contained in these manifests is not always complete or accurate. CBP’s new effort to target and inspect inbound air cargo transported on passenger carriers that may pose a security risk provides CBP an opportunity to strengthen its targeting activities by addressing the issues with its targeting system that we previously identified. DHS’s strategy for addressing the threat of nuclear and radiological terrorism includes deploying radiation detection equipment at U.S. ports of entry, including airports. 
CBP plans to deploy radiation portal monitors at international airports by September 2009 in order to inspect 100 percent of inbound cargo for radiation. We have previously reported that currently deployed radiation portal monitors have limitations and that CBP is behind schedule in deploying radiation portal monitors at U.S. ports of entry, including airports. Specifically, we reported that the portal monitors are limited by the type of radioactive materials they are able to detect and they cannot differentiate naturally occurring radiological material from radiological threat material. We also reported that meeting DHS’s goal to deploy over 3,000 radiation portal monitors at U.S. ports of entry, including U.S. airports, by September 2009 was unlikely. As of December 2005, CBP had deployed 57 radiation portal monitors at U.S. facilities that receive international mail and at express consignment courier facilities in the United States, but had not yet deployed monitors at U.S. airports that receive inbound air cargo. CBP officials cited a lack of resources as the primary reason for not being able to purchase and deploy more monitors, including those at U.S. international airports. Until CBP fully deploys radiation portal monitors at international airports that receive inbound air cargo, CBP’s efforts to effectively inspect air cargo for radiological weapons, or the materials to build such a weapon, once it enters the United States may be limited. Another effort CBP has under way to strengthen the security of inbound air cargo is the voluntary C-TPAT program. This program is aimed at strengthening the international supply chain and U.S. border security. 
In exchange for implementing security policies and procedures, such as pre-employment screening, periodic background reviews, and employee training on security awareness and procedures, CBP provides C-TPAT participants, including foreign and domestic air carriers, with a reduced likelihood that their cargo will be inspected once it arrives in the United States. According to CBP, while there are more than 6,000 participants in the C-TPAT program, as of June 2006, only 31 of the approximately 200 foreign and domestic air carriers that transport cargo into the United States, and only 52 of the potentially thousands of freight forwarders that consolidate cargo departing by air for the United States, are participating in the program. Some foreign air carriers and foreign freight forwarders we spoke with stated that although CBP has made them aware of C-TPAT benefits, they have not applied for program membership because they do not see the value of participating in C-TPAT. Specifically, these air carriers and freight forwarders noted that participation in C-TPAT does not ensure quicker delivery times of their shipments and therefore does not benefit them. According to CBP officials, while C-TPAT offers participants a wide range of benefits, such as a reduced number of inspections and priority processing for inspections, CBP cannot compel air carriers to participate in the program because the C-TPAT program is voluntary. CBP has, however, identified expanding the number of C-TPAT participants, including air carriers, as one of its objectives in CBP’s fiscal years 2007-2011 Strategic Plan for Securing America’s Borders at Ports of Entry. At present, the requirements to become a member of C-TPAT are more broadly written for air carriers and freight forwarders than they are for importers, sea carriers, and highway carriers because CBP has not yet finalized specific security criteria for air carriers and freight forwarders participating in the program. 
According to CBP officials, they have drafted specific security criteria for air carriers. However, the finalization of the air carrier criteria has been placed on hold, as CBP is in the process of conducting a comprehensive review of its current air cargo strategy, including how CBP will incorporate C-TPAT. DHS has taken some steps to incorporate new technologies to strengthen the security of air cargo, which will affect both domestic and inbound air cargo. However, TSA and DHS’s Science and Technology (S&T) Directorate are in the early stages of evaluating available aviation security technologies to determine their applicability to the domestic air cargo environment. TSA and S&T are seeking to identify and develop technologies that can effectively inspect and secure air cargo with minimal impact on the flow of commerce. DHS officials added that once the department has determined which technologies it will approve for use on domestic air cargo, they will consider the use of these technologies for enhancing the security of inbound air cargo shipments. According to TSA officials, there is no single technology capable of efficiently and effectively inspecting all types of air cargo for the full range of potential terrorist threats, including explosives and WMDs. As such, TSA, together with S&T, is conducting a number of pilot programs that are testing a variety of different technologies that may be used separately or in combination to inspect and secure air cargo. These pilot programs seek to enhance the security of air cargo by improving the effectiveness of air cargo inspections through increased detection rates and reduced false alarm rates, while addressing the two primary threats to air cargo identified by TSA—hijackers on an all-cargo aircraft and explosives on passenger aircraft. DHS’s pilot programs are testing a number of currently employed technologies used in other areas of aviation and transportation security, as well as new technologies. 
These pilot programs include:
- an air cargo explosives detection pilot program implemented at three airports, testing the use of explosive detection systems, explosive trace detectors, standard X-ray machines, canine teams, technologies that can locate a stowaway through detection of a heartbeat or increased carbon dioxide levels in cargo, and manual inspections of air cargo;
- an explosive detection system (EDS) pilot program, which is testing the use of computer-aided tomography to compare the densities of objects to locate explosives in air cargo and to determine the long-term feasibility of using EDS equipment as a total screening process for break bulk air cargo;
- an air cargo security seals pilot, which is exploring the viability of potential security countermeasures, such as tamper-evident security seals, for use with certain classifications of exempt cargo;
- the use of hardened unit loading devices, which are containers made of blast-resistant materials that could withstand an explosion on board the aircraft; and
- the use of pulsed fast neutron analysis (PFNA), which allows for the identification of the chemical signatures of contraband, explosives, and other threat objects (see appendix V for more detailed information on DHS’s and TSA’s air cargo security pilot tests).
TSA anticipates completing its pilot tests by 2008, but has not yet established time frames for when it might implement these methods or technologies for the inbound air cargo system. As noted, some of the technologies being pilot-tested are currently employed or certified for use in other areas of aviation security, to include air cargo. According to DHS and TSA officials, further testing and analysis will be necessary to make determinations about the capabilities and costs of these technologies when employed for inspecting inbound air cargo at foreign locations. 
Pursuant to Homeland Security Presidential Directive 7, TSA is responsible for coordinating with relevant federal agencies, such as CBP, to secure the nation’s transportation sector, including the air cargo system. TSA and CBP have taken a number of steps to coordinate their respective efforts to safeguard air cargo transported into the United States. For example, CBP shared its experience in targeting international cargo shipments with TSA to help the agency develop a system to target elevated-risk domestic air cargo shipments for inspection. Moreover, in 2003, interagency working groups were established to share information on TSA’s technology development programs and CBP’s air cargo targeting activities, among other things. In addition, TSA and CBP officials at the three U.S. airports we visited told us that both agencies discuss aviation security issues, including inbound air cargo, during weekly or monthly meetings with airport representatives and other aviation industry stakeholders. These officials also stated that TSA and CBP staff located at U.S. airports participate in operational planning and compliance inspection activities, and that these joint activities may include inbound air cargo security issues. While these collaborative efforts are important, the two agencies do not have a systematic process in place to ensure that they are communicating information on air cargo security programs and requirements, such as the results of compliance oversight and targeting activities that could be used to enhance the security of inbound air cargo. Each agency collects information that the other could use. For example, if TSA’s compliance inspection results indicated that certain air carriers were in violation of TSA air cargo inspection requirements, CBP could use this information to assess the risk of inbound air cargo shipments from these particular air carriers. 
Moreover, if air carrier inspections revealed routine problems with certain types of shipments or certain shippers, CBP could use this information to apply greater scrutiny to those types of shipments or shippers. Likewise, if TSA’s foreign airport assessments identify airports that are not meeting international security standards, CBP could use this information to improve its inbound air cargo targeting efforts. TSA also requires air carriers transporting cargo into the United States to randomly inspect a certain percentage of inbound cargo and compile information on these inspections. These inspection results could indicate which shipments were inspected, the outcome of those inspections, and the location at which the inspections took place. Similarly, CBP collects information that could be useful to TSA’s efforts to secure inbound air cargo. For example, information gathered from CBP’s inbound air cargo targeting and inspection activities could be used by TSA to help focus its compliance oversight efforts on those air carriers whose shipments have been identified by CBP as posing an elevated security risk. In addition, the results of CBP officers’ inspection of inbound air cargo could be used by TSA to make risk-based decisions regarding the types of cargo air carriers should be required to inspect, based on its contents and points of origin, prior to its departure to the United States. Without a systematic process to communicate relevant air cargo security information, TSA and CBP are limited in their ability to most effectively secure inbound air cargo. TSA and CBP officials agreed that a process to improve information sharing could provide opportunities for enhancing their respective efforts to secure inbound air cargo. 
Specifically, CBP officials stated that information on the results of TSA’s compliance inspections of air carriers and assessments of foreign airport security, as well as the results of air carrier inspections of air cargo prior to its transport to the United States, could potentially help CBP in targeting high-risk inbound air cargo shipments for inspection upon its arrival in the United States. TSA officials also stated that the results of CBP’s inbound air cargo targeting and inspection activities could be used to potentially strengthen existing TSA air cargo security requirements. Although both agencies agree that sharing relevant air cargo information could help to more effectively secure inbound air cargo, neither TSA nor CBP has plans to establish a process to share information on the other’s air cargo security programs and requirements and the results of compliance oversight and targeting activities that could be used to enhance the security of inbound air cargo. While some of the security practices employed by foreign governments that regulate airports with high volumes of cargo and domestic and foreign air carriers that transport large volumes of cargo are similar to those required by TSA, we identified some security practices that are currently not used by TSA that could have potential for strengthening the security of inbound and domestic air cargo supply chains. Although TSA has initiated a review of select countries’ air cargo security practices, the agency has not systematically compiled and analyzed information on actions taken by foreign countries and foreign and domestic air carriers to determine whether the benefits that these practices could potentially have in strengthening the security of the U.S. and inbound air cargo supply chain are worth the cost. 
In addition, DHS has begun working with foreign governments to develop uniform air cargo security standards and to mutually recognize each other’s security standards, referred to as harmonization. However, challenges to harmonizing security practices may limit the overall impact of TSA’s efforts. TSA, foreign governments, and foreign and domestic industry stakeholders employ some similar air cargo security practices, such as inspecting a specific percentage of air cargo or the use of specific technologies to inspect air cargo. However, 18 of the 22 industry stakeholders and 9 of the 11 countries we compiled information on reported that they have implemented security practices that differ in some way from those required by TSA to ensure the security of air cargo they transport both within their own countries and into the United States. Some of these practices could potentially be used to mitigate terrorist threats and strengthen TSA efforts to secure inbound air cargo when employed in conjunction with current TSA security practices. While we observed a range of security practices used by foreign countries, we identified four categories of security practices implemented by foreign governments and foreign and domestic air carriers that could potentially enhance the agencies’ efforts to secure air cargo. These practices include (1) the use of air cargo inspection technologies and methods, (2) the percentage of air cargo inspected, (3) physical security and access control methods for air cargo facilities, and (4) procedures for validating known shippers. We focused on these practices based on input from air cargo industry stakeholders. We did not compare the effectiveness or cost of foreign practices with current TSA requirements and practices. 
Rather, we determined whether the use of these security practices differed from existing TSA efforts to secure domestic and inbound air cargo and could have the potential to augment the department’s current efforts to secure domestic and inbound air cargo. For additional information on actions taken by domestic and foreign air carriers with operations overseas and air cargo industry stakeholders to secure air cargo, see the table in appendix VI. Additional information about the actions taken by foreign governments to secure air cargo is included in the table in appendix VII. Three of the 17 air carriers and 1 of the 7 countries we visited require the use of large X-ray machines to inspect entire pallets of cargo transported on passenger aircraft. These machines allow for cargo on pallets to undergo X-ray inspection without requiring the pallet to be broken down and reconfigured. Government officials from the country that uses large X-ray machines stated that this technology allows for the expedited inspection of high volumes of large cargo items, without impeding the flow of commerce. CBP also uses this technology to inspect inbound air cargo once it enters the United States. While DHS’s S&T and TSA have recently begun to research large X-ray technology, TSA officials stated that the agency has not established time frames for developing and testing X-ray technology capable of inspecting large pallets of cargo transported domestically or at a foreign location prior to its transport to the United States. Without further consideration of the use of large X-ray technology, which may have been enhanced over the past 8 years, TSA may be limited in its ability to make such determinations regarding its effectiveness in the post-September 11 air cargo environment. 
In addition, three domestic all-cargo carriers with operations overseas have independently chosen to employ radiation detection technologies to inspect air cargo for potential WMD and other radiological items prior to the cargo being transported on an all-cargo aircraft. Specifically, one all-cargo air carrier determined that the introduction of a WMD onto aircraft poses a significant threat. As a result, this carrier inspects cargo shipments using radiation detection portals and handheld radiation detectors. According to TSA officials, the agency does not currently require air carriers to conduct inspections of air cargo to detect WMD prior to its transport into the United States because the agency considers mitigating the threat of WMD to be the responsibility of CBP. Further, two European countries are currently using canines in a different manner than TSA to inspect air cargo for explosives. Specifically, these countries are using the Remote Air Sampling for Canine Olfaction (RASCO) technique, which involves the use of highly trained dogs to sniff air samples collected from air cargo or trucks through a specially designed filter. The dogs sniff a series of air samples to determine whether or not there is a trace of explosives and indicate a positive detection by sitting beside the sample. According to foreign government officials representing two of the countries that use this technique, tests to determine the effectiveness of this practice have shown that RASCO has a very high rate of effectiveness in detecting traces of explosives in cargo. According to foreign government officials, this inspection method can be used on cargo that is difficult to inspect using other methods, due to size, density, or clutter, and does not require the breakdown of large cargo pallets. Further, officials stated that the dogs used in RASCO do not tire as easily as dogs involved in searching cargo warehouses, and can therefore be used for a longer period of time. 
Both TSA and CBP have certified canine teams for use in detecting explosives in baggage and currently use dogs for air cargo inspection. These canine teams are currently used to search narrow and wide-body aircraft, vehicles, terminals, warehouses, and luggage in the airport environment. According to TSA officials, while the results of previous agency tests of RASCO raised questions about its effectiveness, they continue to work with their international counterparts to obtain information on the feasibility of using RASCO to inspect air cargo. TSA officials stated that the agency has not yet determined whether RASCO is sufficiently effective at finding explosives in quantities that could cause catastrophic damage to an aircraft and whether this technique will be approved for use in the United States. The majority of the countries we visited and the majority of air carriers we spoke with have taken several actions to increase the percentage of air cargo that is inspected, as well as to use threat information to target certain cargo for inspection prior to transport. For example, 6 of the 17 foreign and domestic air carriers we met with are either required by their host government or have independently chosen to inspect a higher percentage of air cargo shipments, with X-ray technology or other inspection methods, than is currently required by TSA. Air carrier officials stated that the decision to inspect a higher percentage of air cargo is based on several considerations, including concerns about the terrorist threat to passenger aircraft, as well as concerns regarding the security of the air cargo supply chain in their host country. In addition, in 4 of the 7 countries we visited, air cargo inspections are conducted earlier in the supply chain prior to the cargo’s consolidation and delivery to airports.
Specifically, the governments in these 4 countries permit inspections to be conducted by regulated agents who meet certain government requirements, such as maintaining an approved security program. Foreign government officials we spoke with stated that this practice contributed to the security of air cargo because it increased the total amount of cargo inspected and facilitated the inspection of cargo earlier in the supply chain. Finally, the majority of air carriers we spoke with have independently chosen to use available threat information to determine how much scrutiny and what methods to apply to certain cargo prior to its transport on aircraft. Specifically, 9 of the 17 passenger and all-cargo air carriers we interviewed target their air cargo inspection efforts based on analyses of available threat information, among other factors that could affect air cargo security. TSA recently increased the amount of cargo air carriers are required to inspect and initiated efforts to require freight forwarders to inspect domestic air cargo earlier in the supply chain. The agency, however, has not evaluated the procedures foreign countries and air carriers use to inspect a higher percentage of air cargo without affecting the flow of commerce to determine whether the cost of using these procedures would be worth the potential benefits of enhanced security. Moreover, unlike the majority of foreign and domestic air carriers we interviewed, TSA does not adjust the percentage of air cargo air carriers are required to inspect based on threat information related to specific locations. While TSA requires passenger air carriers to implement additional security requirements for inspecting checked baggage and passengers for flights departing from high-risk locations, the agency has not implemented additional requirements for air cargo departing from these same locations. 
Agency officials stated that new air cargo security requirements, contained in the agency’s air cargo security rule, are adequate to safeguard all air cargo transported into the United States, including cargo transported from high-risk locations. TSA officials added that the agency would consider implementing additional air cargo security requirements for high-risk locations if intelligence information became available that identified air cargo transported from these locations as posing a high risk to the United States. CBP, however, currently considers information on high-risk locations to identify cargo that should undergo inspection upon its arrival in the United States. In October 2006, TSA issued an emergency amendment requiring indirect air carriers, under certain conditions, to inspect a certain percentage of air cargo prior to its consolidation. While TSA’s efforts to require freight forwarders to inspect domestic air cargo earlier in the supply chain have the potential for enhancing domestic air cargo security, we have previously identified problems with TSA’s oversight of freight forwarders to ensure they are complying with air cargo security regulations. In addition to inspecting air cargo prior to its transport on aircraft, we identified additional security practices implemented by air carriers and foreign governments to physically secure air cargo and air cargo facilities. For example, two foreign governments require that all air cargo be stored in a secured terminal facility located within a restricted area of the airport to prevent tampering with the cargo prior to its loading onto an aircraft. At some airports with restricted areas, individuals accessing these areas must first undergo physical screening through the use of walk-through metal detectors or biometric identification systems. For instance, one all-cargo air carrier uses a biometric hand-scanning identification system to grant employees access to air cargo storage facilities.
In addition, 10 of the 17 air carriers we interviewed are subject to audits of the access controls at air cargo facilities to assess security vulnerabilities at those facilities. If an audit results in a breach of security, all cargo contained within the breached facility must be inspected before it is permitted to be loaded onto a passenger or all-cargo aircraft. TSA acknowledged the importance of enhancing the security of air cargo and air cargo facilities, and included provisions in the agency’s air cargo security rule for applying or expanding the secure identification display area (SIDA) requirements at U.S. airports to include areas where cargo is loaded and unloaded. However, TSA has no plans to require additional air cargo access control measures. Two of the 7 countries we visited employ stringent programs for validating known shippers that differ from the program used in the United States. For example, 1 country we visited requires its known shippers, or those shippers that have met certain criteria and have an established shipping history, referred to as known consignors in the country, to be validated by government-approved contractors. Prior to implementing this requirement, the country’s consignor program allowed regulated agents and airlines to assess and validate their own consignors with whom they did business. However, according to government officials, the previous program was ineffective because it allowed for breaches in the security of the air cargo supply chain, such as the implementation of weak security programs by shippers and conflicts of interest among air carriers and their customers. We previously reported on the limitations of TSA’s current known shipper program, such as the relative ease of TSA’s requirements for becoming a known shipper. Under this foreign country’s new program, validations of known consignors are conducted by independent third parties that have been selected, trained, and accredited by the government.
The government maintains the authority to remove a validator from an approved list, accompany a validator on a site visit, or conduct unscheduled spot visits to known consignor sites. To become known in this particular country, the consignor can choose from a list of over 100 validators to schedule a validation inspection. The validation process is conducted using a checklist of security requirements that includes the physical security measures in place at the site, staff recruitment, personnel background checking and security checks, access control to the site, air cargo packing procedures, and storage of secure cargo, among other things. After the initial validation inspection, consignors must be reassessed every 12 months to retain their known status. During the first round of assessments conducted, 70 percent of existing known customers failed to become known consignors because of the stricter security requirements in place under the new scheme. Since the new validation program requires program participants to implement stricter security practices for securing air cargo before it is delivered to the air carrier, it helps to ensure that cargo coming from known consignors has been adequately safeguarded. While TSA’s air cargo security rule contains provisions for enhancing the agency’s known shipper program, such as making air carrier and indirect air carrier participation in the agency’s centralized database mandatory, it did not modify TSA’s current process for validating known shippers, which remains the responsibility of indirect air carriers and air carriers. Accordingly, passenger, all-cargo, and indirect air carriers will continue to be responsible for entering shipper information into TSA’s central known shipper database, which may allow for potential conflicts of interest because air carriers who conduct business with shippers will also continue to have the authority to validate these same shipping customers. 
TSA officials stated that the agency will continue to rely on its mandatory centralized known shipper database that allows air carriers and indirect air carriers to validate shippers as known until it develops a system that would enable TSA to validate known shippers. According to TSA officials, however, the agency is not considering implementing a program that relies on an independent third party to validate shippers because high administrative costs, combined with the large number of shippers located within the United States, may make it difficult to implement a third-party validation program. Foreign government officials stated that using third parties to validate shippers has enhanced the countries’ air cargo security by reducing the number of shippers that are considered known and by introducing more security controls at an earlier point in the supply chain. Although the implementation of a third-party validation program may be challenging in the United States, without further analysis of such a program, TSA may be missing an opportunity to determine the extent to which all or parts of a similar scheme could be incorporated into the agency’s current air cargo security practices. We previously reported that in order to identify innovative security practices that could help further mitigate terrorism-related risk to transportation sector assets—especially as part of a broader risk management approach discussed earlier—it is important to assess the feasibility as well as the costs and benefits of implementing security practices currently used by foreign countries. However, DHS has not taken systematic steps to compile or analyze information that could contribute to the security of both domestic and inbound air cargo. In response to a recommendation made by DHS’s Science and Technology Directorate, TSA has taken initial steps to learn more about foreign air cargo security technologies and practices that could be applied in the United States. 
For example, according to TSA officials, the agency collects information on the security measures implemented by countries from which air carriers transport air cargo into the United States. In addition, the United States has agreements with several countries that allow TSA to visit and compile information on their aviation security efforts, including those related to air cargo. Likewise, officials from these countries are allowed to visit the United States to learn about DHS’s aviation security measures. TSA officials acknowledge that further examination of how foreign air cargo security practices may be applied in the United States could yield opportunities to strengthen the department’s overall air cargo security program. While TSA has obtained some information on foreign air cargo security efforts, TSA officials acknowledged that the agency has not systematically compiled and analyzed information on foreign air cargo security practices to determine those, if any, that could be used to strengthen the agency’s efforts to secure air cargo. TSA officials stated that while some foreign air cargo security practices may hold promise for use in the United States, the agency and the air cargo industry face challenges in implementing some of these practices because the U.S. air cargo transportation system involves multiple stakeholders and is responsible for transporting large amounts of cargo on both passenger and all-cargo aircraft. While large amounts of air cargo are transported to and from U.S. airports on a daily basis, we identified air cargo security practices implemented at foreign airports that also process large volumes of air cargo shipments that may have application to securing domestic and inbound air cargo operations. For example, we observed the security practices at 8 foreign airports, 4 of which rank among the world’s 10 busiest cargo airports. 
In addition, some of the security practices we identified are being implemented by air carriers that transport large volumes of air cargo. Specifically, we spoke with air carrier officials representing 7 of the world’s 10 largest air cargo carriers. In addition to taking initial steps to collect information on foreign air cargo security practices, DHS has also begun efforts to work with foreign governments to develop uniform air cargo security standards and to mutually recognize each other’s air cargo security practices—referred to as harmonization. Harmonization has security as well as efficiency benefits, including better use of resources and more effective information sharing. However, working with foreign governments to achieve harmonization may be challenging because these efforts are voluntary. Additionally, many countries around the world may lack the resources or infrastructure needed to develop an air cargo security program as developed as that of the United States. One way TSA is working with foreign governments is by collaborating on the drafting of international air cargo security standards. For example, according to TSA officials, agency representatives worked with foreign counterparts to develop Amendment 11 to ICAO’s Annex 17, issued in June 2006, which sets forth new standards and recommended practices related to air cargo security. In addition, TSA is working with the European Union to develop a database containing information on shippers and freight forwarders that will be shared between the United States and European Union member states. As of January 2007, TSA was negotiating with the European Union on (1) how information in the databases will be shared, (2) what information will be shared, and (3) how the shared information will be used by each entity. Currently, the European Union database can transmit data to the TSA system as part of the development and testing of the European Union system. 
However, TSA’s system will not be able to transmit data to the European Union’s database until TSA’s new known shipper and indirect air carrier databases are online, which TSA expects to occur sometime in late 2007. CBP has also engaged in efforts to develop uniform air cargo security standards with select foreign countries. Specifically, CBP undertook a study with the Canadian Border Services Agency (CBSA) to identify similar air cargo security practices being carried out by CBP and CBSA and areas in need of improvement. The study made recommendations to enhance both agencies’ efforts to secure air cargo that included specific steps the agencies can take to harmonize security measures. For example, the study recommended that CBP and CBSA explore harmonizing air cargo targeting and inspection protocols, including the use of detection technology. The study also recommended that the two agencies share knowledge of emerging technologies. CBP’s fiscal years 2007-2011 Strategic Plan for Securing the Nation’s Borders at Ports of Entry recognizes the need to partner with foreign governments to share relevant information in an effort to improve cargo security, including cargo transported by air. According to foreign government and international air cargo industry representatives, the development of uniform air cargo security requirements and measures could provide security benefits by eliminating ineffective requirements and practices and focusing on automated or nonintrusive inspection technologies that could be universally employed to reduce the potential for human error. The cargo security mission of the International Air Transport Association, according to the association’s cargo security strategy 2006/2007, is to simplify cargo security by developing an integrated approach that involves all key supply chain stakeholder groups, and which is proportionate to the threat, effective, harmonized, and sustainable. 
The World Customs Organization’s Framework of Standards to Secure and Facilitate Global Trade has also called for aviation and customs security requirements to be harmonized into one integrated solution, to the extent possible. Foreign air carrier officials we spoke with also stated that developing uniform air cargo security standards related to performing background checks on air cargo workers, training air cargo workers, and controlling access to air cargo facilities would increase security levels in these areas. These officials added that uniform air cargo security requirements could facilitate industry compliance with security requirements. Further, foreign air carrier representatives and foreign government officials discussed the need to harmonize the terms used in the air cargo environment. For example, TSA uses the term “indirect air carriers” when referring to certified freight forwarders, whereas most other countries refer to these entities as “regulated agents.” In addition, TSA uses the term “known shipper” to refer to certified shippers, while most other nations use the term “known consignor” when referring to these same entities. Harmonized terminology would provide air cargo industry stakeholders clarification on which security requirements apply to them. Foreign and U.S. air cargo industry representatives and foreign government officials added that there is currently too much variation among countries regarding what type of air cargo must be inspected, what types of cargo are exempt from inspection, which entities should conduct the inspections, and what methods or technologies should be used to inspect air cargo. These representatives and officials stated that a harmonized inspection process would reduce duplicative efforts to inspect cargo shipments in order to meet different countries’ security requirements. 
According to industry officials, having to implement duplicative security requirements, particularly those related to air cargo inspections, can impede the flow of commerce, expose air cargo shipments to theft, and damage high-value items. For example, representatives from a U.S. air carrier stated that in one Asian country, government employees inspect 100 percent of outbound air cargo transported on a passenger air carrier. However, to meet U.S. requirements, TSA requires passenger air carriers transporting air cargo into the United States to inspect a certain percentage of nonexempt cargo shipments, which would have already been inspected by the foreign government. Air carrier representatives stated that meeting TSA inspection requirements is problematic in certain foreign countries because air carriers are not permitted to re-inspect air cargo shipments that have already been inspected by foreign government employees and deemed secure. These conflicts and duplication of effort could be avoided through mutually acceptable uniform air cargo security standards developed jointly between the United States and foreign countries. However, we recognize that because foreign countries’ requirements are so varied, and the threats to certain foreign airports are less than to others, TSA would have to consider accepting other countries’ inspection requirements on a case-by-case basis to determine the viability of such an option. According to TSA officials, developing stronger uniform international standards would improve the security of inbound air cargo and assist TSA in performing its mission. For example, TSA officials stated that the harmonization of air cargo security standards would provide a level of security to those entities not currently regulated by the agency, such as foreign freight forwarders and shippers. 
TSA has taken additional steps to begin mutual recognition of foreign air cargo security requirements in an effort to enhance the security of inbound air cargo. For example, TSA officials stated that the agency approved amendments to air carriers’ security programs in November 2001 permitting those carriers operating out of the United Kingdom, France, Switzerland, Israel, and Australia to implement the air cargo security requirements of these foreign countries, in lieu of TSA’s. TSA officials stated that these five countries were selected based on agency officials’ recommendations and a review of the countries’ security programs to determine if country requirements and practices met or exceeded TSA requirements. In contrast, those air carriers operating out of a foreign country other than the five previously identified must implement their host government’s requirements in addition to TSA’s. Officials added that in order for these countries’ air cargo security programs to remain recognized by TSA, they must have met or exceeded TSA’s air cargo security requirements, including new requirements set forth in the air cargo security rule. TSA officials further stated that they do not currently have plans to review other countries’ air cargo security measures and that such reviews would be predicated on a host country’s request. In addition, air carriers may seek TSA’s approval of amendments to their security programs that would enable the air carrier to implement alternative air cargo security measures that satisfy TSA’s minimum security requirements while maintaining compliance with the security requirements of the host government. According to TSA officials, the agency will approve these alternative measures as long as TSA deems that they meet ICAO’s standards and TSA’s minimum requirements. For example, officials noted that some foreign governments allow cargo from unknown shippers to be transported on passenger aircraft after that cargo is inspected.
Although this measure differs from the requirements in place in the United States that do not permit cargo from unknown shippers to be transported on passenger aircraft, TSA officials stated that the ICAO standards are being met and air carriers operating out of such countries are permitted to transport cargo into the United States. Foreign government officials, embassy officials, and foreign industry members with whom we met also stated that to lessen the burden on airports and air carriers, TSA should consider accepting the results of ICAO or European assessments of airports with passenger air carrier service to the United States, and air carrier compliance inspections conducted by the European Union, in lieu of conducting its own assessments and inspections. According to foreign government officials, in addition to TSA air carrier inspections and foreign airport assessments, air carriers located at foreign locations and airports around the world are subject to inspections by ICAO, as well as their host country. The European Union has also recently begun to conduct its own assessments of the security of airports located within its member states. Officials from one country told us that TSA should consider accepting the results of European Union assessments in light of the progress the European Union has made in developing its oversight program. Foreign government officials also expressed concern over TSA’s inspections of foreign air carriers, saying that TSA lacks the authority under host government or international laws to assess foreign air carriers’ compliance with TSA’s security requirements that exceed ICAO’s standards. Notwithstanding this view, TSA is authorized under U.S. law to ensure that all air carriers, foreign and domestic, operating to, from, or within the United States maintain the security measures included in their TSA-approved security programs and any applicable security directives or emergency amendments issued by TSA.
Although TSA security requirements support the ICAO standards and recommended practices, TSA may subject air carriers operating to, from, or within the United States to any requirements necessary and assess compliance with such requirements, as the interests of aviation and national security dictate. TSA officials acknowledged that they have discussed the possibility of using European Union airport assessment results to either prioritize the frequency of TSA’s assessments or to conduct more focused TSA assessments at European Union airports. According to TSA officials, the agency may also be able to use host government or third-party assessments to determine the aviation security measures to focus on during TSA’s own airport assessments in foreign countries. TSA is also considering reducing the number of assessments conducted at airports that are known to have effective security measures in place and focusing inspector resources on airports that are known to have less effective security measures in place. In addition, TSA is considering having a TSA inspector shadow a European Union inspection team for 1 or 2 days to validate the results of European Union assessments. Another option would be for TSA and the European Union to leverage their resources by conducting joint airport assessments. According to a European Union official, however, member states recently met to discuss sharing European Union assessment results with TSA. Specifically, member states determined that until the European Union and TSA agree on how they will share sensitive security information with each other and how they will conduct joint assessments of each other’s airports, they will not share the results of European Union airport assessments with TSA. The European Union official further stated that member states will not share their European Union airport assessment results with TSA unless TSA reciprocates.
The official added that member states may share the results of airport assessments conducted by their own internal auditing entities with TSA, but it would be illegal for member states to share their European Union assessment results with TSA. TSA is also working closely with the European Union to develop mutually acceptable air cargo security measures. For example, in March 2005 a bilateral meeting on air cargo security was held between the European Union and the United States. An objective of this meeting was to share information on the air cargo security policies being developed by both, which, in turn, may encourage mutual acceptance. The development of the European Union/United States joint air cargo database was a focus of this meeting. The meeting also provided the European Union an opportunity to comment on TSA’s notice of proposed rule making on air cargo security before the rule was finalized. Despite DHS’s efforts to harmonize international air cargo security practices, a number of key obstacles, many of which are outside of DHS’s control, may impede their progress. For example, because international aviation organizations, such as ICAO, have limited enforcement authority, they can only encourage, but generally not require, countries to implement air cargo security standards or mutually accept other countries’ security measures. In addition, the implementation of uniform air cargo security standards may require the expenditure of limited resources. For example, according to European Union and air cargo industry officials, those countries with air cargo security programs that are less advanced than those of the European Union and the United States may not have the resources or infrastructure necessary to enhance their air cargo security programs. In addition, some foreign governments do not share DHS’s view regarding the threats and risk associated with air cargo. 
For example, CBP has identified the introduction of terrorist weapons, including a WMD, as the primary threat to cargo entering the United States. Government officials from one country we met with, however, stated that they do not view the introduction of a WMD as a significant threat to air cargo security. Officials from another country stated that, unlike DHS, they do not consider stowaways as a primary threat to air cargo, while an official from a third country noted that it does not differentiate between the threats to passenger air carriers and those to all-cargo carriers. In addition, while TSA prohibits cargo from unknown shippers from being transported on passenger aircraft, the European Union and one Asian country we obtained information from allow cargo from unknown shippers to be transported on passenger aircraft after the cargo is inspected. These countries also inspect 100 percent of cargo from unknown shippers that is transported on all-cargo aircraft, while TSA requires all-cargo air carriers to randomly inspect a portion of the air cargo they transport. These differing approaches to air cargo security may make the harmonization of inspection requirements difficult to achieve. Further, TSA faces legal challenges in mutually accepting the results of other entities’ airport assessments. According to TSA officials, the agency interprets its statutory mandate to conduct assessments of foreign airports to mean that TSA must physically observe security operations at a foreign airport. This interpretation, according to TSA, precludes TSA from relying solely on third-party or host government assessments. If the Secretary of DHS, on the basis of the results of a TSA assessment, determines that a foreign airport does not maintain and carry out effective security measures, the Secretary must take further action.
Such actions include, among others, notifying appropriate authorities of the foreign government of deficiencies identified, providing public notice that the airport does not maintain and carry out effective security measures, or suspending service between the United States and the airport if it is determined a condition exists that threatens the safety or security of the passengers, aircraft, or crew, and such action is in the public interest. TSA officials noted that unlike DHS, ICAO has limited enforcement capabilities. However, TSA officials stated that the agency is taking steps to further emphasize reciprocity with other governments by encouraging them to assess airports within the United States. Such an effort could help facilitate the agency’s foreign airport assessments and air carrier inspections. TSA officials also stated that although they are working with the European Union to develop a process to share airport assessment and inspection results, the agency currently does not have an agreement with either the European Union or ICAO to share assessment results. TSA officials added that even if they obtain access to these results, TSA is still legally required to conduct its own assessments of airports at which air carriers have operations into the United States and will continue with inspections of air carriers that transport cargo into the United States. Information on the results of other governments’ airport assessments and air carrier inspections could help TSA focus its oversight resources on those countries and carriers that may pose a greater risk to the United States. In addition, foreign government and embassy officials noted that it will be difficult to harmonize air cargo security standards and requirements until the international community develops an approach for sharing sensitive information, such as security requirements. 
Developing a process for sharing sensitive information could help the United States and other countries improve their understanding of each other’s security measures and identify overlapping or contradicting security requirements. While DHS has made significant strides in strengthening aviation security, it is still in the early stages of developing a comprehensive approach to ensuring inbound air cargo security. Until TSA and CBP take additional actions to assess the risks posed by inbound air cargo and implement appropriate risk-based security measures, U.S.-bound aircraft transporting cargo will continue to be vulnerable to terrorist attack. In October 2005, we recommended that TSA take a number of actions designed to strengthen the security of the nation’s domestic air cargo transportation system. Similar actions, if effectively implemented, could also strengthen the department’s overall efforts to enhance the security of inbound air cargo, both before the cargo has departed a foreign nation and once it has arrived in the United States. We are encouraged by TSA’s initial efforts to use a risk-based approach to guide its investment decisions related to inbound air cargo security while at the same time addressing other pressing aviation and transportation security priorities. However, risk management efforts should begin with a strategy that includes specific goals and objectives, which TSA has not yet identified. Likewise, TSA’s efforts to prioritize inbound air cargo assets and guide decisions about protecting them could be strengthened by establishing a methodology and time frames for completing risk assessments of inbound air cargo and determining how to use the results to target security programs and investments. 
Further, while TSA has drafted new requirements for securing inbound air cargo, without reexamining the rationale for existing inspection exemptions specific to air cargo transported into the United States on passenger aircraft and making any needed adjustments to these exemptions, there will continue to be a vulnerability that could be exploited by terrorists. Moreover, without developing an inspection plan that includes performance goals and measures to gauge air carrier compliance with air cargo security requirements, TSA cannot readily identify those air carriers that are achieving an acceptable level of compliance and focus the agency’s inspection resources on those air carriers with higher levels of noncompliance that may pose a greater risk. Coordination and communication between TSA and CBP are also important to ensuring that gaps do not exist in the security of inbound air cargo. Without effectively sharing information, TSA’s and CBP’s inbound air cargo security activities may be less efficient and effective. While TSA and CBP have separate missions within DHS, their responsibilities for the security of air cargo are complementary. A strategy that clearly defines TSA’s and CBP’s roles and responsibilities with regard to securing inbound air cargo could help ensure that all areas of inbound air cargo security are being addressed. TSA and CBP also lack a systematic process to share relevant air cargo security information, such as the results of air carrier compliance inspections and foreign airport assessments, that could enhance both agencies’ efforts to secure air cargo. Such a process could provide opportunities for enhancing TSA’s and CBP’s respective efforts to secure inbound air cargo. TSA’s efforts to coordinate with foreign governments and air cargo stakeholders are an important step toward developing enhanced and mutually agreeable international air cargo security standards. 
While TSA has taken steps to obtain information on foreign air cargo security practices, further examination of how these practices may be applied in the United States could yield opportunities to strengthen the department’s overall air cargo security program. Doing so could also enable the United States to leverage the experiences and knowledge of foreign governments and international air cargo industry stakeholders and help identify additional innovative practices to secure air cargo against a terrorist attack in this country. To help ensure that the Transportation Security Administration and Customs and Border Protection take a comprehensive approach to securing air cargo transported into the United States, in the restricted version of this report we recommended that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration and the Commissioner of U.S. Customs and Border Protection to take the following two actions: (1) Develop a risk-based strategy, either as part of the existing air cargo strategic plan or as a separate plan, to address inbound air cargo security, including specific goals and objectives for securing this area of aviation security. This strategy should clearly define TSA’s and CBP’s responsibilities for securing inbound air cargo, as well as how the agencies should coordinate their efforts to ensure that all relevant areas of inbound air cargo security are being addressed, particularly as they relate to mitigating the threat posed by weapons of mass destruction. 
(2) Develop a systematic process for sharing information between TSA and CBP that could be used to strengthen the department’s efforts to enhance the overall security of inbound air cargo, including, but not limited to, information on the results of TSA inspections of air carrier compliance with TSA inbound air cargo security requirements and TSA assessments of foreign airports’ compliance with international air cargo security standards. To help strengthen the Transportation Security Administration’s inbound air cargo security efforts, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration to take the following four actions: (3) establish a methodology and time frame for completing assessments of inbound air cargo vulnerabilities and critical assets, and use these assessments as a basis for prioritizing the actions necessary to enhance the security of inbound air cargo; (4) establish a time frame for completing the assessment of whether existing inspection exemptions for inbound air cargo pose an unacceptable vulnerability to the security of air cargo, and take steps, if necessary, to address identified vulnerabilities; (5) develop and implement an inspection plan that includes performance goals and measures to evaluate foreign and domestic air carrier compliance with inbound air cargo security requirements; and (6) in collaboration with foreign governments and the U.S. air cargo industry, systematically compile and analyze information on air cargo security practices used abroad to identify those that may strengthen the department’s overall air cargo security program, including assessing whether the benefits that these practices could provide in strengthening the security of the U.S. and inbound air cargo supply chain are cost-effective, without impeding the flow of commerce. We provided a draft of this report to DHS for review and comments. 
On April 19, 2007, we received written comments on the draft report, which are reproduced in full in appendix VIII. DHS generally concurred with the report and recommendations. With regard to our recommendation to develop a risk-based strategy to address inbound air cargo security which clearly defines TSA’s and CBP’s responsibilities for securing inbound air cargo, particularly as they relate to mitigating the threat posed by weapons of mass destruction, DHS stated that CBP is in the preliminary stages of developing its Air Cargo Security Strategic Plan. According to DHS, the draft plan includes goals and objectives, such as capturing accurate advance information to effectively screen air cargo shipments; accounting for and reconciling all high-risk air cargo shipments arriving from foreign destinations; developing and enhancing partnerships to strengthen air cargo security while continuing to facilitate the movement of legitimate trade; and controlling, inspecting and interdicting all air cargo that may pose a threat to national security of the United States. DHS also stated that CBP is coordinating with TSA in the refinement of CBP’s Air Cargo Security Strategic Plan. Current efforts include discussions with TSA management and the review of relevant information in the classified TSA air cargo threat assessment. DHS further stated that CBP plans to collaborate with TSA during the vetting stage of CBP’s Air Cargo Strategic Plan to ensure coordination of efforts and seamless implementation. Further, DHS stated that TSA plans to revise its existing Air Cargo Strategic Plan in fiscal year 2007, and will consider including a strategy for addressing inbound air cargo transported on passenger and all-cargo aircraft. DHS stated that TSA will identify and include specific goals and objectives for securing this area of aviation security and will work with CBP to share best practices in mitigating threats posed by weapons of mass destruction. 
While DHS has recognized the need for CBP and TSA to work together to address inbound air cargo security threats, DHS has not indicated whether the Air Cargo Strategic Plan CBP is developing or TSA’s revised Air Cargo Strategic Plan will provide a risk-based strategy for how the agencies will coordinate their respective efforts to ensure the security of air cargo transported into the United States, particularly as they relate to mitigating the threat posed by weapons of mass destruction. Taking such action would be necessary to fully address our recommendation. Concerning our recommendation to develop a systematic process for sharing information between TSA and CBP that could be used to strengthen the department’s efforts to enhance the overall security of inbound air cargo, DHS stated that CBP and TSA plan to meet monthly to continue working on ensuring air cargo security and to determine whether they can work more collaboratively to ensure air cargo security. DHS stated that these meetings will also focus on its air cargo security strategy, including proposed DHS definitions for the terms “screen,” “scan” and “inspection.” DHS also noted that TSA and CBP have previously collaborated on air cargo security initiatives and efforts through their ongoing participation in the Aviation Security Advisory Committee Air Cargo Working Group, and CBP has shared information on its Automated Targeting System with TSA staff who are developing a Freight Assessment System to target elevated risk domestic cargo. DHS further stated that TSA recognizes that CBP’s Customs-Trade Partnership Against Terrorism program may include some information that could help TSA in its efforts to strengthen the security requirements for individuals and businesses that ship air cargo domestically. 
While CBP’s and TSA’s efforts to collaborate on their air cargo security activities are worthwhile, it is also important that TSA and CBP develop a system to share information—such as the results of TSA inspections of air carrier compliance with TSA inbound air cargo security requirements and TSA assessments of foreign airports’ compliance with international air cargo security standards—that could be used to strengthen the department’s efforts to secure inbound air cargo. Ensuring that TSA and CBP incorporate systematic information sharing into their ongoing coordination efforts would more fully address our recommendation. Regarding our recommendation to establish a methodology and time frame for completing assessments of inbound air cargo vulnerabilities and critical assets, and use these assessments as a basis for prioritizing the actions necessary to enhance the security of inbound air cargo, TSA acknowledged that assessments of inbound air cargo vulnerabilities and critical assets can assist in the prioritization of programs and initiatives developed to enhance air cargo security. While TSA stated that it has taken steps to develop a methodology and a framework to complete vulnerability assessments of the domestic air cargo supply chain, TSA does not plan to begin work on assessments of vulnerabilities of the inbound air cargo supply chain until after the domestic assessments are completed. TSA stated that it will pursue partnerships with foreign countries to assess the security vulnerabilities associated with U.S.-bound air cargo. TSA’s efforts to complete a vulnerability assessment for domestic air cargo are an important step in applying a risk management approach to securing air cargo. However, TSA did not provide a time frame for completing the domestic vulnerability assessments and therefore could not provide a schedule for when it will conduct an assessment of inbound air cargo security vulnerabilities. 
Moreover, TSA has not determined whether it will conduct a criticality assessment of inbound air cargo assets or indicated how it plans to use information resulting from these assessments of inbound air cargo to prioritize the agency’s efforts to enhance the security of inbound air cargo. Taking these steps would be necessary to fully address our recommendation. With regard to our recommendation to establish a time frame for completing the assessment of whether existing inspection exemptions for inbound air cargo pose an unacceptable security vulnerability, and taking steps, if necessary, to address identified vulnerabilities, TSA acknowledged that air cargo inspection exemptions represent a security risk and described several actions it had taken to revise the air cargo inspection exemptions. For example, TSA stated that in October 2006, the agency issued a series of security enhancements in the form of a security directive, removing air cargo inspection exemptions. While TSA’s actions are an important step in addressing a recommendation we made in our October 2005 report on domestic air cargo security, TSA’s recent security directive does not remove all inspection exemptions for air cargo. Specifically, TSA’s action only applies to air cargo transported from and within the United States and not to air cargo transported into the United States from a foreign country, and only applies to air cargo transported on passenger air carriers, not all-cargo carriers. Until TSA assesses whether existing inspection exemptions for cargo transported on passenger and all-cargo aircraft into the United States pose an unacceptable vulnerability, and takes any necessary steps to address the identified vulnerabilities, TSA cannot be assured that the agency’s inbound air cargo inspection requirements for air carriers provide a reasonable level of security. Taking this important step is necessary to fully address our recommendation. 
Concerning our recommendation to develop and implement an inspection plan that includes performance goals and measures to evaluate foreign and domestic air carrier compliance with inbound air cargo security requirements, TSA stated that it recognizes the importance of evaluating air carrier compliance using performance measures and goals. TSA also stated that its international and domestic field offices establish comprehensive inspection schedules for field staff to visit air carriers based on risk factors, inspection histories, and security determinations. In addition, TSA noted that it is hiring 10 dedicated international air cargo inspectors, who will be deployed to four international field offices to inspect all-cargo operations at last points of departure to the United States on an annual basis to ensure that they are in compliance with relevant all-cargo security programs and applicable security directives or emergency amendments. TSA stated that it will also track the progress on these inspections utilizing the tracking system developed for its Foreign Airport Assessment Program. Hiring additional inspectors to conduct compliance inspections of all-cargo carriers that transport cargo into the United States is an important step for enhancing the agency’s oversight of such carriers. However, TSA has not indicated whether it will develop an inspection plan that includes performance goals and measures to evaluate foreign and domestic air carrier compliance with inbound air cargo security requirements. Developing such a plan will be important to fulfilling the agency’s oversight responsibilities and is a necessary action in addressing our recommendation. Regarding our recommendation to collaborate with foreign governments and the U.S. 
air cargo industry and compile and analyze information on air cargo security practices used abroad to identify those that may strengthen the department’s overall air cargo security program, TSA stated that it recognizes the importance of collaborating with foreign governments and U.S. industry to identify best practices and lessons learned for enhancing air cargo security. Specifically, TSA stated that it has taken numerous steps to increase collaboration with foreign governments and industry, including developing relations with United Kingdom and Irish officials to better understand their air cargo security practices and programs. TSA also noted that it actively coordinates with Canadian transportation security officials to share lessons learned and improve air cargo security between the two countries. Moreover, TSA stated that it is continuing to build relationships with foreign governments, including European Union members and southeast Asian nations. TSA also stated that it is collaborating with U.S. industry through the Aviation Security Advisory Committee Air Cargo Working Group to partner with air cargo supply chain stakeholders on new initiatives and existing programs and pilot programs. TSA’s efforts to collaborate with foreign governments and industry are important steps toward improving inbound air cargo security. However, TSA has not indicated whether it plans to compile or analyze information on air cargo security practices used abroad to identify those that may strengthen the department’s overall air cargo security program, including assessing whether the benefits that these practices could provide in strengthening the security of the U.S. and inbound air cargo supply chain are cost-effective, without impeding the flow of commerce. Taking such actions would be necessary to fully address the intent of this recommendation. DHS also offered technical comments and clarifications, which we have considered and incorporated where appropriate. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will provide copies of this report to the Secretary of Homeland Security, the Assistant Secretary of the Transportation Security Administration, the Commissioner of U.S. Customs and Border Protection, and interested congressional committees. If you have any further questions about this report, please contact me at (202) 512-3404 or berrickc@gao.gov. Key contributors to this report are listed in appendix IX. This report addresses the following questions: (1) What actions has the Department of Homeland Security (DHS) taken to secure inbound air cargo, and how, if at all, could these efforts be strengthened? (2) What practices have the air cargo industry and select foreign countries adopted that could potentially be used to enhance DHS’s efforts to strengthen air cargo security, and to what extent have the Transportation Security Administration (TSA) and the U.S. Customs and Border Protection (CBP) worked with foreign government stakeholders to enhance its air cargo security efforts? To determine what actions DHS has taken to secure inbound air cargo, and how, if at all, these efforts could be strengthened, we reviewed TSA’s domestic air cargo strategic plan, proposed and final air cargo security rules, air cargo-related security directives and emergency amendments, aircraft operator security programs, and related guidance to determine the requirements placed on air carriers for ensuring inbound air cargo security. We also interviewed TSA and CBP officials to obtain information on their current and planned efforts to secure inbound air cargo. Further, we reviewed CBP’s programs and performance measures related to targeting and inspecting air cargo once it reaches the United States. 
Specifically, we reviewed CBP’s Customs-Trade Partnership Against Terrorism (C-TPAT) program and its Automated Targeting System (ATS) related to air cargo to obtain information on CBP’s efforts to secure, target, and inspect inbound air cargo. We analyzed TSA foreign airport assessment reports conducted during fiscal year 2005, compliance inspection data from July 2003 to February 2006, and performance measures to determine the agency’s progress in evaluating air carriers’ compliance with existing air cargo security requirements. We also discussed the reliability of TSA’s compliance inspection data for the period July 2003 to February 2006 with TSA officials. Although our initial reliability testing indicated that there were some inconsistencies in the data provided by TSA, we were able to resolve most of the discrepancies and concluded that the data were sufficiently reliable for the purposes of this review. For example, we found spelling variations in the inspections for the same air carrier, which we identified and made uniform in the dataset. We found that some records contained duplicate information. We removed these records based on a comparison of information such as the inspection record number, the date of the inspection, the specific requirement the TSA inspector assessed, and the determination of the air carriers’ compliance with the requirement. We also found some inspections in the dataset that had occurred at U.S. airports. We identified these by the airport name and removed them from the data. To identify DHS’s plans for enhancing inbound air cargo security, we reviewed DHS Science and Technology Directorate, TSA, and CBP documents to identify pilot programs for inspection technology, including program funding levels, time frames, results, and implementation plans. We discussed how, if at all, DHS efforts could be strengthened to secure inbound air cargo with TSA and CBP officials and air cargo industry stakeholders. 
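The data-cleaning steps described above (normalizing carrier-name spellings, removing duplicate records by comparing key fields, and excluding inspections conducted at U.S. airports) can be sketched in a few lines of code. The sketch below is purely illustrative: the field names, sample records, and name-mapping table are our assumptions, not the actual PARIS data layout.

```python
# Illustrative sketch of the record-cleaning methodology described above.
# Field names and sample records are hypothetical, not drawn from PARIS.

def clean_inspection_records(records, name_fixes, us_airports):
    """Normalize carrier names, drop duplicates, and drop U.S.-airport rows."""
    seen = set()
    cleaned = []
    for rec in records:
        # 1. Make spelling variations of the same air carrier uniform.
        carrier = name_fixes.get(rec["carrier"], rec["carrier"])
        # 2. Exclude inspections that occurred at U.S. airports,
        #    identified by airport name.
        if rec["airport"] in us_airports:
            continue
        # 3. Remove duplicates, keyed on the comparison fields: inspection
        #    record number, date, requirement assessed, and compliance finding.
        key = (rec["record_no"], rec["date"], rec["requirement"],
               rec["finding"], carrier)
        if key in seen:
            continue
        seen.add(key)
        cleaned.append({**rec, "carrier": carrier})
    return cleaned

records = [
    {"record_no": 1, "date": "2005-01-10", "carrier": "Acme Air",
     "airport": "FRA", "requirement": "cargo acceptance", "finding": "compliant"},
    {"record_no": 1, "date": "2005-01-10", "carrier": "ACME AIR",
     "airport": "FRA", "requirement": "cargo acceptance", "finding": "compliant"},
    {"record_no": 2, "date": "2005-02-03", "carrier": "Acme Air",
     "airport": "JFK", "requirement": "cargo screening", "finding": "violation"},
]
name_fixes = {"ACME AIR": "Acme Air"}
cleaned = clean_inspection_records(records, name_fixes, us_airports={"JFK"})
print(len(cleaned))  # the duplicate and the U.S.-airport record are removed
```

Keying duplicates on the full tuple of comparison fields, after name normalization, ensures that two records differing only in carrier-name spelling are treated as the same inspection.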
To identify any challenges DHS and its components may face in strengthening inbound air cargo security, we interviewed TSA and CBP officials about how they coordinate and share information on their respective inbound air cargo security efforts. We obtained information on DHS’s, TSA’s, and CBP’s efforts to apply risk management principles to inform their decisions related to securing inbound air cargo and compared these actions against our risk management framework. Our complete risk management framework includes a specific set of risk management activities: setting strategic goals and objectives, assessing risk (threat, vulnerabilities, and criticality), evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. This report examines the two risk management efforts TSA has focused on thus far related to inbound air cargo security—setting strategic goals and objectives and assessing risk. With regard to establishing strategic goals and objectives, we reviewed DHS’s Strategic Plan, National Infrastructure Protection Plan, and National Strategy for Transportation Security. We also reviewed TSA’s strategic plan and TSA’s air cargo strategic plan to determine DHS’s strategy for addressing the security of inbound air cargo. Regarding risk assessments, we interviewed DHS officials to discuss the department’s plans to conduct assessments of the vulnerabilities and critical assets associated with inbound air cargo. In addition, we interviewed TSA and CBP officials, foreign government officials, and air cargo industry stakeholders to identify efforts to develop international air cargo security standards, and DHS’s efforts to work with foreign governments to develop uniform air cargo security standards that would apply to participant countries, including a structure for mutually recognizing and accepting other countries’ air cargo security practices. 
To identify actions the air cargo industry and select foreign countries have taken to secure air cargo and whether such actions have the potential to be used to strengthen air cargo security in the United States, we interviewed foreign and domestic air carrier (passenger and all-cargo) officials, foreign freight forwarder representatives, airport authorities, air cargo industry associations, and DHS and foreign government officials. We also conducted site visits to 3 U.S. airports to observe inbound air cargo security operations and industry and CBP efforts to inspect inbound air cargo using nonintrusive inspection technologies, including radiation detection systems. We selected these airports based on several factors, including airport size, the volume of air cargo transported to these airports from foreign locations, geographical dispersion, the presence of CBP officers, and TSA international field office officials. Because we selected a nonprobability sample of airports, the results from these visits cannot be generalized to other U.S. airports. Further, we conducted site visits to 7 countries in Europe and Asia to observe air cargo facilities on and off airport grounds, observe air cargo security processes and technologies, and obtain information on air cargo security measures implemented by foreign governments and industry stakeholders. During our international site visits, we also met with officials from the European Union and TSA’s international field offices. We selected these countries based on several factors, including geographical dispersion; TSA threat rankings; and discussions with DHS, State Department, and foreign government officials and air cargo industry representatives and experts regarding air cargo security practices that may have application to DHS’s efforts to secure air cargo. We also considered information on 4 additional countries whose air cargo security practices differ from those used in the United States. 
According to TSA and air cargo industry stakeholders, these countries have implemented stringent air cargo security programs. Specifically, we observed security practices at 8 foreign airports, 4 of which rank among the world’s 10 busiest cargo airports. We also obtained information on the air cargo security requirements implemented by 4 additional foreign countries. In addition, some of the security practices we identified are being implemented by air carriers that transport large volumes of air cargo. Specifically, we spoke with air carrier officials representing 7 of the world’s 10 largest air cargo carriers. We also discussed the feasibility of applying foreign air cargo security measures in the United States with TSA officials. We did not, however, evaluate the effectiveness of the foreign measures we identified during this review. We also discussed efforts to develop, harmonize, and mutually recognize international air cargo security standards with TSA, foreign government, and air cargo industry officials. TSA’s and CBP’s roles and responsibilities for securing air cargo transported from the United States to a foreign location were not included in the scope of this review. TSA’s requirements for outbound air cargo are similar to those governing the security of air cargo transported within the United States. For a review of TSA’s practices related to securing domestic air cargo, see GAO-05-446SU. We conducted our work from October 2005 through February 2007 in accordance with generally accepted government auditing standards. TSA’s inspections at foreign airports are conducted by aviation inspectors who are responsible for reviewing aviation security measures of foreign and domestic passenger air carriers to determine their compliance with a variety of TSA aviation security requirements, including those related to inbound air cargo. 
These inspectors are responsible for conducting foreign airport assessments as well as domestic and foreign air carrier inspections at foreign airports. According to international field office officials, the agency usually conducts inspections and foreign airport assessments during the same visit to an airport. The agency also trains and utilizes domestic aviation security inspectors to conduct inspections under the supervision of the international field offices to supplement its international inspection resources. TSA uses its automated Performance and Results Information System (PARIS) to compile the results of its aviation inspections and the actions taken when violations are identified. As shown in figure 4, our analysis of PARIS inspection records determined that between July 2003 and February 2006, TSA conducted 1,020 international compliance inspections of domestic and foreign carriers that included a review of one or more areas of cargo security. TSA data also show that inspectors conducted 747 inspections at 452 separate domestic air carrier stations and 273 inspections at 177 separate foreign air carrier stations. TSA has taken initial steps to compile information on the violations found during its inspections of inbound air carrier cargo security requirements. For example, from July 2003 to February 2006, TSA inspectors identified 57 air cargo security violations committed by foreign and domestic passenger air carriers at foreign airports in several areas of air cargo security responsibility. Specifically, as shown in figure 5, these violations covered areas such as cargo acceptance procedures, cargo screening procedures, and air carrier cargo hold search procedures. During fiscal year 2005, TSA conducted 128 foreign airport assessments at the approximately 260 airports that service passenger air carriers departing for the United States. 
As part of the foreign airport assessment process, TSA develops a report that identifies recommendations for the airport to improve its airport security to meet ICAO standards, which include air cargo security standards. Of the 128 assessments TSA conducted during fiscal year 2005, the agency made 28 recommendations to improve air cargo security. As of October 2005, 2 cargo security recommendations were adopted by the airports and 26 recommendations remained to be addressed. Examples of TSA recommendations include developing a national cargo security program to establish government authorities and air cargo industry responsibilities for securing air cargo, among other things. When TSA inspectors identify a deficiency that requires immediate action, they work with the airport and government officials to resolve the deficiency. If TSA inspectors determine that effective security is still not being maintained, the law prescribes steps and actions available for encouraging compliance with the standards used in TSA’s assessment. Such actions include, among other things, notifying appropriate authorities of the foreign government of deficiencies identified, providing public notice that the airport does not maintain and carry out effective security measures, or suspending service between the United States and the airport if it is determined a condition exists that threatens the safety or security of the passengers, aircraft, or crew, and such action is in the public interest. The agency has not issued a travel advisory or suspended service solely for air cargo security deficiencies at an airport since its inception. GAO’s risk management framework is intended to be a starting point for risk management activities and will likely evolve as processes mature and lessons are learned. 
A risk management approach entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. Figure 6 depicts a risk management cycle. Risk assessment, a critical element of a risk management approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. The risk assessment element in the overall risk management cycle may be the largest change from standard management steps and is central to informing the remaining steps of the cycle. Table 1 describes the elements of a risk assessment. Another element of our risk management approach—alternatives evaluation—considers what actions may be needed to address identified risks, the associated costs of taking these actions, and any resulting benefits. This information can be provided to agency management to assist in the selection of alternative actions best suited to the unique needs of the organization. An additional step in the risk management approach is the implementation and monitoring of actions taken to address the risks, including evaluating the extent to which risk was mitigated by these actions. Once the agency has implemented the actions to address risks, it should develop criteria for and continually monitor the performance of these actions to ensure that they are effective and also reflect evolving risk. According to DHS officials, the department’s ongoing pilot programs seek to enhance the physical security of air cargo and improve the effectiveness of air cargo inspections by increasing detection rates and reducing false alarm rates. 
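The risk assessment elements referenced above (threat, vulnerability, and criticality) are commonly combined into a composite score used to rank assets for the kind of prioritization the framework calls for. The sketch below is purely illustrative: the multiplicative scoring model, the 1-to-5 rating scale, and the asset names are our assumptions, not a methodology prescribed by TSA, DHS, or GAO.

```python
# Illustrative composite risk scoring, assuming 1-5 ratings for each element.
# The multiplicative model (risk = threat x vulnerability x criticality) is a
# common convention in risk assessment, not a TSA or GAO formula.

def risk_score(threat, vulnerability, criticality):
    """Combine the three risk assessment elements into a single score."""
    return threat * vulnerability * criticality

# Hypothetical inbound air cargo assets with (threat, vulnerability,
# criticality) ratings assigned by assessors.
assets = {
    "cargo acceptance area": (4, 3, 5),
    "freighter ramp":        (2, 4, 3),
    "off-airport warehouse": (3, 5, 2),
}

# Rank assets so the highest-risk ones are considered for countermeasures
# first, supporting the alternatives-evaluation step of the cycle.
ranked = sorted(assets, key=lambda a: risk_score(*assets[a]), reverse=True)
for name in ranked:
    print(name, risk_score(*assets[name]))
```

A multiplicative model has the property that a near-zero rating on any one element drives the composite score toward zero, which matches the intuition that an asset with no credible threat, no exploitable vulnerability, or negligible consequence is a low priority regardless of its other ratings.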
DHS officials stated that the department's air cargo technology pilot programs focus on securing domestic air cargo, and while these pilot methods have yet to be implemented, the results of these tests could be applied to securing inbound air cargo against similar threats. These technology pilots focus on addressing the two primary threats to air cargo identified by TSA—hijackers on an all-cargo aircraft and explosives on passenger aircraft—but do not include tests to identify weapons of mass destruction. DHS's pilot programs are described below. Of the amounts appropriated to DHS in fiscal year 2006, $30 million was allocated to the Science and Technology (S&T) Directorate to conduct three cargo screening pilot programs. DHS's S&T, working in conjunction with TSA, selected San Francisco International Airport, Seattle-Tacoma International Airport, and Cincinnati/Northern Kentucky International Airport as the sites for the pilot and commenced cargo inspection operations at all three airports in September 2006. The pilots will test different concepts of operation at each of the airports. At San Francisco International Airport, the program will test the use of approved inspection technologies, including explosive detection systems, such as the CTX 9000, explosive trace detectors, standard X-ray machines, canine teams, and manual inspections of air cargo, in an attempt to determine the technological and operational issues involved in explosives detection. The pilot at San Francisco International Airport will further examine how the use of these existing checked baggage inspection technologies at a higher rate than is currently required by TSA will affect air cargo personnel and operations, for example, in terms of throughput.
The pilot at Seattle-Tacoma International Airport will use canines and stowaway detection technologies, for example, technologies that can locate a stowaway through detection of increased carbon dioxide levels in cargo, to detect threats in freighter air cargo, while the Cincinnati/Northern Kentucky International Airport pilot program will test existing passenger infrastructure for inspecting air cargo, including explosive detection systems (EDS) technology. The projected benefits of these pilots include the following: increases in the amount of cargo inspected, increases in detection reliability without adversely affecting commerce, and a better understanding of the necessary procedures and costs associated with greater cargo security. EDS is a form of X-ray technology that can be highly automated to screen several hundred bags an hour. EDS machines, in contrast to explosive trace detection (ETD) machines, are much larger, up to the size of a minivan, and cost in excess of $1 million. EDS technology uses computed tomography to scan objects and compare their density to the density of known objects in order to locate explosives. According to TSA, EDS provides a level of security equivalent to that of ETD technology but with a higher level of efficiency. TSA's EDS Cargo Pilot Program is currently in the third phase of a three-phased program testing the use and effectiveness of explosive detection systems at 12 participating sites. While the Air Cargo Explosives Detection Pilot Program will test a range of explosives detection technologies, the EDS pilot focuses specifically on EDS technology for its use in the air cargo environment. Phase I, referred to as Developmental Test and Evaluation, was conducted using live explosives to test the detection capability and technical performance of the systems screening simulated break bulk air cargo.
Phase II, referred to as Operational Utility Evaluation, was conducted in cargo facilities to test the system's effectiveness in the air cargo environment, in addition to determining the operational alarm and false alarm rates of the technology. Phase III of TSA's testing is referred to as the Extended Field Test and is designed as a longer-term evaluation of available EDS technologies in the air cargo environment. According to TSA officials, the extended time frame of Phase III (a minimum of 1 year) will allow TSA to evaluate the reliability, maintainability, and availability of the EDS technology, in addition to establishing operational parameters and procedures within a realistic operational environment. TSA officials stated that the agency is exploring the viability of potential security countermeasures, such as tamper-evident security seals, for use with certain classifications of exempt cargo. Traditionally used in the maritime environment, container seals include a number of tamper-evident technologies that range from tamper-evident tape to more advanced technologies used to secure air cargo on aircraft. Tamper-evident tape can identify cargo that requires further screening and inspection to safeguard against the introduction of explosives and incendiary devices. Indicative seals are made of plastic and show visible signs if tampered with. Ranging in price from 5 to 20 cents, they provide the least expensive option for air cargo security. Barrier seals, which cost between 50 cents and $2 or more, are stronger seals that are generally used on more sensitive cargo because they require bolt cutters to remove. The most advanced seal technology allows shipping companies to track a container throughout the entire shipping process via a radio frequency identification (RFID) tag that is embedded in the seal. Average RFID seals can range in cost from $1 to $10, with the most sophisticated models costing upward of $100.
Security seals could be used in combination with known shipper protocols to ensure that known shippers provide security in their packaging facilities and deter tampering during shipping and handling. In 2003, the Congressional Research Service reported that the utility of electronic seals in air cargo operations has been questioned by some experts because currently available electronic seals have a limited transmission range that may make detecting and identifying seals difficult. In 2006, GAO reported that container seals provide limited value in detecting tampering with cargo containers. However, according to TSA officials, such countermeasures could provide an additional layer of security and warrant further examination. In January 2006, the agency issued a public request for information regarding security seals. Although the agency has since acquired information on seals from five vendors, officials stated that efforts to begin the pilot program have been delayed due to funding issues, among other things. TSA officials stated that the agency plans to implement the pilot at four airports by the first quarter of 2007. These airports include Portland International Airport, John F. Kennedy International Airport, Chicago O'Hare International Airport, and Ronald Reagan Washington National Airport. While the Federal Aviation Administration, TSA, and DHS have been involved in testing hardened unit load devices since the mid-1990s, testing of these devices has increased since the 9/11 Commission recommended that all U.S. airliners deploy at least one hardened cargo container in the hold of every passenger aircraft to carry suspect passenger baggage or air cargo. Hardened unit load devices are blast-resistant containers capable of transporting passenger baggage or air cargo within the lower deck cargo holds of wide-body aircraft.
These containers are required to withstand an explosive blast up to a certain magnitude while maintaining the integrity of the container and aircraft structure. The container must also be capable of extinguishing any fire that results from the detonation of an incendiary device. In accordance with the Intelligence Reform and Terrorism Prevention Act of 2004, TSA began a pilot program in June 2005 to conduct airline operational testing of the ability of hardened or blast-resistant containers to minimize the potential effects, including explosion or fire, of a detonation caused by an explosive device smuggled into the belly of an aircraft. TSA officials stated that the start-up of the pilot program was slow because one of the two participating vendors dropped out of the program and because there were few available domestic wide-body flights in which to conduct the tests. TSA officials added that the agency has since made progress in conducting the pilot and is collecting test data. TSA officials stated that the agency expects to conclude the data collection phase of the program by summer 2007 and make policy decisions regarding the possible implementation of hardened unit load devices by December 2007. In addition, TSA has been working with vendors and airlines to develop and test a hardened unit load device that would satisfy industry's request for a lighter, less cost-prohibitive model while still providing the necessary level of security to the aircraft. TSA officials reported that the agency's efforts to test pulsed fast neutron analysis (PFNA) are currently in the proof-of-concept design stage, which is focusing on the development of the technology. PFNA technology allows for bulk inspection of containerized air cargo by measuring the reaction to injected neutrons and identifying elemental chemical signatures of contraband, explosives, and other threat objects.
The agency plans to complete the proof-of-concept phase of testing by March 2007, at which point TSA and DHS will evaluate the technology on its technical, environmental, operational, and performance specifications. Testing of this technology will then proceed to the Development Testing and Evaluation phase. Agency officials project that the next two phases, Development Testing and Evaluation and Operational Testing and Evaluation, will take another 2 to 3 years (after the completion of the proof-of-concept design phase) to fully determine the operational readiness and maturity of the technology. Agency officials were unable to provide us with a time frame for when PFNA would be operational at the George Bush Intercontinental Airport.

Air cargo security practices reported by air cargo industry stakeholders and foreign governments include the following:

- Inspect a higher percentage of cargo placed on passenger aircraft than is required by TSA or the host government.
- 100 percent inspection performed on: passenger aircraft bound for the United States.
- Freight forwarders, also known as regulated agents, are validated by the government and are responsible for conducting inspections.
- 100 percent of air cargo loaded onto passenger aircraft bound for the United States required to undergo inspection.
- (cash paying) customers.
- air cargo shipped in or out of locations deemed high-risk by the air carrier is inspected via X-ray.
- decompression chambers are used to inspect cargo that cannot be X-rayed.
- Large palletized cargo is broken down in order to pass cargo through X-ray machines.
- fee when use of decompression chamber is required.
- Canines used to sniff air samples taken from cargo shipments.
- passenger flights to the United States are inspected via X-ray.
- Limited or no air cargo inspection exemptions.
- Large X-ray machines used to inspect entire pallets of cargo bound for passenger craft.
- Additional targeted inspections are conducted based on analysis of available threat information, among other things.
- Color-coded assessment system indicates when air cargo should be inspected and when other procedures should apply; the color assigned (red, amber, or green) is based on the cargo's point of origin, destination, and other relevant intelligence information.
- Radiation detection technology is used to inspect cargo transported to the United States and differentiate between legitimate and illegitimate sources of radiation.
- Canines used to sniff air samples from cargo shipments.
- consignors to prepare for annual audits; new identification numbers are given post-audit to ensure security of consignor identity.
- Air cargo workers undergo additional and stringent background checks, including criminal and employment history checks.
- completed by employees; employees are not permitted to enter facility if training lapses or requirements are not met.
- Program provides monetary incentives to employees in order to increase employee awareness of access controls, including rewards for reporting suspicious individuals.
- Managers are required to remain knowledgeable on security policies and regulations in destination countries.
- All personnel are trained to identify and handle security risks; quarterly training is provided to security personnel on a range of issues, including security updates and the use of new technology.
- Threat information is derived from public/private intelligence, including data on the sociopolitical/economic conditions of countries.
- Annual audits of carrier facilities are conducted using an online questionnaire; facilities undergo a certification process that is linked to the audits.
- provided to CBP earlier than is required by CBP.
- Independent risk assessments are conducted based on internal testing to identify cargo security weaknesses.
- Security incident database tracks worldwide security issues.
- air carrier industry meet to identify best practices in aviation security.
- Truck drivers entering carrier facilities to deliver air cargo are escorted by an airline representative at all times.
- All employees/visitors are required to pass through a metal detector before entering/exiting cargo facility.
- cargo facilities, including testing of access controls to identify security weaknesses.
- Security guards control access to freighters at every stop made by the aircraft.
- Secured cart system transports cargo within cargo storage facility.
- Assessments are conducted of security conditions in foreign destinations where staff are located; armed security personnel are assigned to those locations deemed high risk.
- Seals and plastic straps are applied to all cargo crates, containers, and boxes to prevent tampering.
- Pallets are locked and sealed in a completely enclosed chain-like container after they are built to prevent the possibility of tampering.
- whenever possible into larger units and sealed with steel banding to limit the possibility of tampering.
- surveillance system monitor all-cargo areas 24 hours a day.
- Biometric badge required to gain access to secured areas.
- Biometric identification system that scans the hand to grant access to air carrier facilities and cargo areas.
- employees are permitted to pick up, pack, and transport cargo to cargo facilities and the airport.
- Strategic placement of air cargo in the aircraft to secure the cockpit and minimize the potential for a hijacking by a stowaway.
- Fingerprints and photographs of all truck drivers that transport cargo are taken, kept on file, and used to authorize access.
- cargo brought directly to the ticketing or check-in counter by an unknown shipper.
- Thorough security review is conducted of potential customers prior to acceptance of their business or cargo.
- documented that could pose a potential security threat.
- Palletized cargo is refused unless airline security personnel are present when pallet is built.
- brought directly to the counter.
- outbound cargo from unknown shippers.
- Examining use of inspection technology capable of detecting traces of explosives.
- Pilot testing the use of bees to detect explosive traces in air cargo shipments.
- Twenty-four-hour holding period used as a form of inspection.
- Government, airport, or freight forwarder representatives are responsible for inspecting air cargo.
- methods to avoid detection of inspection patterns.
- unknown cargo loaded on either passenger or all-cargo aircraft is physically inspected.
- Canines used to sniff air samples from air cargo shipments (Remote Air Sampling for Canine Olfaction, or RASCO).
- air cargo inspections.
- between cargo placed on passenger aircraft versus all-cargo aircraft in regards to type or degree of inspection.
- Cargo that undergoes inspection becomes known and is permitted on passenger aircraft.
- No, or limited number of, air cargo inspection exemptions.
- Palletized cargo from unknown shippers is broken up, inspected, and re-palletized before being loaded onto aircraft.
- Process to become a regulated agent is strict and costly; decertification for unsatisfactory performance.
- Third-party validation required to become a known shipper/consignor; annual third-party compliance inspections conducted of known shippers/consignors.
- Regulated agents are validated by the aviation authority prior to regulating and auditing shippers and conducting inspections of air cargo.
- Air cargo handlers and workers attend government-certified schools to receive mandatory training in air cargo security awareness and quality control.
- Air cargo workers undergo background checks that include a criminal history records check before being granted access to cargo facilities.
- Air cargo workers must be of native descent to be hired.
- Developing multicountry database containing information on all known consignors and regulated agents to facilitate the exchange of information among countries.

Actions Taken by Select Foreign Governments to Secure Air Cargo

- Security personnel accompany and surround aircraft upon landing to guard the aircraft and its contents, including cargo.
- Biometric technologies used to control access to cargo facilities.
- Cargo is stored in a secured terminal facility located within a "restricted" area of the airport.
- All individuals accessing cargo facilities are required to pass through a walk-through metal detector.
- attempt to gain access to cargo warehouses/facilities; if successful, all cargo in the breached facility is considered unknown and must be inspected before being loaded onto aircraft.
- Government and airport authority subsidize the costs of purchasing X-ray equipment to inspect air cargo.

In addition to the contact named above, John C. Hansen, Assistant Director; Susan Baker; Charles W. Bausell; Katherine Davis; Jennifer Harman; Richard Hung; Cathy Hurley; Tom Lombardi; Jeremy Manion; Linda Miller; Steve D. Morris; and Meg Ullengren made key contributions to this report.
|
The Department of Homeland Security (DHS) has primary responsibility for securing air cargo transported into the United States from another country, referred to as inbound air cargo, and preventing implements of terrorism from entering the country. GAO examined (1) what actions DHS has taken to secure inbound air cargo, and how, if at all, these efforts could be strengthened; and (2) what practices the air cargo industry and foreign governments have adopted that could enhance DHS's efforts to strengthen inbound air cargo security, and to what extent DHS has worked with foreign governments to enhance their air cargo security efforts. To conduct this study, GAO reviewed relevant DHS documents, interviewed DHS officials, and conducted site visits to seven countries in Europe and Asia. Within DHS, the Transportation Security Administration (TSA) and U.S. Customs and Border Protection (CBP) have taken a number of actions designed to secure inbound air cargo, but these efforts are still largely in the early stages and could be strengthened. For instance, TSA completed a risk-based strategic plan to address domestic air cargo security, but has not developed a similar strategy for addressing inbound air cargo security, including how best to partner with CBP and international air cargo stakeholders. In addition, while TSA has identified the primary threats to inbound air cargo, it has not yet assessed inbound air cargo vulnerabilities and critical assets. Moreover, TSA's air cargo security rule incorporated a number of provisions aimed at enhancing the security of inbound air cargo. This final rule also acknowledges that TSA amended its security directives and programs to triple the percentage of cargo inspected on domestic and foreign passenger aircraft. However, TSA continues to exempt certain types of inbound air cargo transported on passenger air carriers from inspection. 
Further, TSA inspects domestic and foreign passenger air carriers with service to the United States to assess whether they are complying with air cargo security requirements, but currently does not conduct compliance inspections of all air carriers transporting inbound air cargo. Moreover, TSA has not developed performance goals and measures to determine to what extent air carriers are complying with security requirements. In addition, CBP recently began targeting inbound air cargo transported on passenger and all-cargo aircraft that may pose a security risk and inspecting such cargo once it arrives in the United States. TSA and CBP, however, do not have a systematic process in place to share information that could be used to strengthen the department's efforts in securing inbound air cargo, such as the results of TSA air carrier compliance inspections and foreign airport assessments. The air cargo industry and foreign governments have implemented various security practices that could provide opportunities for strengthening DHS's overall air cargo security program. TSA officials acknowledged that compiling and analyzing security practices implemented by foreign air cargo stakeholders and foreign governments may provide opportunities to enhance U.S. air cargo security, and have begun an initial review of practices in select foreign countries. TSA has also begun working with foreign governments to coordinate security practices to enhance security and improve oversight, referred to as harmonization, but these efforts may be challenging to implement. For example, some foreign countries do not share the United States' view regarding air cargo security threats and risks, which may make the harmonization of air cargo security practices difficult to achieve.
|
Established by the National Housing Act of 1934, FHA’s single-family mortgage insurance program helps home buyers obtain home mortgages by providing insurance on single-family mortgage loans. The mortgage insurance allows FHA-approved private lenders to provide qualified borrowers with mortgages on properties with one to four housing units and generally compensates lenders for nearly all of the losses incurred on such loans. FHA insures mortgages on properties that meet its criteria, providing guarantees for initial purchases, construction and rehabilitation, and refinancing. To support the program, FHA imposes up-front and annual mortgage insurance premiums on home buyers. The agency has played a particularly large role among minority, low-income, and first-time home buyers. In 2012, about 78 percent of FHA-insured home purchase loans went to first-time home buyers, about 32 percent of whom were minorities. A number of other federal and private sector entities participate in the mortgage market. Along with FHA, the VA Loan Guaranty Service and RHS administer federal government programs that insure or guarantee single-family mortgages made by private lenders. In addition to these government agencies, private companies insure lenders against losses on home mortgages, and private lenders make loans without mortgage insurance. The enterprises also participate in the U.S. housing market by purchasing mortgages from lenders. VA’s Loan Guaranty program is an entitlement program that provides eligible veterans, active duty military personnel, and certain other individuals with housing benefits. The VA guaranty program allows mortgage lenders to extend loans to eligible borrowers on favorable terms—for example, with no down payment—and provides lenders with financial protections against the losses associated with such mortgages. 
To help support the program, borrowers are required to pay a funding fee that equals a certain percentage of the loan amount, although service-connected disabled veterans are exempt from paying this fee. The program may also receive congressional appropriations if needed. RHS operates guaranteed and direct loan programs to help rural Americans with very low incomes, low incomes, and in some cases moderate incomes purchase single-family homes. The purpose is to provide financing with no or low down payments at favorable rates and terms. The loans are generally for the purchase, construction, rehabilitation, or relocation of a dwelling and related structures. RHS-guaranteed loans are made through approved local lenders, with RHS providing the lenders substantial financial protections against associated losses. The loans are available to qualifying borrowers who meet applicable household income limits and seek to buy properties in eligible rural areas. Under its direct loan program, RHS extends loans to qualified borrowers—who must have low incomes and be without adequate housing—for the purchase of properties that are modest in size, design, and cost. Congress established the enterprises as for-profit, shareholder-owned corporations. They share a primary mission to stabilize and assist the U.S. secondary mortgage market and facilitate the flow of mortgage credit. To accomplish this goal, the enterprises purchase conventional mortgages that meet their underwriting standards, obtaining their funds through borrowing or by issuing mortgage-backed securities, which are securities backed by pools of mortgages. The enterprises hold some of the purchased mortgages in their portfolios, but they package most of them for sale to investors in the secondary mortgage market. In exchange for a fee, the enterprises guarantee these investors the timely payment of principal and interest.
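The guarantee-fee mechanics described above can be sketched with a small, hypothetical example: investors in a mortgage-backed security receive interest at the pool's note rate minus the guarantee and servicing fees retained by the enterprise and servicer. The function names, fee levels, and pool figures below are illustrative assumptions, not actual enterprise terms.

```python
def monthly_mortgage_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortizing payment: P * r / (1 - (1 + r)^-n), with monthly rate r."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def pass_through_to_investors(pool_balance: float, note_rate: float,
                              guarantee_fee: float, servicing_fee: float) -> float:
    """Monthly interest passed through to MBS investors after fees.

    All rates are annual rates applied to the outstanding pool balance;
    the guarantee fee compensates the enterprise for its timely-payment
    guarantee, and the servicing fee compensates the mortgage servicer.
    """
    investor_rate = note_rate - guarantee_fee - servicing_fee
    return pool_balance * investor_rate / 12
```

For instance, on a hypothetical $100 million pool with a 6 percent note rate and 25-basis-point guarantee and servicing fees each, investors would receive interest at an effective 5.5 percent annual rate on the outstanding balance.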
Both enterprises are also required to purchase mortgages that serve low- and moderate-income families. On September 6, 2008, the Federal Housing Finance Agency placed the enterprises into conservatorship out of concern that the enterprises' deteriorating financial condition threatened the stability of financial markets. Institutions that originate home mortgages generally do not hold such loans as assets on their balance sheets but instead sell them to other financial institutions for the purpose of securitizing the mortgages. These securities pay interest and principal to their investors, which include financial institutions and pension funds, among others. In the past, institutions originating mortgage loans took care of all the activities associated with servicing them—including accepting payments, initiating collection actions for delinquent payments, and foreclosing if necessary. With the advent of securitization, entities known as mortgage servicers—which can be large mortgage finance companies or commercial banks—typically undertake such activities on behalf of the current owners of the loans. If a borrower defaults on a mortgage loan secured by the property, the mortgage note holder is generally entitled to pursue foreclosure to obtain title to the property. The foreclosure process is governed by state laws and differs across states, but foreclosed properties are typically sold at auction, as shown in figure 1. Once the borrower is in default, the mortgage servicer—often in conjunction with the borrower and entities with an interest, such as mortgage guarantors and insurers—must decide whether to pursue a home retention workout or other foreclosure alternative or to initiate foreclosure.
The mortgage owner or servicer generally initiates foreclosure once the loan becomes 90 days or more delinquent unless the borrower can resolve the loan’s delinquency by paying the outstanding amount or some other resolution occurs, such as a borrower repayment plan or loan modification. If the foreclosure process is completed and no third party purchases the home at the foreclosure sale, the home usually becomes the property of the loan holder or servicer as part of an REO inventory. However, certain states provide the previous owners of foreclosed properties with a right of redemption that allows them to pay amounts owed to the lender and reclaim ownership. During redemption periods, the previous borrower or current occupant is allowed to remain in the residence and the REO property owner or servicer generally is not permitted to pursue activities such as evicting property residents or securing properties. Some states may have a confirmation process to complete the foreclosure process and transfer title that may also delay possession and marketing of an REO property. Typically, the redemption or confirmation period begins after the foreclosure sale and lasts from around 1 to 6 months or more. However, if properties become vacant, some state laws may permit the shortening of redemption periods and allow the REO property owners or servicers to take control of foreclosed properties. The acquisition of REO properties differed across the entities that we reviewed. When a servicer forecloses on an FHA-insured property that is not sold to a third party, the foreclosure is held in the lender’s or servicer’s name, and the lender or servicer is responsible for the property until it is conveyed to FHA. FHA requires servicers to oversee properties during redemption periods, to evict residents if properties not in redemption periods are occupied, and to perform critical maintenance on properties. 
The servicer files a claim with FHA, and FHA conducts its inspections before accepting the title. The length of time between the foreclosure sale and entry of a property into FHA’s REO inventory depends in part on state foreclosure laws as well as the actions of the loan’s servicer. According to FHA rules, a servicer needs marketable title before it can convey a property to FHA, and the title is generally considered to be marketable only after the borrower has left the home or been evicted, any redemption period has expired, and other required actions have taken place. After conveyance to FHA, the property is assigned to FHA’s contractors, which begin the process of preparing the REO property for sale. Other federally related entities that acquire REO properties take custody of and are responsible for them closer to the time of the foreclosure sale. For example, the enterprises require servicers to convey properties to them within 24 hours of foreclosure sales, while VA requires servicers to provide notice of their intent to convey properties within 15 days of foreclosure sales, although servicers have 60 days (or longer in certain jurisdictions) to provide evidence of acceptable title to conveyed properties. RHS’s REO process varies, depending on whether the property has had a guaranteed or direct loan. With direct loans, RHS takes possession of the property after the foreclosure sale and manages the entire REO process. With guaranteed loans, the lender receives title to the property and maintains, markets, and disposes of the property, and RHS oversees the process. Because RHS acquires and disposes only of REO properties related to its direct loans, we generally considered only these types of properties when discussing RHS’s REO disposition. Disposing of REO properties can involve various activities, although the disposition process is generally similar across entities. 
While a property is held as part of an entity’s REO inventory, the entity is responsible for maintenance, including cleaning, lawn care, snow removal, and security. If a property has a tenant or is otherwise occupied, eviction proceedings may need to occur before it is offered for sale. After any redemption periods expire and evictions take place, properties are usually assessed to determine their market value. The market value is used to determine the selling, or listing, price. A listing real estate broker is usually chosen to market the property publicly, generally through a multiple listing service system—a database set up by a group of real estate brokers to provide information about properties for sale. If a property does not attract interest at its initial listing price, the price can be reduced. Once a purchase offer is accepted, the sale closing process occurs, and ownership is transferred to the new owner. Since 1999, FHA has been outsourcing to private sector contractors the maintenance and disposition of its REO properties. Entities that dispose of REO properties typically use various types of contractors, including those that manage the marketing or maintenance activities for a large number of properties. These larger contractors often use subcontractors to provide specific services related to the marketing or maintenance of properties, such as listing the property for sale or cutting the lawn. In June 2010, FHA launched the third generation of its Management and Marketing (M&M) contractor program, known as M&M III. Under prior arrangements, FHA’s M&M contractors were responsible for both the maintenance and marketing of FHA’s REO properties. However, under M&M III these functions are performed by separate contractors, including maintenance contractors that are responsible for preserving properties and marketing contractors that are responsible for selling the homes. 
Under the M&M III structure, FHA also uses contractors called Mortgagee Compliance Managers that protect FHA’s interests in foreclosed properties that lenders have not yet conveyed to FHA. The M&M III structure was also meant to include an additional contractor to serve as an oversight monitor to assist FHA in overseeing its REO program’s performance, including the other M&M III contractors. FHA conducts its mortgage loan insurance programs and its REO disposition program through four regional operating locations called homeownership centers, or HOCs. The homeownership centers are located in Atlanta, Georgia; Denver, Colorado; Philadelphia, Pennsylvania; and Santa Ana, California. Each of these homeownership centers oversees FHA operations in the states within its region (see fig. 2). Under M&M III, FHA contracts with entities to provide the necessary maintenance or marketing services in multiple areas that cover several states. Within each of these contract areas, multiple entities generally will receive either maintenance or marketing contracts as a way of fostering competition among the contractors to improve their responsiveness, reduce risk, and increase net returns to FHA. FHA staff—known as government technical representatives—in each of these locations oversee the maintenance and marketing contractors, including monitoring and evaluating their performance and providing technical guidance and assistance. Other entities also use contractors to manage and dispose of REO properties, but to varying degrees. For example, officials from one of the enterprises said that the enterprise used three nationwide contractors for property maintenance, with each operating in different states. According to these officials, contractors inspect the properties, but field-based quality control employees also do property inspections. For disposition, this enterprise has its own internal management group that markets and sells REO properties using its own network of listing brokers.
The enterprise’s officials said that to help adjust capacity based on REO inventories, the enterprise had also contracted with external asset management companies that did not use its broker network. The officials noted that the enterprise recently ended its use of these contractors, as its REO inventories had declined. The other enterprise has used contractors to manage and dispose of its REO properties since 2008. According to officials, this enterprise initially used one sales management contractor but added two more in March 2011. These officials explained that each of the three contractors operated in every geographic area, allowing the enterprise to more easily change the contractor managing a property if necessary. The enterprise’s officials also said that it managed the network of service providers—such as listing brokers, property maintenance companies, and repair contractors—that its sales management contractors used, in order to ensure a consistent process and minimize potentially adverse relationships among contractors. Officials said that the enterprise primarily maintained relationships with local service providers rather than nationwide providers because markets differed at the local level. The enterprise’s oversight structure includes several different groups, including oversight monitors that review specific areas of contractor performance, individual business teams that also review performance across all contractors, and vendor oversight teams that serve as liaisons between the business units and contractors. VA and RHS use different approaches. VA staff told us that the agency used a single contractor to manage the maintenance and disposition of its entire REO inventory. This contractor uses numerous subcontractors to maintain and market properties and manages those subcontractors. According to its staff, VA wanted one company to manage everything so that it could have a single point of interaction and accountability.
RHS uses both contractors and its own staff to dispose of REO properties, according to agency officials. For its direct loans, RHS staff manage the disposition of any foreclosed properties of which it takes possession in about half of the states, while a central RHS office oversees contractors that manage and dispose of properties in the remaining states. Officials said that for several years RHS had been transitioning from using its own staff to using contractors to dispose of properties. For its guaranteed loans, RHS does not use contractors but relies on lenders that take possession of the REO properties with some RHS oversight. If a lender completes foreclosure or a deed-in-lieu of foreclosure on a loan that RHS guarantees, the lender receives title to the property and maintains, markets, and disposes of the property. As a result of the large number of loan defaults arising during the housing crisis, FHA and the other entities have generally experienced significant growth in their REO property disposition activities. As shown in the figure below, each of the entities disposed of an increasing number of REO properties during that time. RHS was not included in the figure because it had no more than 218 direct loan REO property dispositions in any quarter during the time period. Our review of FHA and other federally related housing entities found that they pursued similar goals and strategies for disposing of REO properties, including seeking to sell most properties to owner-occupants—individuals planning to occupy the homes as primary residences. But data for REO dispositions from January 2007 through June 2012 showed that FHA’s returns from selling its REO properties generally trailed the returns earned by the enterprises. This difference declined when differences in property characteristics, such as location and value, were considered.
In addition, we found that FHA, on average, took significantly longer to sell its properties than the enterprises and VA—more than 130 and 70 additional days, respectively. We also evaluated how the method used to sell properties and the type of purchaser affected FHA’s performance. The large majority of FHA’s dispositions were retail sales to owner-occupants or investors, and our analysis of FHA’s performance indicated that the agency achieved higher returns on sales to owner-occupants than on sales to investors and other buyers. However, FHA had a smaller share of owner-occupant sales than the enterprises. We also found that, in making these sales, FHA did not follow several practices that other entities used in disposing of REO properties that could potentially increase the agency’s returns, including repairing properties to increase their market value, using multiple inputs to set list prices, and using market-based information to make subsequent list price reductions. FHA’s goals for disposing of its REO properties were similar to those of the other entities we reviewed. According to staff from FHA, VA, and the enterprises, each entity aims to maximize the financial return of REO dispositions while minimizing each property’s time in inventory. Specifically, FHA’s regulatory goals for its REO program are to dispose of properties in a manner that expands home ownership opportunities, strengthens neighborhoods and communities, and ensures a maximum return to the mortgage insurance fund. Currently, the performance of FHA’s REO program is assessed against three formal goals: reducing the average number of days from acquisition to listing REO properties for sale by 2 percent from the prior fiscal year’s average, reducing the average number of days in inventory by 2 percent from the prior fiscal year’s average, and conducting at least 12 REO workshops/meetings to promote acquisition and reuse of foreclosed properties in Neighborhood Stabilization Program areas.
Although FHA does not have a formal goal addressing the expected return on its REO sales, FHA staff told us that they attempted to improve the solvency of FHA’s insurance fund by targeting a gross execution rate—that is, a property’s sale price as a share of its assessed value, or list price—of 100 percent, although they said they used a specific target of at least 80 percent when evaluating homeownership center performance. VA staff said that the agency advises the contractors that market its REO properties that their execution rate target is 80 percent—based on a property’s net sales proceeds rather than gross sales price—but offers additional compensation incentives for a rate of 88 percent or better. One of the enterprises has a procedural manual for REO sales that notes that its goal is to sell properties for “as close to 100 percent of the list price as possible.” FHA and the other entities we reviewed pursued similar strategies to achieve their goals. Staff from these entities explained that they attempted to sell their REO properties primarily via retail sales either to owner-occupants or to investors, who would likely renovate them for resale or rentals. Retail REO sales usually are conducted by private real estate brokers that market the properties the same way they market other properties—for example, listing them in a multiple listing service. Officials from these entities explained that they pursued retail sales because such sales produced higher net returns than other methods of selling the properties, such as selling multiple properties at once—bulk sales—or selling individual or multiple properties at auctions. Officials of some of these entities said that they generally pursued these alternative disposition strategies only after retail sales had proven unsuccessful and that often such sales involved low-value properties or properties with problems that made sales through retail methods difficult.
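The execution-rate measures described above reduce to simple ratios, and small differences in them compound across a large inventory. The sketch below (in Python, using entirely hypothetical dollar figures and sales volumes, not figures drawn from FHA’s or the other entities’ data) illustrates both calculations.

```python
# Illustrative sketch only; all figures are hypothetical, not agency data.

def gross_execution_rate(sale_price, value):
    """Gross sale price as a share of a value benchmark
    (independently assessed value or list price)."""
    return sale_price / value

def net_execution_rate(sale_price, selling_costs, value):
    """Net sales proceeds (sale price minus selling costs) as a share
    of the same benchmark."""
    return (sale_price - selling_costs) / value

def added_revenue(properties_sold, avg_value, rate_gain):
    """Extra proceeds from raising the net execution rate by rate_gain
    (a fraction, e.g. 0.01 for one percentage point) across a portfolio."""
    return properties_sold * avg_value * rate_gain

# A property assessed at $100,000 that sells for $85,000 with $5,000 in
# selling costs: 85 percent gross, 80 percent net execution rate.
gross = gross_execution_rate(85_000, 100_000)
net = net_execution_rate(85_000, 5_000, 100_000)

# At a hypothetical 100,000 annual sales averaging $90,000 in assessed
# value, one percentage point of execution rate is worth $90 million.
one_point = added_revenue(100_000, 90_000, 0.01)
```

Because the rate is computed per property but applied across tens of thousands of dispositions a year, even a one-point gain translates into tens of millions of dollars.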
Officials from FHA and other federally related housing entities said that they preferred retail sales not only because of their higher returns, but also because such sales generated more purchases by owner-occupants. Further, some of these officials, as well as representatives of some nonprofit organizations, said that owner-occupant sales were better for stabilizing communities and protecting home values. Owner-occupants are assumed to have more incentive to maintain their properties than investor owners that may be absent or focused primarily on maximizing rental income. They also said that owner-occupant sales generally yielded a higher financial return than sales to investors. FHA staff noted that selling to owner-occupants helped to achieve one of HUD’s overall agency goals—to expand home ownership opportunities—and said that FHA required sales records for REO properties to reflect the type of buyer. To further promote sales to owner-occupants, FHA and some of the other entities often use methods such as exclusive access periods. For example, FHA accepts offers on properties that qualify for FHA insurance only from owner-occupants for the first 30 days of the listing. In some cases, FHA also disposes of REO properties at a discount through certain programs intended to further HUD’s mission goal of creating strong, sustainable, inclusive communities and quality affordable homes. These discount sale programs represented only a small fraction of FHA’s total REO dispositions from January 2007 through June 2012. They include the Asset Control Area program, discount sales to nonprofit organizations and government entities, the Good Neighbor Next Door program, and the $1 Home Sale program. Asset Control Area.
Properties located in areas that HUD has designated for revitalization based on the area’s household incomes, home ownership rates, and level of FHA-insured mortgage foreclosure activity can be offered for sale to municipal government entities or approved nonprofits through this program. Properties valued at over $25,000 are offered at discounts of at least 50 percent of the appraised value. Properties valued at less than that amount may be sold for as little as $100. Discount sales to nonprofit organizations and government entities. Qualified nonprofit organizations and government entities may also purchase FHA properties at a discount. Discounts range from 10 to 30 percent, depending on the property’s FHA insurability status, location, and other factors. For example, one initiative provides grantees participating in the Neighborhood Stabilization Program exclusive access to newly conveyed REO properties that are located in their designated areas for 2 days after the grantee is notified that an appraisal has been obtained. Good Neighbor Next Door. Under this program, properties in revitalization areas can also be offered to police officers, teachers, fire fighters, and emergency medical technicians at 50 percent off the list price. $1 Home Sale. Sales of “aged inventory” (properties listed for sale for more than 6 months) can also be made for $1 plus closing costs to local governments to support local housing and community development initiatives. FHA’s returns from selling its REO properties generally have trailed the returns of the two enterprises but its performance has improved recently. FHA also took longer to dispose of properties than the enterprises but showed recent improvement in this area.
Each of these entities achieved better results when conducting retail sales of individual properties than when using other disposition methods and when selling properties to owner-occupants rather than to investors and other buyers. All of the entities we interviewed, including private REO servicers, assessed their performance in disposing of REO properties using a variety of metrics. For example, execution rates gauge success in maximizing a property’s sale price. They can be calculated by comparing either a property’s gross sales price or net sales proceeds with some measure of the property’s value, such as an assessed value from an independent appraiser or the price at which the property was listed for sale. Properties that sell for larger percentages of the assessed value or listed price generate higher returns for the seller. We analyzed data from REO properties disposed of from January 2007 through June 2012 to determine FHA’s and the enterprises’ net execution rates based on independently assessed value, which represents the entities’ net sales proceeds as a percentage of those values. Our analysis showed that the aggregate net execution rate for FHA was lower than the enterprises’ rates by 4 and 6 percentage points, depending on the enterprise (see fig. 4). To account for the possibility that the performance differences were attributable to differences in the property characteristics of each of these entities’ REO inventories, we developed a regression model to control for the effects on the net execution rates of certain property characteristics—such as location, value, and local real estate market conditions—that were beyond the entities’ control. Based on this analysis, we estimate that FHA’s aggregate independently assessed value net execution rate still trailed that of the enterprises by 2 and 5 percentage points.
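The regression itself is not reproduced here, but the idea of controlling for property characteristics can be illustrated with a simpler device. In the hypothetical sketch below (all numbers invented), stratifying sales by a single characteristic—property-value tier—and averaging the within-tier gaps stands in for a regression adjustment: the raw FHA-versus-enterprise gap shrinks once the mix of property types is held fixed, the same pattern the report found after adding controls.

```python
# Hypothetical data: (entity, value_tier, net_execution_rate) per sale.
# The mix differs by entity (the FHA rows skew toward low-value
# properties), so the raw gap overstates the adjusted gap.
from statistics import mean

sales = [
    ("fha", "low", 0.78), ("fha", "low", 0.80), ("fha", "low", 0.79),
    ("fha", "high", 0.90),
    ("ent", "low", 0.82),
    ("ent", "high", 0.91), ("ent", "high", 0.93), ("ent", "high", 0.92),
]

def entity_mean(entity, tier=None):
    """Mean net execution rate for one entity, optionally within a tier."""
    return mean(r for e, t, r in sales
                if e == entity and (tier is None or t == tier))

# Unadjusted comparison: simple difference in mean execution rates.
raw_gap = entity_mean("fha") - entity_mean("ent")

# Adjusted comparison: difference within each value tier, then averaged,
# so the property mix no longer drives the result.
adjusted_gap = mean(entity_mean("fha", t) - entity_mean("ent", t)
                    for t in ("low", "high"))
```

With these invented numbers the raw gap is about 7.8 percentage points but the adjusted gap is only 2.5, mirroring how the report’s estimated gap narrowed once property characteristics were controlled for.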
Similarly, FHA’s aggregate net execution rate based on initial list prices for the entire period was worse than enterprise 2’s and enterprise 1’s by 3 and 6 percentage points, respectively, as shown in figure 5. After controlling for the effects of certain property characteristics—as we did for the independently assessed value net execution rate—we estimate that FHA’s net execution rate based on initial list prices was less than that of enterprise 2 and enterprise 1 by 2 and 4 percentage points, respectively. When comparing aggregate execution rate performance based on gross sales prices rather than net sales proceeds, the difference between FHA and the enterprises widened slightly for both independently assessed values and initial list prices. Both FHA’s net and gross execution rates improved relative to the enterprises during the first half of 2012, and for the list price ratio, FHA’s performance was equal to that of the enterprises. Although the difference in aggregate net sales returns between FHA and the enterprises over the entire period was 6 percentage points or less, a small improvement in performance can yield substantial amounts of additional revenue because FHA disposes of tens of thousands of properties each year. For example, if FHA had achieved a net execution rate based on independently assessed value similar to enterprise 2’s, it would have received over $400 million in additional revenue for 2011 alone. And if FHA’s aggregate net execution rate for 2011 had been 1 percentage point higher, FHA would have received over $90 million in additional revenue. FHA took significantly longer to sell its properties than the enterprises and VA, but the differences decreased in the first half of 2012. For each entity, we analyzed the time in days that elapsed between the date of the foreclosure sale of the property and the closing date of the REO sale.
Based on this analysis, FHA took on average about 340 days from the foreclosure sale to sell its REO properties based on dispositions from January 2008 through June 2012, compared with just over 200 days for the enterprises and about 270 for VA (fig. 6). This difference between FHA and the enterprises in average REO timelines persisted after controlling for the average effects of certain property characteristics such as location, value, and local real estate market conditions. Longer disposition times can create additional costs for items such as taxes, insurance, homeowners’ association fees, maintenance costs, and other expenses (holding costs) and leave properties exposed to an increased risk of vandalism and other property damage. We found that if FHA’s average number of days from foreclosure sale to REO disposition had equaled the enterprises’ averages, it could have avoided around $600 million in extra holding costs for 2011 alone. Furthermore, the negative impact of vacant properties on neighborhoods and property values has been identified in prior GAO reports and other sources, underscoring the importance of minimizing the amount of time required to dispose of properties after the foreclosure sale. The difference in the length of time FHA took to dispose of REO properties relative to the other entities was largely attributable to differences in one specific period—the time from the foreclosure sale until the date of the initial REO valuation. FHA and the other organizations obtain the initial value assessment once a property is ready to be listed for sale and marketed to potential buyers. The assessment is typically made after the expiration of any redemption period after the foreclosure sale and any necessary eviction action, as well as cleaning, maintenance, and repair of the property. FHA’s average for this period was 184 days, while the enterprises’ averages were 69 (enterprise 1) and 66 days (enterprise 2).
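The timeline measure underlying these comparisons is a straightforward date difference, split at the initial valuation. A minimal sketch follows, with invented dates (not actual property records) chosen so that the two segments roughly match FHA’s reported averages of 184 days to valuation and 168 days from valuation to sale.

```python
from datetime import date

def days_between(start, end):
    """Calendar days elapsed between two dates."""
    return (end - start).days

# Invented example dates for a single hypothetical property.
foreclosure_sale = date(2011, 1, 10)
initial_valuation = date(2011, 7, 13)   # property conveyed, cleaned, and valued
reo_closing = date(2011, 12, 28)        # REO sale closes

to_valuation = days_between(foreclosure_sale, initial_valuation)   # 184 days
valuation_to_sale = days_between(initial_valuation, reo_closing)   # 168 days
total_disposition = days_between(foreclosure_sale, reo_closing)    # 352 days
```

Every extra day in this window accrues holding costs such as taxes, insurance, and maintenance, which is why the report prices the timeline gap with the enterprises at around $600 million for 2011.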
As discussed previously, FHA acquires its REO properties from mortgage servicers later after the foreclosure sale than the enterprises and other entities do. Having multiple entities complete its postforeclosure sale activities, such as eviction, could be one reason that FHA’s average time from foreclosure to initial valuation was longer than the times for the other entities. As we stated in our 2002 report on FHA’s process for selling REO properties, the enterprises, VA, and RHS have one entity that is responsible for the custody, maintenance, and sale of foreclosed properties, but FHA divides these responsibilities among its mortgage servicers and REO contractors, all of which operate largely independently of one another. This divided approach to property custody could delay the initiation of critical steps necessary to sell REO properties quickly. As a result, in 2002 we recommended that HUD establish unified property custody as a priority for FHA and that it determine and implement the optimal method for establishing unified property custody. In its response at the time of the report, HUD agreed that a unified custody approach might streamline processes and oversight, reduce holding time, and increase net return. HUD also said that it intended to continue research to determine the feasibility of unified custody within the framework of existing statutory requirements and explore statutory changes that would increase efficiencies in the property disposition program. However, HUD subsequently determined that it would not be advisable for the agency to establish unified property custody as an objective, and it did not implement our recommendation.
The analysis in this report once again highlights the need for FHA to consider whether the potential benefits from unified property custody, such as shorter REO disposition timelines and lower holding costs, outweigh any costs and challenges associated with acquiring REO properties from servicers closer to the foreclosure sale date. Once the initial value assessment was completed, it took FHA an average of 168 days to sell a property, which was more comparable to the enterprises’ average timelines of 142 and 132 days. And in the first half of 2012, FHA had a shorter average time from initial valuation to completed REO sale than the enterprises, as shown in figure 7. Some of this difference may be due to the time taken to list a property for sale. From January 2008 through June 2012, FHA took an average of 32 days to list a property for sale after receiving the value assessment, compared to 13 days for enterprise 1 and 24 days for enterprise 2. In the first half of 2012, however, FHA’s average had fallen to 18 days. Similarly, FHA took slightly longer to sell its properties once they were listed—an average of 136 days compared to 119 for enterprise 1 and 118 for enterprise 2. But in the first half of 2012, FHA’s average time from listing to sale was shorter than the enterprises’. FHA officials indicated that the agency’s lagging performance prior to 2012 may have been due, at least in part, to limitations in its ability to adjust the level of homeownership center staff resources in response to increases in inventory in the years before 2012. FHA’s recent performance relative to the enterprises may also reflect continued progress in implementing its new M&M III contractor program structure that began in the middle of 2010. We tested certain factors that were to some degree under FHA’s and the enterprises’ control to explore whether these factors helped to explain the differences in REO disposition performance.
We included these factors (time from foreclosure sale to REO disposition, ratio of initial list price to initial valuation amount, disposition method, and buyer type) in additional regression models to determine whether their inclusion significantly changed the results. These models indicated that the length of time from the foreclosure sale to the REO sale was associated with FHA’s lower net execution rate relative to the enterprises. Specifically, when our regression models included the time from the foreclosure sale to the REO sale, FHA’s relative performance deficit in terms of the independently assessed value net execution rate was eliminated completely. Such a result indicates that the longer time FHA requires to acquire and sell properties could be an important factor in explaining differences in execution rate performance. However, an actual causal relationship is difficult to isolate and prove, as additional factors—such as deteriorated property conditions or variations in market conditions within ZIP code areas—that were not incorporated into our regressions could also explain both the performance difference and the difference in the average total disposition time between FHA and the enterprises. Consistent with their stated preference for retail sales, the predominant share of FHA’s and the enterprises’ REO property dispositions was retail sales of individual properties to owner-occupants or investors (fig. 8). From January 2007 through June 2012, retail sales were about 97 percent of FHA’s dispositions, about 99 percent of enterprise 1’s, and 91 percent of enterprise 2’s. Enterprise 2 had the highest share of bulk and auction sales over the time period, representing almost 7 and 2 percent of its total dispositions, respectively. Less than 4 percent of FHA’s dispositions from January 2007 through June 2012, on average, were for programs related to its housing mission goals (e.g., discounts and donations).
Dispositions through the enterprises’ programs that market properties to nonprofits and public entities (e.g., discounts and donations) accounted for less than 0.5 percent of total dispositions from January 2007 through June 2012. Retail sales also generated higher returns than other disposition methods. For example, FHA’s independently assessed value and initial list price execution rates (net sales proceeds as a share of property values or list prices) were both 30 percent higher for its retail sales than for its mission program dispositions for sales from January 2007 through June 2012. The enterprises, particularly enterprise 2, also generally had higher execution rates for retail sales than other disposition methods over the same period. Specifically, enterprise 2’s independently assessed value execution rate for retail sales was 52 percent higher than for its auction sales, 95 percent higher than for its bulk sales, and 62 percent higher than for its nonprofit and public entity sales. For enterprise 1, execution rates for retail sales were 36 percent, 40 percent, and 1 percent higher, respectively, than for sales using these other methods. We also examined FHA’s performance in selling REO properties to different types of buyers. Our analysis of data for all dispositions from January 2007 through June 2012 showed that FHA achieved higher returns on sales to owner-occupants than on sales to investors and other buyers, although it had a smaller share of these types of dispositions than the enterprises. FHA’s independently assessed value and initial list price net execution rates both were 25 percent higher for sales to owner-occupants than for sales to nonowner-occupant investors. The enterprises experienced smaller return premiums—ranging from 10 to 19 percent—from owner-occupant sales as measured by these sales return measures. FHA’s and the enterprises’ overall returns on owner-occupant sales were generally comparable.
Specifically, FHA’s net execution rate based on independently assessed value was the same as enterprise 1’s and 4 percentage points less than enterprise 2’s. Based on initial list price, FHA’s net execution rate was 2 percentage points less than enterprise 1’s and the same as enterprise 2’s. Yet FHA sold a smaller share of its properties to owner-occupant buyers than the enterprises. Specifically, about 58 percent of FHA’s sales from January 2007 through June 2012 were to owner-occupant buyers, compared to 63 and 68 percent for the enterprises. FHA’s share of owner-occupant sales increased to about 64 percent in the first half of 2012. Figure 9 shows FHA’s and the enterprises’ percentage of REO property sales to owner-occupants, based on our analysis of data from sales of REO properties from January 2007 through June 2012. Properties that FHA sold to owner-occupants also had higher average sales prices and were sold more quickly than properties that were sold to other buyers. For example, the average price of FHA’s sales to owner-occupants from January 2007 through June 2012 was more than $77,000, compared to less than $50,000 for sales to nonowner-occupants. Likewise, it took FHA an average of 42 fewer days to dispose of properties sold to owner-occupants during the same time period than were needed to complete sales to investors and other buyers, as measured from the foreclosure sale date. The average sale prices of the enterprises’ owner-occupant sales were around twice the average amount of their sales to investors and other buyers, but the timelines were similar for both types of sales. Results from our regression models also indicated that the type of buyer was associated with FHA’s lower independently assessed value net execution rate—which gauges success in maximizing a property’s sale price.
When we controlled for differences in the share of sales to owner-occupants in our regression models, FHA’s performance deficit relative to the enterprises for independently assessed value net execution rate was reduced. However, this association does not necessarily mean that this factor caused the performance difference. For example, additional factors—such as deteriorated property conditions or the existence of certain amenities that might attract owner-occupants—that were not controlled for in our regressions could also explain the differences in performance and share of owner-occupant sales between FHA and the enterprises. Based on our analysis of all dispositions from January 2007 through June 2012, the enterprises repaired more properties than FHA did and earned higher returns on repaired properties than on properties they did not repair. Our review of data for all REO dispositions during this period showed that the enterprises spent at least $1,000 on repairs for 29 percent (enterprise 1) and 23 percent (enterprise 2) of the properties they sold. FHA, however, spent at least that amount on only about 5 percent of its properties. Based on our analysis, we found that properties repaired by the enterprises netted higher independently assessed value net execution rates, after accounting for repair costs, and also achieved higher list price net execution rates. Specifically, the enterprises’ net execution rates based on independently assessed value—including the cost of repairs—were 3 to 4 percentage points higher over the entire period for properties with at least $1,000 in repair costs than for properties with repair costs less than that amount. The difference for the net execution rate based on list price over the entire period was 3 percentage points for each of the enterprises.
However, the enterprises’ properties with at least $1,000 in repair costs sold an average of 33 to 47 days more slowly than properties with lower repair costs, as measured from the initial valuation date. These differences may reflect the time required to complete repairs or a greater willingness to market the property for a longer period. Our analysis also showed that FHA netted higher returns on sales of REO properties that were in better condition—that is, that met minimum property standards to qualify for FHA insurance. To be eligible for an FHA-insured loan, a property must be in a condition and location free of known hazards and adverse conditions that could affect occupants’ health and safety or the structural soundness of any improvements or that could impair the use and enjoyment of the property. For example, FHA requires properties to have adequate heating, hot water, and electricity. Based on our analysis of data for all REO dispositions from January 2007 through June 2012, FHA received higher sales returns for properties that were eligible for FHA insurance (eligible) than it did for properties that were deemed ineligible because their condition did not meet these standards (ineligible). FHA’s independently assessed value net execution rate was 12 percentage points higher for eligible properties than for ineligible properties for all dispositions during this period. Furthermore, eligible properties sold faster than ineligible properties, which took an average of 88 additional days—26 percent longer—from the foreclosure sale date to sell. FHA staff told us that while they generally did not repair REO properties to increase the sale value, some properties are repaired to address health and safety concerns and to preserve the property’s condition. FHA officials also noted that while the agency might conduct these types of repairs when necessary, FHA does not repair properties specifically to meet its minimum property standards. 
FHA officials explained that FHA had a long-standing policy of not repairing properties. They said that the agency does not conduct repairs because of concerns about having to oversee contractors that perform the work and HUD's inability to obtain volume discounts on replacement appliances or other home fixtures because of the agency's preference for using small contractors. They also said that having to comply with HUD procurement guidelines and the Davis-Bacon Act made it more difficult for FHA to engage in construction projects to repair properties and increase sale returns. However, officials noted that in 2011 FHA had begun a small pilot program in its Atlanta homeownership center to assess the impact of repairs on properties' marketability. This program selected favorable properties—that is, relatively high-quality properties in a few counties in the Atlanta metropolitan area—to repair, and officials indicated that results had generally been positive. The pilot involved about 50 completed sales and around 80 additional properties as of September 2012, according to FHA officials. However, the officials also said that FHA had not analyzed the sale prices of the repaired properties to determine whether it was achieving higher returns than it could achieve without conducting repairs. Additionally, one FHA official expressed concern that the existing policy not to repair properties prevented FHA from capturing the additional returns that can come from selling repaired properties for higher prices. Instead, the official said that selling properties without making repairs intended to increase the sale value allowed investors to purchase them, make the repairs, and capture the additional returns. Similarly, VA and RHS staff said that their agencies generally did very few repairs to REO properties and that most generally were sold without repairs intended to increase the sale value, largely because repairs did not generally result in higher returns.
In some cases, VA has conducted minor (cosmetic) repairs in order to improve returns, according to VA staff. However, staff noted that in general the costs associated with making these repairs have not been fully recovered by the eventual sale proceeds. They further noted that some cosmetic repairs—such as fixing windows, painting, or installing new carpeting—may increase sale returns, but major repairs often reduced returns, at least in part because of the additional costs of repairs and holding the property longer. The VA officials told us that a few years ago they conducted a small case study of repair work done for six properties and found no positive result from doing the repairs. RHS officials also said that RHS did not make repairs for the majority of its direct loan REO properties unless repairs were needed for safety. For properties in its guaranteed loan program, RHS generally has not had lenders complete cosmetic repairs but may consider repairs to increase returns on a case-by-case basis. RHS officials also said that lenders completed repairs on guaranteed properties for safety reasons and to preserve and protect the property. In contrast to FHA, VA, and RHS direct loan properties, the enterprises and the three private mortgage servicers we contacted did make case-by-case determinations on conducting cosmetic repairs to improve returns, increase the likelihood of an owner-occupant purchase, or meet neighborhood standards. Officials from these entities said that they did repairs on between about 20 and 40 percent of their REO properties, although staff from one of the private mortgage servicers indicated that they had been repairing up to 80 percent of their properties due to the lengthy foreclosure process. Officials from one of the enterprises told us that it had been repairing more REO properties as a way to improve the impact on neighborhoods as well as to earn the highest possible return.
For example, the officials said that about 80 percent of repaired properties were sold to owner-occupants compared to about 50 percent of unrepaired properties. They also explained that repair decisions were based on numerous factors, including neighborhood conditions, potential buyers, and the costs of the repairs. As the length of time that an unsold property remains on the market increases, the enterprise may reassess the repair decision to see if performing repairs could add value and facilitate a sale. Officials also said that the enterprise repaired properties based on expected returns, regardless of value. The other enterprise also makes decisions on whether and to what extent to do repairs on a property-by-property basis, primarily to increase returns, according to staff. This enterprise’s staff said that they viewed repairing properties as a way to maximize the properties’ value and increase the chances of selling them to owner-occupants. They said that the enterprise also tried to ensure that properties conformed to neighborhood standards and were competitive with other properties for sale in the area. In some areas where the potential for vandalism was high, they said that the enterprise would be less likely to make repairs early in the REO process but would complete them just prior to closing. Among the goals that FHA staff described for the agency’s REO disposition program were maximizing net returns to the mortgage insurance fund and increasing home ownership, but FHA may be failing to take advantage of the opportunity for increased financial returns by not repairing more properties. FHA’s policy to limit repairs only to those related to health and safety concerns may in part explain why it sells fewer properties to owner-occupants than is the case for the enterprises. 
Repairing properties only to address health and safety concerns would not necessarily result in a property that meets standards for FHA eligibility, and as a result FHA may be selling fewer properties to owner-occupants, many of whom may be interested in FHA loans. As we have shown, FHA's sales of eligible properties yield higher returns than those that are not in an eligible condition. If FHA repaired ineligible REO properties to make them eligible, the agency might be able to realize higher sales returns, avoid the holding costs related to the longer disposition time frames for ineligible properties, and further its mission of increasing home ownership. FHA uses only one input to set list prices—an appraisal, or professional appraiser's estimate of the property's fair market value based on market research and analysis as of a specific date. Other entities use additional, generally accepted methods for establishing a listing price for their properties, including obtaining an estimate from a real estate broker—known as a broker's price opinion (BPO). BPOs are estimates of the market value of a particular property prepared by a real estate broker, agent, or sales person. In addition, market values for properties can also be estimated using automated valuation models (AVMs), computerized programs that estimate property values using proprietary and public data, such as tax records and information kept by county recorders and multiple listing services, and other real estate records. FHA's marketing contractors set REO property list prices at the appraisal value, although these contractors also have access to BPOs. FHA's regulations require the use of an independent appraiser when setting a price for an REO property, and FHA staff told us that properties typically are listed at the appraised value.
Based on our review of property dispositions from January 2007 through June 2012, the list prices of more than 98 percent of FHA’s properties equaled the appraised market value. However, FHA field staff told us that the agency’s marketing contractors often also ordered BPOs to evaluate and review list prices and were required to obtain BPOs when requested by FHA. These additional valuations are not used to change the list price; however, FHA staff said that the listing brokers used BPOs to evaluate and support list price reductions when properties did not sell. Staff from one of FHA’s four homeownership centers noted that its marketing contractors often have properties’ listing brokers complete a BPO during the listing process to assess the accuracy of the appraisal value used in setting the list price. These staff said that if the BPO value differed greatly from the appraisal value, the marketing contractor might discuss the valuation with the appraiser and request that the value be reconsidered. Although they said that they could only recall one or two instances of appraisers changing their valuations after these discussions, the staff at this office considered comparing the appraisal value to a BPO value to be an effective practice on a case-by-case basis. A review of the plans of selected marketing contractors showed that contractors were to obtain BPOs during the listing process and also when considering subsequent list price reductions. For example, some contractors’ marketing plans called for an initial BPO when the property was listed for sale and every 30 days thereafter as a way to evaluate the appraised value and appraiser performance and to analyze market data. The contracts between FHA and its marketing contractors also state that the contractor must obtain an independent BPO when directed by FHA staff. 
In contrast, the enterprises, VA, RHS, and the three private mortgage servicers we interviewed all use at least two methods—either an appraisal and BPO or two BPOs—to estimate the market value of their REO properties as part of determining a list price. The enterprises also use an AVM to provide an additional value estimate and incorporate additional information and analysis beyond the supplemental valuation information into their list price decisions. For example, the enterprises produce list price guidance based on factors such as location, market conditions, comparable sales, REO sales trends, and input from listing agents. Following this guidance, the enterprises may set a list price above or below the estimated market value based on whether the property is located in a depreciating or appreciating market. For property dispositions from January 2007 through June 2012, one enterprise set the initial list price for less than 1 percent of its properties at the independent BPO value, and 28 percent were within 5 percent of the BPO value. The other enterprise set initial list prices at the appraised market values for fewer than 10 percent of the REO properties that it disposed of during this period, and 28 percent of initial list prices were within 5 percent of the appraised value. Using multiple information sources could improve the accuracy of FHA’s market value estimates and list prices. Officials from other federal housing agencies and the enterprises said that multiple inputs increased accuracy by providing a range of independently valued assessments. Providers of appraisals and BPOs use different approaches to valuing properties and combining the two methods produces the best results, according to officials from some entities. For example, appraisals are to be conducted by trained and certified independent professionals with no interest in the outcome of the sales, but these appraisals focus on past sales and listings and may not reflect current price trends. 
BPOs, although conducted by brokers who may have an interest in the outcome, may reflect more knowledge of the properties and local markets. In addition, appraisers may have difficulty finding comparable property sales in some rural areas, and officials said that appraisals are more costly to obtain than BPOs. If estimates from different sources vary, entities reconcile them to produce a market value estimate that reflects a broader and more diverse base of information and analysis than an estimate from a single source. Our analysis of the enterprises’ reconciled value estimates—which incorporate all of their market value inputs such as appraisals, BPOs, and AVMs—indicated that the reconciled values generally were lower than independent value assessments reflecting a single source such as an appraisal or BPO and accordingly reflected final sale prices somewhat more accurately. For all dispositions from January 2007 through June 2012, the enterprises’ reconciled value estimates were closer to gross sale prices than their independent value assessments were by 1 and 7 percentage points overall. The use of multiple valuation methods could help FHA more accurately estimate the market values of its REO properties, increasing the likelihood of selling properties more quickly and at prices that best reflect current market conditions. FHA officials indicated that BPOs and AVMs could reduce costs and increase the accuracy of FHA’s market value assessments by better reflecting recent market trends. A senior official from FHA’s single family housing program also said that using AVMs could improve FHA’s ability to identify the most appropriate marketing and disposition strategies for certain properties by providing more accurate and timely market value estimates. In early 2013, FHA’s Santa Ana homeownership center began a pilot program to evaluate the use of AVMs in validating appraised market values, according to agency officials. 
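The enterprises' reconciliation logic is proprietary and not described in the report; purely as a sketch, one simple way to combine several independent estimates into a single reconciled figure is to take their median:

```python
import statistics

# Sketch only: combine whatever independent valuations are available for a
# property (appraisal, BPO, AVM) into one reconciled estimate via the median.
# The enterprises' actual reconciliation method is not public.
def reconciled_value(*estimates):
    """Median of the non-missing market value estimates."""
    available = [e for e in estimates if e is not None]
    return statistics.median(available)

# Hypothetical inputs: appraisal $210,000, BPO $195,000, AVM $200,000.
value = reconciled_value(210_000, 195_000, 200_000)  # 200000
```

A median damps the effect of a single outlying estimate; a weighted average that favors the most recent or most reliable source would be an equally plausible choice.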
The officials explained that the pilot program uses a model that incorporates results from multiple AVMs to assess the independent appraisals. They said that the model has helped to identify opportunities for FHA to increase list prices based on market analysis. The officials also said that they were considering other options to establish accurate list prices and reduce risks from appraised market values that were unnecessarily low relative to market conditions. They stated that FHA was working with HUD’s Office of General Counsel (OGC) to determine if the regulatory requirement that list prices be based on an appraisal allowed them to be based on multiple sources that include an appraisal rather than solely on the appraised market value. FHA generally did not take into account market conditions when reducing the list prices for REO properties that do not sell. FHA’s marketing contractors determine when and by how much the list prices should be reduced. FHA’s marketing contractors create plans for each of the geographic contract areas in which they operate that describe how they intend to market and sell FHA properties and submit these plans to FHA’s homeownership centers for approval. The marketing plans include a schedule identifying time frames and percentage thresholds for reducing list prices. For example, a schedule might indicate that for properties that have been listed for sale for between 30 to 60 days, the list price should be set at 90 percent of appraised value. These schedules vary by marketing contractor and FHA homeownership center. For example, the schedules’ quantitative thresholds for the amount and timing of price reductions can differ, although most plans use one of two standard amounts. Some schedules describe a reduction of “up to” a certain percentage of the appraised value or current list price, while others specify that properties will be listed for an amount “no less than” a percentage of the appraised value or current list price. 
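A price reduction schedule of the kind these marketing plans describe amounts to a simple lookup from days on market to a share of the appraised value. The thresholds below are hypothetical; actual schedules vary by contractor and homeownership center:

```python
# Hypothetical schedule: (minimum days on market, share of appraised value).
# The 30-to-60-day tier at 90 percent mirrors the example in the text; the
# other tiers are invented for illustration.
SCHEDULE = [(0, 1.00), (30, 0.90), (60, 0.85), (90, 0.80)]

def scheduled_list_price(appraised_value, days_on_market):
    """List price implied by the schedule for a given listing age."""
    factor = SCHEDULE[0][1]
    for min_days, share in SCHEDULE:
        if days_on_market >= min_days:
            factor = share
    return appraised_value * factor

# A property appraised at $150,000 and listed for 45 days would be
# repriced at 90 percent of the appraised value.
price = scheduled_list_price(150_000, days_on_market=45)
```

Applying such a table mechanically is what allows reductions to proceed without case-by-case review, which is precisely the practice the Denver center's market-based approach departs from.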
Our analysis of each of the 23 marketing plans used by FHA's marketing contractors showed that all but one used a schedule for price reductions. However, FHA lacks a clear and consistent policy for how price reductions should be conducted, allowing each of its homeownership centers to approve marketing contractors' plans for reducing list prices. FHA headquarters and homeownership center officials whom we interviewed differed on the extent to which price reductions required review and approval by FHA staff. Officials from FHA's headquarters said that marketing contractors had to provide supporting documentation to justify why a price reduction was necessary and receive prior approval from homeownership center staff for all proposed reductions, even if they were following a schedule. However, officials we spoke with at each of the centers said that their marketing contractors typically did not need and did not obtain prior review and approval from FHA staff as long as they followed the approved price reduction schedules in their marketing plans. They added that marketing contractors only needed to provide documentation to FHA homeownership center staff and receive homeownership center approval for price reductions that exceeded marketing plan thresholds. Further, staff from homeownership centers described to us different practices that they followed when considering price reductions in excess of marketing plan schedules. For example, the Atlanta and Denver offices allowed exceptions to the schedule for individual properties, but the Santa Ana office did not allow any price reductions in excess of the schedule, according to staff. Analysis of FHA's REO property dispositions indicated that marketing contractors generally followed the price reduction schedules systematically when reducing properties' list prices.
To assess the extent to which list price reductions on FHA’s properties followed these schedules, we analyzed list price data from FHA’s REO property dispositions from June 2011 through June 2012 and compared the price changes to the schedules in contractors’ marketing plans. Based on this analysis, we found that almost half of the properties that FHA sold over the period had at least one price reduction. Of these, about 75 percent of the initial reductions were for a scheduled amount. Most price reduction schedules based this amount on one of two specific percentages. Only one of FHA’s four homeownership centers—Denver—preferred its marketing contractors to base list price reductions on evaluations of market conditions rather than on a schedule. Denver center officials said that they encouraged marketing contractors to base price reduction decisions on evaluations of individual property-level market data. The Denver center also requires its marketing contractors to provide supporting documentation for all price reductions on REO properties, and its staff review the documentation for a small sample of these reductions, according to officials. Furthermore, one contractor’s price reduction schedule for the Denver center included a threshold range for price reductions rather than a specific percentage. One marketing contractor in the region did not have a price reduction schedule in its marketing plan and instead analyzed market conditions, the appraisal, and other information to determine a new list price. None of the marketing contractors in the other centers used a range for price reductions or had marketing plans that did not include a price reduction schedule. Officials from one homeownership center indicated that automated reductions were easier, quicker, and required fewer resources. Those from another center noted that following such reductions helped to find the right price in a structured fashion. 
In contrast, officials from the Denver center identified several disadvantages to using systematic price reductions and said that they attempted to have the schedules removed from contractors' marketing plans at the beginning of the M&M III program. However, they explained that officials at FHA headquarters at that time strongly resisted the change. Although differences in disposition performance cannot be attributed solely to pricing practices, the Denver homeownership center performed better both in terms of sales returns and speed of sales than either the Philadelphia or Atlanta centers even when we controlled for regional differences, as we discuss later in the report. And compared with the enterprises' performance in the states where each center operates, the Denver center fared more favorably than did the other centers, which lagged the enterprises. While homeownership centers generally reduced REO properties' list prices with similar frequency, some differences existed in the degree to which contractors' price reductions followed marketing plan schedules. Based on our analysis of data on all property dispositions from June 2011 through June 2012, each of the homeownership centers reduced prices on close to the overall average of 47 percent of properties. However, the Denver center reduced prices by scheduled amounts less frequently than the other centers, consistent with its preference that contractors base price reduction decisions on evaluations of market conditions. For example, almost 90 percent of the Santa Ana center's initial price reductions were for a scheduled amount, compared with 58 percent for the Denver center. The Atlanta and Philadelphia centers had figures of 81 percent and 76 percent, respectively. Other federally related housing entities with REO inventories generally based their decisions to reduce prices on evaluations of property-level information and market conditions.
According to officials from the enterprises and VA documents that we reviewed, these entities used individual assessments of market conditions rather than predetermined schedules when considering the timing and amount of list price revisions. These entities also defined thresholds to identify when reductions required additional levels of approval. RHS follows a schedule for its direct loan program, but agency officials acknowledged limitations with this approach, including a lack of flexibility. Market participants that we interviewed, including real estate brokers, an industry consultant, and a nonprofit organization, identified disadvantages with systematic price reductions, such as the potential for interested buyers to adjust the timing and amount of their bids in anticipation of a discounted price. The limited use of price reduction schedules may provide certain advantages by establishing clear benchmarks for determining when to evaluate a property's market situation and which reduction amounts should require review and approval by FHA staff. However, reducing list prices based solely on a schedule may lower prices at times and by amounts that are not optimal, potentially lowering FHA's net return. For example, a property whose list price is reduced excessively or hastily may sell at a price that is unnecessarily low based on market conditions, leading to lower returns for FHA. Also, mispriced properties may take longer to sell, thereby increasing FHA's holding costs. In contrast, a strategy of basing price reductions more comprehensively on evaluations of property-level information and market conditions would likely be more flexible and could provide more accurate prices. Further, because many marketing contractors already obtain BPOs and other assessments of market conditions and are sometimes required to do so, such an approach would likely not involve significant costs for FHA.
FHA does not have a current and complete set of policies and procedures for its REO disposition program as required by internal control standards. Federal internal control standards require agency management to conduct monitoring of program quality and performance through the establishment and review of performance measures and indicators. Under the new contract structure that FHA introduced in 2010, the agency intended that its staff conduct specific activities to assess whether its contractors were meeting minimum requirements under the contracts, but these reviews have not been occurring. Further, while FHA is in the process of implementing procedures to better ensure that FHA homeownership centers perform consistent oversight activities, it has not implemented a critical component of the plan—a scorecard to evaluate contractor performance against standard metrics that would allow it to compare the quality of its contractors' activities. Our review also showed that FHA was not conducting as many or as frequent in-person property inspections as other entities that dispose of REO properties and that it was not taking steps to determine that listing brokers were located sufficiently close to the properties they were selling to ensure local market knowledge. Finally, although assigning work to contractors in part on the basis of their performance was intended to have been a key quality assurance mechanism under the new contract structure, FHA has encountered various obstacles to implementing this condition. FHA does not have a current and complete set of policies and procedures for its REO disposition program, as required by internal control standards. These standards require formally documented policies and procedures that are clear and readily available.
Such materials can be used to provide guidance to staff in the performance of their day-to-day activities, help ensure that activities are performed consistently across an agency, help ensure that the agency complies with federal laws and regulations, and communicate management's directives. These control standards require that policies and procedures be reviewed regularly and updated when necessary. In keeping with these requirements, the enterprises have well-documented policies and procedures for their REO disposition programs. One of the enterprises has consolidated its guidance and expectations for its staff and contractors in a single, comprehensive guide that it updates as needed. The other enterprise does so using multiple documents that are accessible from a single location. However, while FHA has an REO Disposition Handbook that outlines policies, procedures, and key controls for REO activities, the agency has not updated the handbook since 1994. FHA describes the Disposition Handbook's objectives as providing comprehensive guidance that reflects program requirements and that stresses the importance of internal controls, incorporates fiscal procedures, and provides clear statements of policy for field office staff. However, the handbook does not reflect the current structure, processes, or requirements that FHA uses to dispose of properties. Nevertheless, FHA's contracts reference it as a source of applicable guidance. FHA headquarters staff told us that the disposition handbook was outdated and did not reflect the current REO program structure, including the use of the multiple types of contractors and their responsibilities. But staff from one FHA homeownership center told us that they continued to use the handbook for policy guidance in certain areas, such as broker registration, contract extensions, rental agreements, and the closing agent monitoring checklist.
Instead of updating its Disposition Handbook, FHA relies on mortgagee letters, housing notices, and contracts to document its current policies and procedures. FHA headquarters staff told us that they had not taken steps to update the program handbook and that they used these letters and notices to provide new and revised guidance to their staff and contractors. FHA officials also indicated that the terms of its contracts with service providers served to document the REO process and performance expectations. However, as mentioned earlier, these contracts refer to the outdated Disposition Handbook as a source of applicable guidance. Also, using multiple contracts rather than a single consolidated document as a source of policies and expectations for staff and contractors creates the potential for inconsistencies. In addition, FHA has a decentralized REO disposition process managed through four homeownership centers, underscoring the need for a single source of guidance on policies and procedures for headquarters and field staff. FHA's contractors may have multiple contracts overseen by different centers, and in a February 2013 report, HUD's Office of the Inspector General (OIG) found that one of FHA's REO contractors faced different procedural requirements across the various HUD operating regions in which it was active. According to HUD OIG staff, they had also found a lack of consistent expectations for contractors across homeownership centers. For example, some centers were requiring maintenance contractors to pay any homeowners association fees and other unpaid bills before sale closings, but other centers expected marketing contractors to make these payments. In some cases, FHA faced delayed sales closings for failing to pay these fees on a timely basis. Lack of consistent and updated guidance on policy and procedures may also make oversight of contractors less efficient and may increase the REO program's operating costs.
FHA staff in two homeownership centers told us that they spent significant amounts of time responding to policy inquiries from contractors and seeking answers to questions about policy within the agency. Staff from one of these homeownership centers indicated that more guidance on policy matters would be beneficial and said that they needed additional support for procedures. The lack of a single, up-to-date form of guidance for the REO program leaves FHA without important internal controls, has resulted in extra costs in time and resources for FHA staff, and has created a burden for some contractors who face differing requirements across regions. Further, the lack of consistent guidance may be a factor in the execution rate performance of the four homeownership centers. For instance, some centers may be using practices that increase contractor performance that other centers have not tried. As discussed previously, our analysis of FHA's property dispositions revealed differences in performance levels across its homeownership centers. For instance, aggregate sales returns based on independently assessed value and list price execution rates for all properties disposed of from January 2007 through June 2012 were 13 and 12 percentage points higher for the Santa Ana and Denver centers than for the Philadelphia center and 7 percentage points higher than for the Atlanta center. Each year within the overall period showed similar patterns, with the Santa Ana and Denver centers typically having higher execution rates than the Atlanta and Philadelphia centers. Even after controlling for the average effects of certain property characteristics, such as value and changes in local housing prices, Santa Ana and Denver had higher execution rates than Atlanta and Philadelphia.
Because regional housing market differences could affect performance of the homeownership centers even after controlling for certain property characteristics, we compared the performance of the homeownership centers with the performance of the enterprises in the states where each center operates. For all dispositions from January 2007 through June 2012, the Denver center’s aggregate independently assessed value net execution rate was comparable to that of the enterprises in the states where it was responsible for REO property dispositions. The Santa Ana and Atlanta centers lagged the enterprises by 2 to 6 percentage points for dispositions in their respective states, while the Philadelphia center lagged the enterprises by 8 and 11 percentage points (fig. 10). After controlling for the average effects of certain property characteristics, such as value, ZIP code, and changes in local housing prices, differences in the homeownership centers’ performance relative to the enterprises generally persisted. However, Santa Ana’s performance decreased while Philadelphia’s increased such that their performance differences relative to the enterprises were similar. Denver’s performance declined slightly but remained comparable to that of the enterprises and the Atlanta center’s performance improved but still lagged that of the enterprises. The average number of days to complete an REO disposition from the date that FHA acquired properties also varied, from 164 for the Denver center to 212 for the Philadelphia center for all dispositions from January 2008 through June 2012. Other time frames—such as from the initial list date to the completed sale—also illustrated performance differences among the homeownership centers, as did the results for individual years. Even after controlling for the average effects of certain property characteristics, such as value and changes in local real estate prices, these differences persisted. 
Because regional housing market differences also could affect the time required to dispose of properties, we compared the homeownership centers’ time frame performance with that of the enterprises in the states where the centers operate. For all dispositions from January 2008 through June 2012, the Denver center’s average number of days from a property’s initial valuation to the completed REO sale was generally equivalent to that of the enterprises. The Santa Ana, Atlanta, and Philadelphia centers, in order, took longer to sell properties than did the enterprises in the states in which they operate (fig. 11). After we controlled for the average effects of certain property characteristics, such as value, ZIP code, and changes in local housing prices, the average number of days from initial valuation to completed sale for each of the homeownership centers generally exceeded that of the enterprises to an even greater extent. The Denver center exceeded the enterprises by the least number of days, while Philadelphia exceeded the enterprises by the most. Returns on sales of FHA’s REO properties also varied across marketing contractors, as did the time that the contractors required to complete dispositions. From 2010—when FHA implemented its new contract structure—to 2012, the difference between the contractors with the best and worst execution rates based on independently assessed value was between 12 percent and 19 percent each year. Of the seven marketing contractors, one had the best execution rate in each of the last 2 years and another had the worst rate for each of those years. Likewise, the difference between the contractors with the best and worst time frame for the number of days from acquisition to completed sale ranged from 11 percent to 40 percent each year. 
Again, one contractor—not the same one that had the best execution rate—had the best time frame in each of the last 2 years, and another—the same one that had the worst execution rate—had the worst time frame in both years. These differences raise questions about the guidance that FHA provides to its homeownership centers and contractors tasked with managing and selling REO properties. For instance, FHA has not identified optimal practices and included them in a single consolidated handbook, although some centers could be using practices that could benefit others. According to an FHA headquarters official, staff in each homeownership center analyze their office’s performance against the overall agency goals of reducing the time that REO properties are in inventory and the time it takes to list properties for sale, steps that should decrease the costs associated with dispositions. According to this official, each center analyzes these data monthly and takes corrective action as deemed necessary, and recently the performance across homeownership centers on these timeline measures has been similar. The official also noted that homeownership center staff complete standard contractor monitoring activities monthly to identify and address potential problems with disposition performance. Although having each homeownership center evaluate its own performance is an important internal control step, it does not replace an independent performance assessment across all of FHA’s centers, nor does it address the causes of any differences in performance across centers. FHA has yet to conduct any analysis to identify differences in execution rate performance across homeownership centers and the factors that may account for such differences, although doing so could help to improve performance at all centers and reduce costs across the REO program.
Under the new contract structure—known as M&M III—created in 2010, FHA expected that its staff and an oversight contractor would conduct a number of specific activities to monitor its REO maintenance and marketing contractors’ performance, including (1) assessing whether its contractors are meeting minimum contractual requirements, and (2) using standard metrics in a scorecard to evaluate the level of contractors’ performance. Federal internal control standards require agency management to monitor program quality and performance through the establishment and review of performance measures and indicators. Also, HUD contracting standards and guidelines require the periodic evaluation of a contractor’s performance to help ensure that services conform to the contract’s quality and quantity requirements. Under the new M&M III program structure, FHA homeownership center staff were expected to evaluate maintenance and marketing contractors monthly to determine whether they were meeting minimum contract standards. Staff were expected to perform this analysis using a tool—the performance requirements summary—that assessed the contractors against several minimum standards. However, staff at FHA’s four homeownership centers have not been performing the systematic reviews envisioned in HUD guidelines and the M&M III program structure to determine whether contractors are meeting minimum performance requirements. FHA staff in the four centers told us that they had not been using the planned assessment tool as intended and had instead been reviewing the quality of contractors’ performance more informally and subjectively. FHA homeownership center staff explained that they did not complete formal performance requirements summary reports to be shared with contractors because FHA did not have a standard reporting mechanism.
Instead, they informally assess contractors by examining performance trends, reports on properties exceeding suggested time frames for disposition, property inspections, and public feedback. FHA homeownership center staff indicated that the tool was not available because the methodology for producing it was to have been developed by the oversight monitor contractor, but the initial firm chosen for this role did not produce results that FHA deemed usable and FHA did not renew its contract when it expired in 2011. FHA headquarters officials also said that the original contractor oversight plans were not implemented because this contract was not renewed. By not completing these assessments, FHA has not systematically or uniformly determined whether contractors have been performing as intended. Without a comprehensive system to evaluate whether contractors are meeting minimum performance standards, FHA risks not being able to ensure the most efficient and effective disposition of its properties. Additionally, FHA has failed to implement another critical component of its M&M III program structure—a uniform tool known as a performance scorecard—that was to have been used to compare the level of contractor performance with that of other contractors. FHA intended to use contractor performance scorecards to determine which contractors would continue to receive new assignments of REO properties, and how many they would receive. However, FHA officials said that they never implemented use of a contractor performance scorecard because of the terminated relationship with its oversight monitor contractor that was responsible both for developing the scorecard and for the actual monitoring. 
While the performance requirements summary was a tool to identify minimum contractor performance, the scorecard would allow FHA to evaluate the level of contractor performance using standard metrics and to better compare the relative quality of a contractor’s activities against that of other contractors. For example, other entities use a scorecard to rank contractors on their overall performance as well as on certain component metrics that together comprise their overall score. Component metrics used by these entities include measures such as the average time to complete certain tasks or services or the results of oversight inspections. More recently, FHA has taken some steps to increase the consistency of its monitoring activities. A 2012 report by HUD’s OIG found that staff in FHA’s four homeownership centers had developed their own contractor oversight procedures that had led to inconsistent oversight of REO contractors. During the course of the OIG audit, FHA headquarters staff developed standardized plans—one for monitoring maintenance contractors and one for monitoring marketing contractors—that the homeownership centers were to begin using in June 2012. Each of these two standardized monitoring plans includes various contract monitoring tasks, such as measuring performance in key areas and reviewing disposition status reports for properties that exceed certain REO processing time frames (e.g., that have been in the REO inventory for more than a year). As part of these plans, FHA included a contractor performance scorecard that its staff had developed. However, FHA officials said that the scorecard was not implemented due in part to difficulties renewing FHA’s contract with the provider of its REO data management system. To assist in its development of a standard method of evaluating contractor performance, including scorecards, FHA officials said that a new contractor was hired in September 2012. 
FHA officials said that the scorecards would likely require the approval of each of the maintenance and marketing contractors, as they were not included in the contractors’ original contracts as a basis for performance evaluation. In contrast, the enterprises and private sector mortgage servicers that we interviewed had been using scorecards to evaluate and compare contractor performance and as a basis for assigning work to contractors. These scorecards generally tracked a variety of metrics related to quality and time frames, such as the number of days that a property was listed for sale, different measures of sales returns, and completion of maintenance and repair work. For example, one of the enterprises uses performance results from monthly scorecards and quarterly report cards to ascertain whether its contractors are meeting its standards for performance. Its officials and those from two private mortgage servicers we spoke with said that they also used scorecard results to make decisions about reducing or ending their use of poorly performing contractors. Officials from some of these entities said that they also used scorecards to compare the performance of individual contractors to the performance of all contractors in a similar geographic area. These officials also said that the contractors knew how the scorecards were being used to assess their performance and that their business relationship with the company and the volume of work they received depended on the assessments. In the absence of a scorecard, FHA homeownership center staff indicated that they were taking steps to evaluate contractors individually and to provide feedback on their performance. But homeownership centers were using different processes that were inconsistent, fragmented, and informal. For example, staff at two centers said that they relied on individual measures of performance such as case reviews and summary reports of properties’ progression through different stages of the disposition process.
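The scorecard approach described above can be illustrated with a small sketch: component metrics are normalized against benchmarks, weighted, and combined into one comparable score per contractor. The metric names, weights, and benchmark values here are hypothetical examples, not FHA's or the enterprises' actual scoring.

```python
# Illustrative contractor scorecard: a weighted composite of component
# metrics. Metrics, weights, and benchmarks are hypothetical, chosen to
# mirror the kinds of measures the report mentions (sales returns,
# disposition time frames, inspection results).

WEIGHTS = {"execution_rate": 0.5, "days_to_sale": 0.3, "inspection_pass_rate": 0.2}

def score(metrics, benchmarks):
    """Composite score: each metric is scaled against its benchmark."""
    total = 0.0
    for name, weight in WEIGHTS.items():
        if name == "days_to_sale":
            # Lower is better for time-based metrics, so invert the ratio.
            ratio = benchmarks[name] / metrics[name]
        else:
            ratio = metrics[name] / benchmarks[name]
        total += weight * min(ratio, 1.5)  # cap so one metric cannot dominate
    return round(total, 3)

benchmarks = {"execution_rate": 0.90, "days_to_sale": 180, "inspection_pass_rate": 0.95}
contractors = {
    "Contractor A": {"execution_rate": 0.92, "days_to_sale": 170, "inspection_pass_rate": 0.97},
    "Contractor B": {"execution_rate": 0.85, "days_to_sale": 210, "inspection_pass_rate": 0.90},
}

# Rank contractors by composite score, best first.
ranked = sorted(contractors, key=lambda c: score(contractors[c], benchmarks), reverse=True)
for name in ranked:
    print(name, score(contractors[name], benchmarks))
```

A tool of this kind is what would let FHA staff make the cross-contractor comparisons that, as noted above, they currently cannot make formally.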
Staff from one of these centers explained that while they had a sense of whether a contractor was doing a good job or not, they did not have the ability to formally compare performance across contractors. Staff from one center told us that some staff members had created a scorecard-like tool to evaluate the performance of contractors for which they had oversight responsibilities, but had been told by HUD contracting officials that they could not share the results with the contractors until FHA introduced a standard scorecard nationwide. However, staff from a different office said that they had shared certain individual contractor performance information with their contractors. Without a functioning, standardized scorecard, FHA does not have a uniform tool for evaluating the overall level of its contractors’ performance and cannot effectively make distinctions about relative performance differences across contractors or tell contractors how their performance compares to their peers. This shortcoming also limits FHA’s ability to identify and address underperforming contractors and creates the risk that FHA cannot ensure the most efficient and effective disposition of its properties. Our review also showed that FHA was not conducting certain contractor oversight activities performed by some other entities that dispose of REO properties. Specifically, FHA was not conducting as many or as frequent in-person property inspections as other entities and was not taking steps to ensure that listing brokers were close enough to the REO properties they were chosen to market to know local market conditions and efficiently access the properties. One of the ways in which FHA’s oversight activities differed from those of other federally related housing entities and private mortgage servicers was the extent to which it conducted in-person property inspections. HUD’s Contract Monitoring Guide states that inspections are the best way to determine the quality of a contractor’s performance.
FHA and other entities typically have their contractors visit properties regularly to inspect the properties’ condition and to perform routine maintenance such as lawn cutting. However, they also have their own staff or third-party contractors that visit properties for oversight purposes, including ensuring that contractors are performing their required duties and maintaining properties to expected standards. According to staff from one of FHA’s homeownership centers, performing in-person property inspections is critical because they allow FHA staff to review contractor performance, identify problems needing resolution, and conduct quality assurance checks, especially in the absence of a uniform scorecard. FHA’s standard monitoring plans include property site visits to help ensure that maintenance contractors are conducting their own routine inspections and maintaining the condition of assigned properties in accordance with their contractual requirements. They also are meant to help ensure that marketing contractors are following required standards and procedures when conducting sales activities. FHA’s plans call for in-person inspection of 2 percent of properties three times per year. However, FHA homeownership center staff had varying interpretations of FHA headquarters’ expectations for the amount and timing of property inspections. For example, staff in some centers told us that they aimed to inspect 2 percent of properties annually, while staff in another center said they targeted 6 percent of properties annually. The timing of the inspections also varied across the homeownership centers from a certain percentage each month to a certain percentage in three of the four quarters of the fiscal year. According to FHA officials, a lack of adequate travel funds and staff capacity has created challenges for homeownership centers in conducting in-person property inspections.
Staff at some centers said that a lack of available funds could delay some inspections until the end of the fiscal year, when funds might become available. Others noted that in the past they had inspected more properties in close proximity to the homeownership center when travel funds were not available. However, waiting to conduct inspections until the end of the fiscal year and restricting them to a limited geographic area limits their effectiveness, as the inspections may not target properties in certain locations or contractors equally. Further, contractors may become aware that they are unlikely to have properties inspected early in the year or in certain locations. To supplement the in-person inspections, FHA attempts to use other means of monitoring whether its contractors are complying with its expectations, but the effectiveness of these efforts is also uncertain. For example, FHA homeownership center staff may conduct reviews of the evidence, such as photographs of the property, that contractors submit to document that they have performed routine inspections and other activities. However, in 2012 HUD OIG audits found that contractors could upload pictures that did not accurately depict a property’s condition or could submit incomplete reports, limiting the effectiveness of these reviews. To supplement its own staff’s monitoring efforts, FHA’s contractors that are responsible for marketing REO properties also complete some inspections as part of their quality control plans. Additionally, property listing agents employed by the marketing contractors inspect the work of maintenance contractors for all properties that are listed and sold. However, the effectiveness of these reviews may be limited because FHA staff told us that the property listing agents have often been reluctant to submit negative reports on maintenance contractors’ performance because of fear of damaging working relationships with these other contractors.
The number of in-person inspections that FHA completes may not be sufficient to ensure that FHA’s contractors are conducting their activities in compliance with contractual requirements. In multiple reports issued between March 2012 and February 2013, HUD’s OIG found that contractors responsible for maintaining and marketing FHA’s REO properties were often not performing the required work at all or were not performing to the expected level of quality. For example, a September 2012 HUD OIG report examined a Las Vegas, Nevada, FHA contractor and found that it did not secure or properly maintain 40 percent of the 96 properties that the OIG examined. Another report from September 2012 reviewed 125 properties nationwide and determined that FHA’s contractors did not properly maintain 75 of them, as evidenced by unmaintained yards, unclean conditions, lack of security, and water leaks. The review also found that for 100 of the 125 properties, FHA’s maintenance contractors did not conduct routine inspections in a timely manner. Furthermore, this OIG audit revealed that FHA’s maintenance contractors nationwide were paid for inspections for which they had not completed the required documentation and that they may not have conducted. One of FHA’s homeownership centers developed a report to identify these missing routine inspections and, in coordination with HUD’s procurement office, requested reimbursement of fees paid to five of its contractors totaling more than $1.3 million for more than 10,000 inspections from June 2011 through February 2012. In contrast to the number of in-person inspections done by FHA, other federally related housing entities and private mortgage servicers we spoke with indicated that they conducted in-person inspections of much larger percentages of their REO properties and conducted them more frequently.
For example, staff from one of the enterprises told us that it performed in-person oversight inspections of 25 to 30 percent of its REO properties monthly using an independent inspection firm. The other enterprise said that it completed in-person inspections of around 35 percent of properties monthly, using both an independent inspection firm and its own staff. VA officials noted that the agency conducted in-person inspections of at least 10 to 20 percent of its REO properties annually and that in fiscal year 2012 it inspected over 40 percent. In addition, staff from one of the private mortgage servicers we interviewed told us that its own field agents inspected about 40 percent of its properties, while two other private mortgage servicers said that they inspected about 7 and 10 to 12 percent of their properties on a monthly basis throughout the year. Federal internal control standards require that agency management conduct effective monitoring to assess program quality and performance over time and work to address any identified deficiencies. Other entities, whether federally related or private, found that frequent in-person property inspections were an effective way to better ensure that contractors were performing required activities and to assess the quality of their work. As a result of not conducting in-person inspections of a greater share of its REO properties and not inspecting them more frequently, FHA may not discover potential maintenance and disposition problems, potentially resulting in poorly maintained properties that sell for lower prices. In addition, FHA does not have the procedures that the enterprises have to ensure that properties are assigned to listing brokers located close enough to the properties to have sufficient knowledge of the local market.
Using listing brokers that are close to and familiar with properties and the surrounding communities improves the chances that the properties will be shown as often as possible and will be well maintained. FHA’s contracts require its marketing contractors to use local real estate professionals whose primary place of business is within reasonable proximity of the listed property. However, FHA does not have either a definition of “reasonable proximity” or formal guidelines or procedures for determining whether properties are assigned to local listing brokers, according to officials. FHA headquarters officials said that they had an informal goal of using brokers located within 20 miles of a listed property, but noted that consistently defining what constitutes reasonable proximity could be difficult. For example, distances that would be considered reasonable in an urban area might not be realistic in a rural location. FHA homeownership center staff said that headquarters had not yet implemented clear criteria or controls for the marketing contractors that assigned properties to brokers. In the absence of clear criteria from FHA headquarters, homeownership centers often made their own determinations on using brokers within reasonable proximity to properties. For instance, FHA staff at one center told us that after discovering, as part of unrelated inspections, that listing brokers for some of its properties were not local, the center and its marketing contractors had decided that “local” generally meant within 50 miles, with exceptions for sparsely populated areas. Centers also varied in their reviews of listing brokers’ proximity to their listed properties. One center noted that such reviews were part of annual inspections of marketing contractors. However, officials at another center said that they did not believe the proximity of listing brokers was a major concern and did not monitor it closely, instead placing more emphasis on overall performance.
Officials from one listing broker that has sold properties for two of FHA’s marketing contractors said that there had been many instances of listed properties being more than 50 miles from the listing broker’s office. They also noted that many of these more-distant listing brokers were not members of the listing service that includes properties for those local markets, which resulted in a lack of proper exposure for FHA’s properties. Without clear guidance from FHA on the use and oversight of listing brokers, homeownership centers may continue to make their own determinations on what constitutes “reasonable proximity” to listed properties and may not be able to ensure that properties are being effectively marketed by knowledgeable agents. In contrast, the enterprises have established guidelines for the selection of local listing brokers and conduct monitoring to ensure brokers’ proximity to the listed properties. For example, one enterprise’s REO sales guidelines state that properties should be no farther than 25 miles from the listing broker, although this threshold is used more often for rural areas, according to officials. In urban areas, the goal is to assign a broker within 5 miles of the property. The officials also told us that the enterprise used reports to monitor the distances between listing brokers and their assigned properties and addressed situations involving longer distances on a case-by-case basis. Officials from the other enterprise emphasized the importance of using listing brokers located close to listed properties because of local brokers’ market knowledge. This enterprise has a goal for broker proximity of about 15 to 20 miles, according to these officials, but the distances can vary in rural areas. The officials also told us that the enterprise had an Internet-based REO management system that assigned properties to listing brokers by geographic area.
The enterprise’s staff define the geographic areas within which its listing brokers can receive property assignments when it adds them as approved service providers and conduct reviews of listing brokers’ office locations. As part of the new M&M III program structure it introduced in 2010, FHA intended to implement a key quality control—assigning contractors work according to the quality of their performance—but has encountered obstacles to implementing this mechanism. The Federal Acquisition Regulation (FAR) stipulates a strong preference for using multiple contractors for the types of contracts that FHA has used to manage and dispose of REO properties. With respect to such contracts, the FAR provides that the agency must give each contractor a fair opportunity to be considered for the work. FHA designed its M&M III contract structure to include 10 geographic areas with multiple maintenance and marketing contractors operating within most areas. During the first year of the M&M III contracts, FHA assigned equal percentages of REO properties to each contractor in a contract area to satisfy the minimum guarantee under the contract. After the first year, FHA intended to use performance evaluations to help determine the shares of its REO properties within a contract area that it would assign to each of the multiple contractors operating in that area, with the high-performing contractors receiving the largest allocations of properties. FHA planned to divide property assignments within the overall contract on a percentage allocation basis, so that all individual contractors in an area would receive a minimum share of the work as long as the contractors met minimum performance requirements. However, FHA has been unable to implement this system as intended for two reasons.
First, until it implements the planned scorecards or other uniform evaluation method, FHA has no way to systematically generate the information needed to assign work based on performance. Second, after implementing the new M&M III program structure, FHA encountered challenges in ensuring that its performance-based allocation contract structure complied with the FAR. In late 2011, HUD’s OGC advised FHA that the process of allocating a minimum share of property assignments to each of its contractors was not compliant with the FAR rules requiring that each contractor have a fair opportunity to compete and win all the work for which it is competing. FHA officials said that the performance-based competitions that OGC determined would be compliant with federal acquisition rules, with contractors winning either all or none of the work assignments, would jeopardize the financial viability of some contractors. These officials explained that contractors have high overhead costs and could go out of business if they went more than a few weeks without receiving property assignments. As a result of these obstacles to implementing performance-based allocations as planned, FHA has continued to assign properties to marketing contractors based on equal allocations in a contract area. It also assigns work to maintenance contractors solely on the basis of cost. FHA staff told us that they could not identify an acceptable alternative to assigning work among multiple contractors that was also compliant with federal acquisition rules. Instead, FHA plans to award new contracts that give all property assignments in a contract area to a single contractor for at least a year. FHA staff said that if the contractor performed poorly or was unable to provide the necessary services, the property assignments could be shifted to a contractor in a neighboring area by redefining the contract area.
However, the practical challenges involved in redefining contract areas and reassigning all properties among contractors could make this option difficult to implement. Further, it is not clear that FHA has fully explored other options that are compliant with federal acquisition rules. In terms of performance incentives, FHA officials said that the agency’s discretion in determining whether to renew annual contracts was the most important. However, this particular incentive may be less powerful than the frequent reallocation of work envisioned under the M&M III contract structure. According to officials, FHA has never failed to renew an annual contract for the 50-plus maintenance and marketing contracts that have been part of the current contract structure since 2010. FHA staff told us that under the maintenance and marketing contracts’ terms, they could reassign properties to another contractor or suspend property assignments to a poorly performing contractor, but these options have rarely been used. According to agency officials, since 2010 FHA has suspended a contractor’s assignments only once for a period of 1 month and has not yet reassigned properties from one contractor to another on the basis of poor performance. FHA officials said that they had not defined standard criteria for the number of instances of deficient performance that would be required before these actions were taken. Rather, FHA staff perform a risk assessment, and the contractor is given an opportunity to address any deficiencies. FHA has tried to create other incentives for superior performance. It pays marketing contractors a percentage of a property’s sale price at the time of sale based on the disposition price and time frame. These payments are higher for properties with sale prices above a set percentage of the initial listing price and within a certain time frame.
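The tiered fee described above can be sketched as follows. Because the report does not disclose FHA's actual thresholds or fee rates, every number in this sketch is a hypothetical placeholder used only to show how such a two-tier incentive works.

```python
# Illustrative sketch of a tiered sales fee: a higher rate applies when
# the sale price exceeds a set share of the initial list price and the
# sale closes within a time limit. All thresholds and rates below are
# hypothetical, not FHA's actual contract terms.

def marketing_fee(sale_price, list_price, days_to_sale,
                  base_rate=0.015, bonus_rate=0.025,
                  price_threshold=0.95, day_limit=60):
    """Fee paid to the marketing contractor at the time of sale."""
    earned_bonus = (sale_price >= price_threshold * list_price
                    and days_to_sale <= day_limit)
    rate = bonus_rate if earned_bonus else base_rate
    return round(sale_price * rate, 2)

# A quick sale near list price earns the bonus rate; a slow, discounted
# sale earns only the base rate.
fee_fast = marketing_fee(sale_price=98_000, list_price=100_000, days_to_sale=45)
fee_slow = marketing_fee(sale_price=90_000, list_price=100_000, days_to_sale=120)
print(fee_fast, fee_slow)
```

The design choice is that the contractor's compensation rises with both the disposition price and the speed of sale, aligning the contractor's incentives with FHA's goals of maximizing proceeds and minimizing holding time.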
Over the second half of 2012, almost one-third of FHA’s sales met the thresholds for the higher fee amount, according to FHA data. However, FHA procurement officials told us that the bonus fee structure was not a normal contract incentive and that FHA also was considering including more typical performance incentives and disincentives in contracts. They explained that the only specific disincentive or performance penalty in the current contracts is FHA’s ability to assess late fees if contractors delay sale closings. Without performance-based work assignments, however, FHA’s ability to motivate contractor performance is limited because it has few other incentives and disincentives and uses them infrequently. As a result, FHA cannot ensure that its maintenance and marketing contractors are performing at the highest possible levels. In contrast, the enterprises and private mortgage servicers we interviewed use performance-based work assignments to align contractor incentives and promote high performance. None of these entities are required to follow federal acquisition regulations. Officials from one of the enterprises explained that if one of its contractors consistently performed better than others based on scorecard assessments, the high-performing contractor would receive more work assignments, with the amount dependent on capacity considerations. This system applies to its national marketing contractors, listing brokers, maintenance contractors, and other service providers. The other enterprise also considers performance when assigning properties to contractors such as listing brokers, according to its officials. Officials from two private mortgage servicers we interviewed told us that they assigned additional work to contractors with better performance on their scorecard indicators and that poor contractor performance could lead to fewer work assignments or termination.

The housing crisis has increased the number of REO properties in FHA’s inventory.
The agency’s ability to effectively dispose of these properties in ways that maximize sales proceeds and minimize holding time could help increase the government’s financial returns. We found that FHA’s disposition performance and the time required to complete sales of REO properties lagged the performance of the government-sponsored enterprises. Our analysis of FHA’s REO activities revealed that the agency was not employing some of the disposition practices that the enterprises and other housing entities used. These practices could be a factor in other entities’ ability to dispose of REO properties for higher returns and with less holding time. They include: using multiple means of assessing property values, to better assure that REO properties are fairly valued and thus more likely to sell faster and at the highest price; making improvements to properties with characteristics that are more likely to result in a higher sales price if repaired; and basing price reductions for properties that do not sell at the original list price on market conditions rather than on a predetermined schedule. If FHA could perform as well as the enterprises in disposing of REO property, it could potentially generate hundreds of millions of dollars in additional sales proceeds and reduce maintenance and other holding costs from its future REO activities. Federal internal control standards call for agencies to have comprehensive policies and means to help ensure that program objectives are being met and that expected activities are being completed. However, FHA has not taken the actions necessary to ensure that its controls and oversight activities are effective in several areas. Specifically, FHA lacks comprehensive guidance for its REO program and a process for updating this guidance as policies and procedures change. 
Having such guidance could better ensure consistent practices across homeownership center staff and uniform oversight of the numerous contractors that carry out maintenance and marketing activities. Further, having a robust revision process would allow FHA to incorporate best practices that it identifies by analyzing differences in performance across homeownership centers, something it currently does not do. FHA has not fully implemented either mechanisms for evaluating contractors’ activities against minimum expected standards or a scorecard that would allow staff to compare contractor performance to identify high- and low-performing contractors. Addressing these issues is critical to better ensuring that contractors are performing as expected and meeting program goals. Given the FHA Inspector General’s findings that REO properties were not always well maintained as required by the service providers’ contracts, the number of in-person inspections FHA currently undertakes does not appear to be sufficient. FHA lacks controls to help ensure that the brokers marketing its properties meet FHA contractual requirements that brokers be in reasonable proximity to their properties. As a result, FHA risks having brokers that do not have the expected level of local market knowledge and cannot conduct effective marketing activities because they are too far away. FHA contracts generally lack incentives and disincentives that would encourage high-quality work, consistent with other entities’ practices. Implementing more frequent performance-based assessments and assigning work on the basis of performance could improve returns on REO properties and reduce property holding times. Collectively addressing these issues could improve FHA’s oversight of its contractors by, for example, ensuring that their properties are inspected regularly and that they face consequences for not meeting program requirements. 
To increase the potential for higher financial returns from FHA’s disposition of REO properties, the Secretary, HUD, should direct the Commissioner, FHA, to identify and implement changes in current practices or requirements that could improve REO disposition outcomes, including:

- requiring the use of multiple estimates of market value when determining initial list prices;
- considering whether conducting repairs could increase the amount of net proceeds from specific property sales; and
- ensuring that the timing and amount of price reductions for its listed properties are made on the basis of an evaluation of market conditions rather than on standardized schedules.

To improve its oversight of the REO disposition program, the Secretary, HUD, should direct the Commissioner, FHA, to:

- update its REO program disposition handbook, or equivalent document, to include a current and consolidated set of policies and procedures for managing and disposing of FHA’s REO properties;
- establish a process for analyzing differences in disposition performance and practices across homeownership centers that can be used to periodically update this handbook or equivalent documentation to reflect current policy and procedures;
- implement a mechanism for systematically reviewing contractors’ compliance with minimum performance requirements through the use of standard metrics;
- ensure the completion and implementation of the scorecard currently being developed, including ensuring that performance metrics included in the scorecard are consistent with those used to review contractors’ compliance with minimum performance requirements;
- determine more effective ways, including increased use of in-person inspections, to better ensure that contractors comply with expected requirements;
- implement controls to ensure that listing brokers are located within close enough proximity to their listed properties to effectively market REO properties; and
- take steps to develop a legally acceptable means of assigning 
work to REO contractors that uses more frequent assessments of past performance. We provided a draft of this report to HUD, FHFA, Fannie Mae, Freddie Mac, VA, and RHS for their review and comment. HUD provided written comments, which are reprinted in appendix II. Fannie Mae, Freddie Mac, and VA provided technical comments on the draft report, which we incorporated as appropriate. In a letter from the Assistant Secretary for Housing – Federal Housing Commissioner, HUD agreed with our recommendations. HUD also identified actions that it has taken or planned to take in response to our recommendations. For example, HUD wrote that FHA plans to update its REO disposition handbook. In addition, in response to our recommendation that FHA establish a process for analyzing differences in disposition performance and practices across homeownership centers that can be used to periodically update the handbook, HUD pointed to the monitoring plan that FHA has implemented for its contractors that will analyze disposition performance and practices across homeownership centers. HUD also wrote that any identified best practices will be noted, discussed, and communicated to homeownership centers and contractors. It will be important for FHA to also periodically update the handbook to reflect these changes in practices, as we recommended. HUD acknowledged that budgetary constraints affect implementation of contractor performance scorecards—critical elements in three of our recommendations—and limit its ability to make increased use of in-person property inspections that we suggested could be used to better ensure that contractors comply with expected requirements. While recognizing that FHA’s scope for action may be limited by available budgetary resources, we emphasize the importance of considering not just the costs to undertake these steps but also the potential savings and improved disposition outcomes that would be realized from enhanced contractor oversight. 
In response to our recommendation to develop a legally acceptable means of assigning work to REO contractors that uses more frequent assessments of past performance, HUD said that FHA has taken steps in its new REO contracts to provide incentives to high-performing contractors and disincentives to lower-performing contractors by transitioning inventory among them based on performance and price. When implementing such a contract structure, we encourage FHA to consider inventory transitions on a frequent basis, such as quarterly, to align with the frequency of the scorecard performance assessments. In technical comments, the Director of Regulatory Affairs of Fannie Mae noted that the REO execution rate performance information that our report presents was inconsistent with publicly disclosed loss severity rates published by FHA. Although loss severity rates—which measure loss on a defaulted loan as a percentage of the unpaid principal balance—can be presented when discussing REO performance, we did not include such analysis in our report because this measure reflects factors beyond the control of the REO programs of these entities. For example, loss severity rates can be affected by the original loan-to-value ratio, loan amortization schedule, origination date, changes in market values, or the existence of mortgage insurance. We therefore do not use loss severity rates to assess REO performance. Fannie Mae also noted that the performance execution information that uses independent valuations may not be comparable across entities because not all REO sellers use the same valuation methodology. Our report notes that the entities use different methods for obtaining an independent valuation— including an independent appraisal for FHA and one of the enterprises and an independent BPO for the other enterprise—and that any systematic differences between the appraisals and BPOs could affect the performance results. 
We also calculated execution rate results using list prices and these calculations showed similar results. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, HUD, FHFA, Fannie Mae, Freddie Mac, VA, and RHS. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your offices have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to examine (1) real estate-owned (REO) property disposition practices used by the Department of Housing and Urban Development’s (HUD) Federal Housing Administration (FHA) and how FHA’s effectiveness compares to that of other federally related housing entities, and (2) how FHA oversees its REO disposition program. To examine the disposition practices, we reviewed REO program regulations, requirements, and policies to determine the goals and strategies for these activities of FHA and other federally related housing entities, including the Department of Veterans Affairs (VA), the Department of Agriculture’s Rural Housing Service (RHS), and two housing government-sponsored enterprises (the enterprises)—Fannie Mae and Freddie Mac. We interviewed HUD and FHA staff in the agency’s Washington, D.C., headquarters and staff in the four regional homeownership centers that oversee REO activities in their areas. We also interviewed staff from other federally related housing entities about the goals and strategies of their REO activities. 
We also discussed REO goals and strategies with staff from three large private-sector mortgage servicers that also acquire and dispose of REO properties. We selected these servicers because they were among the largest servicers of home mortgages. To obtain additional information on REO activities, we also interviewed staff from the National Association of Realtors, two local realtors identified as knowledgeable about REO properties by that association, and community groups or government housing entities in various cities with large REO concentrations. We also interviewed the National Community Stabilization Trust, which administers a database of REO properties for purchase by community groups, and an appraisal group responsible for promoting appraisal standards. To assess the effectiveness of FHA’s REO dispositions, we obtained and analyzed REO disposition data from FHA and the other federally related housing entities, including all REO properties disposed of from January 1, 2007, through June 30, 2012, as well as properties in inventory at the end of the period. Specifically, we obtained data from the data management systems of FHA, the enterprises, VA, and RHS. We did not include RHS in our analysis of REO property disposition performance because it only obtains and manages REO properties through its direct loan program and these are very small in number compared to the other entities. For example, in the period of analysis, RHS’s direct loan property dispositions represented less than 1 percent of FHA’s REO dispositions. RHS also did not have property-level data available for many of the data elements that we included in our analyses. Table 1 shows the number of dispositions for each entity that occurred during this time period based on the data that we received and analyzed. 
For RHS, we used data on its direct loan properties because it only acquires and disposes of REO properties related to its direct loans and does not do so for properties from its guaranteed loan program. We assessed the reliability of these data by reviewing agency data documentation, interviewing officials, and testing for missing values, outliers, and obvious errors. VA and RHS did not have property-level data available that were necessary for some of the calculations in this report. In particular, VA and RHS did not have data available on initial REO valuation dates, type of sales method, and type of buyer. Additionally, VA did not have data on initial REO appraisal amounts and RHS did not have data on net sales proceeds. We excluded VA and RHS from calculations that used these data elements and noted these instances as applicable. We addressed missing data by excluding those properties with missing data elements from analyses that relied on those elements as necessary. Finally, in analyses using the net sales proceeds data element, we excluded properties with net sales proceeds of $0, as this is an unlikely value for that element and could indicate missing data. We excluded additional properties from our regression models, as described below. In no instance did we exclude more than 13 percent of all properties. After making the necessary qualifications, corrections, and related assumptions, we believe that the data were reliable for our purposes as described in this appendix. To determine the performance in maximizing the sales price of REO dispositions by the entities, FHA homeownership centers, and FHA asset managers, we calculated various execution rates. These rates included the ratio of REO dispositions’ gross sales prices (for gross execution rates) or net sales proceeds (for net execution rates) to one of several estimated property valuations. 
The measures of property values that we used included (1) the sum of an independent assessment of a property’s value—an independent appraisal or BPO conducted by a third party if the entity did not use appraisals—plus any repair costs for that property, and (2) the initial list price for the REO property. We calculated aggregate execution rates for each organization by dividing the sum of their property dispositions’ net sales proceeds by the sum of the properties’ independently assessed values or initial list prices. We used aggregate execution rates because, by showing the net return on the total estimated value of properties in an entity’s portfolio, the aggregate rates better reflect the entity’s overall performance than an average of the execution rates of individual properties. The aggregate rate calculates execution rate performance on a value-weighted basis, with higher value properties having a greater impact on the aggregate rate than low-value properties. An average of individual properties’ execution rates gives equal weight to properties of different values despite their unequal effects on total net returns. To assess the timeliness of the entities’ REO property dispositions, we calculated the average number of days from the foreclosure sale to the day on which they sold or otherwise disposed of the property, as well as the number of days they took to move properties between various points within the REO process. To account for the possibility that differences in performance results between FHA and the enterprises might be due to differences in the characteristics of the properties that each entity acquired and disposed of in their REO programs, we used the data on the REO property dispositions from January 1, 2007, through June 30, 2012, to create various regression models. 
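The distinction between the aggregate (value-weighted) execution rate and a simple average of per-property rates can be illustrated with a short sketch; the dollar figures below are hypothetical and are not drawn from the report's data:

```python
# Hypothetical dispositions: (net sales proceeds, independently assessed
# value plus repair costs) for three sold REO properties.
dispositions = [
    (95_000, 100_000),  # 95% individual execution rate
    (40_000, 50_000),   # 80%
    (9_000, 10_000),    # 90%
]

total_proceeds = sum(p for p, _ in dispositions)
total_value = sum(v for _, v in dispositions)

# Aggregate rate: value-weighted, so the $100,000 property counts for
# ten times as much as the $10,000 property.
aggregate_rate = total_proceeds / total_value  # 144,000 / 160,000 = 0.90

# Simple average of individual rates: every property counts equally.
average_rate = sum(p / v for p, v in dispositions) / len(dispositions)

print(f"aggregate: {aggregate_rate:.3f}, average: {average_rate:.3f}")
```

Because the aggregate measure weights by value, a shortfall on a high-value property depresses it more than the same percentage shortfall on a low-value property, which is why it better reflects the net return on the total estimated value of an entity's portfolio.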
The general regression specification for these models was:

y_i = α·organization_i + Σ_j β_j·X_ij + γ·(zip_i × valuation_i) + e_i

where y is the performance measure being assessed, such as net execution as a percent of independently assessed value or time from foreclosure to REO sale closing; organization is an indicator variable for the entity holding the property; X is a series of control variables related to the property, and β_j represents the relationship between variable j and the outcome variable, independent of the other variables in the model; zip is an indicator variable for the property’s ZIP code; valuation indicates the range of the property’s independently assessed value; α, β, and γ are the parameters to be estimated; and e represents an error term. We interacted the ZIP code and valuation category variables to allow the average effect of the valuation category on the performance measure to vary between ZIP codes. The parameters of interest were the coefficients for each organization. We used the following additional property characteristics as control variables in the regressions: number of bedrooms (as a categorical variable), number of units for the property (as a categorical variable), year the property was built (as a categorical variable), the change in the FHFA House Price Index for the property’s ZIP code from the month of the initial valuation to the month one quarter after the initial valuation, and the change in the FHFA House Price Index for the property’s ZIP code from the month of the initial valuation to one month after the initial valuation month. In addition, for regressions with an execution rate as the outcome variable, we included the month and year of REO acquisition as a categorical control variable. For regressions with the number of days from the foreclosure sale to the REO disposition as the outcome variable, we included the number of weeks from the scheduled date of the last completed payment to the date of the foreclosure sale as a categorical control variable. 
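A stripped-down version of a value-weighted model of this kind can be sketched as a weighted least squares fit. Everything below is hypothetical (simulated data, a single control variable, and illustrative coefficient values); the actual models also included ZIP-code-by-valuation fixed effects and many more controls:

```python
# Minimal weighted-least-squares sketch (hypothetical data): regress net
# execution rate on an entity indicator plus one control variable,
# weighting each property by its assessed value.
import numpy as np

rng = np.random.default_rng(0)
n = 500
is_fha = rng.integers(0, 2, n)           # 1 = FHA property, 0 = enterprise
bedrooms = rng.integers(1, 5, n)         # a single illustrative control
value = rng.uniform(50_000, 300_000, n)  # assessed value -> regression weight

# Simulated outcome: enterprises near a 90% net execution rate, with FHA
# about 5 percentage points lower, plus noise.
y = 0.90 - 0.05 * is_fha + 0.01 * bedrooms + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), is_fha, bedrooms])  # intercept, entity, control
W = value / value.sum()                              # normalized weights

# Weighted least squares via the normal equations: beta = (X'WX)^-1 X'Wy.
XtW = X.T * W
beta = np.linalg.solve(XtW @ X, XtW @ y)

print(f"estimated FHA difference: {beta[1]:.3f}")  # close to -0.05
```

Weighting by value makes higher-value properties matter more, so the estimated entity coefficient is comparable to the aggregate (value-weighted) execution rate differences discussed above.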
For the regression models with an execution rate as the outcome variable, we weighted every property by the amount of the property’s independently assessed value. This was done so that the percentage point differences resulting from the regressions would be similar to overall aggregate differences for models where only the entity dummies were included. Weighting by property value also reduced the effect of heteroskedasticity that was likely present over the range of valuations. To reduce the effect of outliers on the regression estimates, we excluded certain property records that had irregular values for a few data elements. For the regression models with an execution rate based on either the independently assessed value or reconciled value as the outcome variable, we excluded properties with independently assessed value net execution rates greater than 300 percent or less than negative 300 percent. Such large or small execution rates suggest a highly inaccurate valuation or otherwise irregular data such as a very low property value or sales price. For models with execution rate based on initial REO list price as the outcome variable, we excluded properties with the highest and lowest 1 percent of values for the ratio of initial REO list price to independently assessed value for all properties. In total, we excluded fewer than 12 percent of all properties in our data for each of the net execution models. Due to these exclusions, the population of properties used in the net execution regression models was smaller than the full population that was used when calculating the aggregate results. 
As a result, to estimate the percentage point difference in net execution rates between FHA and the enterprises for the full population, we multiplied the percentage point differences from the aggregate execution rates by the ratio of the percentage point differences resulting from the full regression model to the percentage point differences resulting from a regression model with only entity variables. For the models where the time from the foreclosure sale to the completed REO property disposition was the performance measure, we excluded properties that had a negative value for the number of days from the foreclosure sale to the completed REO property disposition. In total, we excluded less than 13 percent of all properties for the time frame models. We took various steps to test these regression models for robustness. We used various thresholds for the exclusion of property records from the net execution models. For example, we tested results when the models excluded properties with independently assessed values above and below certain amounts rather than properties with execution rates above and below certain amounts. We used various specifications for certain property and market characteristics. For example, we tested models where the categorical variable for independently assessed value was not interacted with ZIP code, where it was specified as a log, and where it was specified as a polynomial. We also tested models where the change in the FHFA House Price Index in each ZIP code was specified as the change in the index in the year prior to the disposition date and in the month prior to that date. We included additional property characteristics as a control variable in the models or excluded some of the property characteristics. For example, we tested models where a property’s occupancy status at REO acquisition was included as a control variable. 
This variable ultimately was not included in our final models because a significant percentage of FHA’s property dispositions were missing data for this variable. These models showed qualitatively similar results for the reduction in the performance difference between FHA and the enterprises relative to regressions with only entity variables. However, the final model for net execution rates had one of the largest reductions in the performance difference between FHA and the enterprises. This appeared to be due to the interaction of properties’ independently assessed values and ZIP codes. To identify factors that could help explain differences in FHA’s REO performance relative to that of the enterprises, we also created models that included various factors (time from foreclosure sale to REO disposition, ratio of initial list price to initial valuation amount, disposition method, and buyer type) at least partially under the entities’ control and examined how their inclusion affected estimates of the performance difference between the entities as measured by net execution rates. To compare the performance of FHA homeownership centers and contractors, we computed execution rates and average time frames for the properties they managed. When examining time frames, we focused on the time frame from a property’s initial valuation to completion of the disposition because initial valuation was the date available from both FHA and the enterprises that most closely reflected when FHA’s homeownership centers begin managing and overseeing the disposition of REO properties. In addition, when comparing homeownership center performance, we developed regression models to account for the possibility that differences in performance results between the homeownership centers might be due to differences in the characteristics of the properties that each center disposed of in their REO programs. 
To further control for regional differences that our nationwide regression models may not have been able to capture, we compared the performance of the homeownership centers to the performance of the enterprises in the states where each center operates. For this analysis, we also conducted separate regressions for each homeownership center region to account for differences in the characteristics of properties between homeownership centers and the enterprises in those states. For the regression models used to compare the performance of the homeownership centers, we controlled for the same property and housing market characteristics that we included in our regression models comparing FHA’s overall performance to that of the enterprises. However, we did not include ZIP code in the models directly comparing homeownership center performance because the homeownership centers operate in regions with mutually exclusive ZIP codes. To examine the price reduction strategies used by FHA, we also obtained additional data on all listing prices for properties that FHA disposed of from June 30, 2011, through June 30, 2012. To determine if price reductions occurred at a scheduled amount, we compared the price reductions reflected by these data to the price reduction schedules, if any, listed in FHA contractors’ marketing plans. One contractor did not have a price reduction schedule in its marketing plan so we considered price reductions for properties managed by this contractor to be according to schedule if they occurred at one of the two standard reduction percentages that appeared in most other contractors’ marketing plans. In addition, some marketing plans for one contractor specified a range for the price reduction amount. In these cases, we considered a price change to be for a scheduled amount if the price was reduced by one of the two standard reduction percentages that appeared in most contractors’ marketing plans. 
Finally, for marketing plans that specified that price changes could be up to a certain amount, we considered a price change to be for a scheduled amount only if the price change was for the maximum amount, since the goal of this analysis was to determine how often price changes were for particular, predictable amounts. To determine how the entities oversee their REO programs, including the contractors they use to perform various REO-related activities, we interviewed staff from FHA and the housing entities and reviewed program regulations, requirements, and policies related to oversight. We also discussed REO oversight activities with staff from the three private-sector mortgage servicers. In addition to the individual named above, Cody Goebel, Assistant Director; Kevin Averyt; Stephen Brown; Emily Chalmers; William R. Chatlos; Robin Ghertner; DuEwa Kamara; John Karikari; Jon Menaster; Alise Nacson; Jessica Sandler; Jena Sinkfield; Jack Wang; William T. Woods; and Ethan Wozniak made key contributions to this report.
|
With mortgage foreclosures at historic levels in recent years, FHA is faced with disposing of a high volume of REO properties. The enterprises, other federal agencies, and private sector mortgage servicers also dispose of REO properties from their foreclosures. To assess the relative effectiveness of FHA's REO dispositions, GAO examined (1) FHA's disposition goals, strategies, practices, and effectiveness in disposing of properties compared with those of the enterprises and private servicers; and (2) FHA's oversight of the contractors that maintained and marketed its REO properties. GAO analyzed REO disposition data from FHA and the enterprises, including modeling to control for property differences across the entities. GAO also reviewed requirements and policies and conducted interviews concerning each entity's oversight of its REO dispositions. The Federal Housing Administration's (FHA) performance in selling its foreclosed properties--known as real estate-owned (REO) properties--lagged the performance of both of the government-sponsored enterprises (enterprises), Fannie Mae and Freddie Mac. FHA disposed of more than 400,000 properties from January 2007 through June 2012. Its combined 2007-2012 returns, measured by the net execution rate (net sales proceeds divided by independently assessed property values), were about 4 to 6 percentage points below the enterprises' returns. After controlling for certain differences in their properties' characteristics (e.g., value, location, and local market conditions), differences in combined returns between FHA and the enterprises persisted at an estimated 2 to 5 percentage points. Further, while the enterprises took an average of around 200 days after foreclosure to dispose of REO properties, FHA took about 340 days--more than 60 percent longer. A similar pattern persisted even after controlling for certain property differences. FHA also took longer than the Department of Veterans Affairs (VA). 
For FHA, unlike the others, a significant part of the time between the foreclosure sale and REO sale is taken by loan servicers who must complete certain activities before conveying title to FHA. In the first half of 2012, FHA's disposition returns and timelines generally improved relative to the enterprises'. All three entities use similar strategies to dispose of their REO properties, but FHA does not use some practices that the enterprises and private mortgage servicers use that may have the potential to improve its sales performance. For example, FHA does not repair its properties to increase their marketability, something both enterprises do. And unlike the enterprises, FHA does not incorporate information from multiple sources in setting list prices or consistently take into account market conditions when reducing prices. Instead, it relies on one appraisal in setting initial prices and often reduces them by set amounts. GAO found that if FHA's execution rate and disposition time frame had equaled those of the enterprises in 2011, it could have increased its proceeds by as much as $400 million and decreased its holding costs--which can include items such as taxes, homeowners' association fees, and maintenance costs--by up to $600 million for the year. In addition, FHA's oversight of the contractors that it uses to maintain and dispose of REO properties has weaknesses, and it does not use some of the oversight tools other entities use that might prove effective. First, government internal control standards require complete, updated policies and procedures to guide program oversight. But FHA has not updated its REO disposition handbook since 1994, even though the agency implemented a different program and contractor structure in 2010. In the absence of a central source of updated guidance, GAO and FHA internal auditors found inconsistencies in both contractor activities and staff oversight across FHA's four regional homeownership centers. 
Second, FHA has not implemented a uniform system for evaluating contractor performance. For instance, FHA has yet to implement a proposed version of the type of scorecard that the enterprises use to assess differences in contractor performance. Also, its planned incentive structure for contractors has been found not to comply with federal contracting rules. These two shortcomings have prevented FHA from assigning work according to contractors' performance--a key quality control in its new REO program structure. Further, FHA aims to inspect 2 to 6 percent of its REO properties annually, although other entities with REO properties report inspecting between 25 and 35 percent monthly, or between 7 and 40 percent annually. Finally, FHA has not taken steps to ensure that the listing brokers marketing its REO properties are located close enough to the properties to have adequate knowledge of local markets. Without implementing more effective activities to evaluate contractor performance and ensure compliance with program requirements, FHA's REO properties may continue to remain on the market longer and sell for lower prices than properties held by the enterprises. GAO makes 10 recommendations intended to increase FHA's returns on the disposition of REO properties, including considering repairs that increase net proceeds; requiring the use of additional information for setting initial and subsequent listing prices; and improving its oversight of its contractors, including updating and maintaining comprehensive guidance, implementing a performance scorecard and increased property inspections, and ensuring that listing brokers are appropriately located. FHA reviewed a draft of this report and agreed with GAO's recommendations.
The purpose of the Endangered Species Act of 1973 is to conserve endangered and threatened species and the ecosystems upon which they depend. The act defines “conservation” as the recovery of endangered and threatened species so that they no longer need the protective measures afforded by the act. The act defines as endangered any species facing extinction throughout all or a significant portion of its range and defines as threatened any species likely to become endangered in the foreseeable future. The act requires the Secretary of the Interior to publish a list of species it determines are endangered or threatened in the Federal Register and specify any critical habitat of the species within its range—habitat essential to a species’ conservation. Loss of habitat is often the principal cause of species decline. Additionally, the act establishes a process for federal agencies to consult with the Service about their activities that may affect listed species. Federal agencies must ensure that their activities, or any activities they fund, permit, or license, do not jeopardize the continued existence of a listed species or result in the destruction or adverse modification of its critical habitat. There were 1,264 species in the United States listed as endangered or threatened as of September 30, 2004. The Service has responsibility for 1,252 of these species. Thirty-two species have been removed from the list: 9 species as a result of recovery efforts, 9 because they have been declared extinct, and 14 species for other reasons, mostly because new information showed that listing was no longer warranted. The Service develops and implements recovery plans, among other things, to reverse the decline of each listed species and ensure its long-term survival. 
A recovery plan may include a variety of methods and procedures to recover listed species, such as protective measures to prevent extinction or further decline, habitat acquisition and restoration, and other on-the-ground activities for managing and monitoring endangered and threatened species. According to Service officials, it is their policy to issue a recovery plan within two and a half years of the species’ date of listing. The Service exempts species from the plan requirement when it is determined a plan will not promote their conservation. For example, the ivory-billed woodpecker is exempt because the Service thinks it is extirpated from the wild throughout its range. Recovery plans aim to identify the problems threatening the species and the actions needed to resolve them. The act directs the Service, to the maximum extent practicable, to incorporate into each recovery plan (1) a description of site-specific recovery tasks necessary to achieve the plan’s goal for the conservation and survival of the species; (2) objective, measurable criteria that will result in a determination that the species can be removed from the list of endangered and threatened species (delisted); and (3) an implementation schedule that estimates the time and cost required to carry out the recovery tasks described in the recovery plan. Service employees, independent scientists, species experts, or a mix of these people can develop recovery plans. According to Service officials, as of September 2004, the Fish and Wildlife Service had 551 approved recovery plans covering more than 1,025 species (more than 80 percent of all listed species). The act also requires the Service to report biennially to certain congressional committees on efforts to develop and implement recovery plans, and on the status of listed species for which plans have been developed. The Service implements this requirement through its biennial Recovery Report to Congress. 
Additionally, the act requires the Service to submit an annual report to the Congress on federal expenditures for the conservation of endangered or threatened species, as well as expenditures by states receiving federal financial assistance for such conservation activities. As part of its efforts to compile data for this report, the Service collects data on recovery fund expenditures on a species-specific basis, although these data have not been reported separately in published expenditure reports. With regard to Service funds, the Endangered Species program is a small portion of the Service’s overall budget ($132 million of $1.9 billion in fiscal year 2003). Of this amount, about one-half is devoted to the recovery program, $65 million (see fig. 2). This is similar to previous fiscal years. The funds spent on the recovery program, however, are only a portion of the total money spent to recover species. Some of the Service’s other programs, including refuges, contribute funds and staff to species recovery. In addition, according to the Service, other federal and non-federal entities contribute substantial funds to species recovery. In addition to the Service’s Endangered Species Program expenditures to recover species, other programs in the Service as well as other federal and state agencies spend substantial funds on endangered species activities, including land acquisition (see table 1). Congress amended the Endangered Species Act in 1979 to require the Secretaries of the Interior and Commerce to establish, and publish in the Federal Register, agency guidelines that include a priority system for developing and implementing recovery plans. The Service adopted recovery priority guidelines in 1980 and amended them in 1983. 
The guidelines consist of two parts: Species are assigned a priority ranking between 1 and 18 on the basis of (in descending order of importance) (1) the degree of threat confronting the species, (2) recovery potential (the likelihood for successfully recovering the species), and (3) taxonomy (genetic distinctiveness). (See table 2.) Additionally, a “c” is added to the ranking if there is conflict with economic activities, like development; this gives the species priority over other species with the same ranking but without a “c”. Thus, the highest possible priority ranking is a “1c”. The Service sometimes changes a species’ priority ranking when warranted by a change in the species’ situation. The second part of the priority system ranks the recovery tasks within each recovery plan. Each task is assigned a priority number from 1 to 3, with 1 being the highest. A priority 1 task is “an action that must be taken to prevent extinction or to prevent the species from declining irreversibly.” A priority 2 task is “an action that must be taken to prevent a significant decline in species population/habitat quality or some other significant negative impact short of extinction”, and a priority 3 task is “all other actions necessary to provide for full recovery of the species.” The recovery guidelines emphasize that they should be used only as a guide, not as an inflexible framework for determining funding allocations. Within the Service, responsibility for implementing the act is divided among its three administrative levels: headquarters, regions and field offices. Headquarters officials develop policy and guidance and allocate funding to the regions. Regional directors in the seven regions (shown in figure 3) make most decisions on how to spend endangered species program funds and are responsible for managing their field offices’ program activities. Field offices are responsible for implementing program activities and setting priorities for projects they will undertake. 
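The two-part priority scheme described above amounts to a simple sort order: numeric rank first, with the "c" conflict flag as a tiebreaker within a rank. A minimal sketch follows; the species names and rankings are invented for illustration, not Service data.

```python
# Illustrative ordering under the Service's recovery priority guidelines:
# rank 1-18 (lower is higher priority), with a "c" species outranking a
# non-"c" species at the same rank. Hypothetical records only.
from dataclasses import dataclass

@dataclass
class Species:
    name: str
    rank: int        # 1 (highest priority) through 18
    conflict: bool   # True if a "c" is appended (conflict with economic activity)

def priority_key(s: Species):
    # Lower rank sorts first; within a rank, conflict ("c") sorts first.
    return (s.rank, 0 if s.conflict else 1)

species = [
    Species("species A", 2, False),
    Species("species B", 1, False),
    Species("species C", 1, True),   # "1c" -- the highest possible priority
]

ordered = sorted(species, key=priority_key)
print([s.name for s in ordered])  # species C, then B, then A
```

The key point the guidelines make, and the sketch preserves, is that the ranking gives an ordering for comparison, not a formula for how much funding each species should receive.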
The Fish and Wildlife Service spent its recovery funds in a manner generally consistent with species priority in fiscal years 2000 through 2003. From fiscal years 2000 to 2003, the Service spent 44 percent of its recovery funds attributable to individual species on those species with the highest priority, the 415 species ranked 1 through 3 on the 18-point priority ranking scale (see fig. 4). However, 25 of these species received no recovery funding at all during fiscal years 2000 through 2003. Additionally, two species with low priority rankings, the bald eagle (with a priority ranking of 14c) and the Canada lynx (with a ranking of 15), received substantial recovery funding during fiscal years 2000-2003. One reason the Service spent 44 percent of its recovery funds attributable to individual species on the highest priority species is that this group accounts for a significant portion of all listed species—one-third (see fig. 5). Similarly, the Service spent almost all (94 percent) of its attributable recovery funds on species ranked 1 through 9 on the 18-point scale, which account for 92 percent of all listed species. As shown in figure 6, analysis of average spending on a per species basis also reveals that more expenditures are made on higher priority species. Additionally, the analysis shows the emphasis the Service placed on species with a high degree of recoverability. The relatively large amount of funding spent on species with low priority rankings (13 through 15) is greatly influenced by spending on the bald eagle (with a priority ranking of 14c) and the Canada lynx (with a ranking of 15). The bald eagle is nearing delisting and the funding was spent on delisting activities. The Canada lynx was embroiled in controversy that required recovery staff to respond to litigation. When spending on these two species is removed, the average amount spent on species in this priority group is significantly lower. 
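The analysis described above—grouping per-species expenditures by priority rank, then computing each group's share of total funds and its average spending per species—can be sketched as follows. The expenditure figures below are invented for illustration; they are not the report's data.

```python
# Sketch of a spending-by-priority-group analysis, with hypothetical
# (priority rank, expenditure) pairs standing in for real species data.
from collections import defaultdict

expenditures = [(1, 900_000), (2, 400_000), (3, 200_000),
                (9, 100_000), (14, 50_000)]

by_group = defaultdict(list)
for rank, amount in expenditures:
    group = "1-3" if rank <= 3 else ("4-9" if rank <= 9 else "10-18")
    by_group[group].append(amount)

total = sum(amount for _, amount in expenditures)
for group, amounts in sorted(by_group.items()):
    share = 100 * sum(amounts) / total   # group's share of all recovery funds
    avg = sum(amounts) / len(amounts)    # average spending per species
    print(f"ranks {group}: {share:.0f}% of funds, ${avg:,.0f} per species")
```

Separating the share of funds from the per-species average matters because, as the report notes, a group can receive a large share of funds simply by containing a large share of the listed species.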
In addition to species priority ranking, another obvious measure of priority is whether a species is endangered or threatened. Over three-quarters (78 percent) of species protected under the act are listed as endangered, and most of these have high priority rankings (see fig. 7). We analyzed spending by species status (endangered or threatened) and found that the Service spent a majority (64 percent) of its recovery funds on endangered species during fiscal years 2000 through 2003. Finally, we analyzed spending by the three taxonomic classifications included in the Service’s recovery priority guidelines—monotypic genus, species, and subspecies. As shown in figure 8, an analysis of average spending on a per species basis reveals that more expenditures are made on listed entities classified as monotypic genus. A species that is a monotypic genus is the only remaining species representing the entire genus. When Service officials allocate recovery funds, they base these decisions to a significant extent on factors other than a species’ priority ranking. At the headquarters level, a formula that accounts for each region’s workload, but not species’ priority rankings, determines how recovery funds are allocated. Each regional office allocates recovery funds to their field offices differently, but in no case is priority ranking the driving factor. Instead, regional officials focus primarily on partnership opportunities, though regional officials told us they do try to provide funds to species that have a high degree of threat. Although field office staff we spoke with use priority rankings, they also emphasized the importance of having flexibility to allocate funds to develop partnerships. 
The Service does not know the extent to which these disparate allocation systems yield results consistent with the Service’s priority guidelines because the Service does not have a process to routinely measure the extent to which it is spending its recovery funds on higher priority species. In making allocation decisions, headquarters does not consider a species’ priority ranking or any of the factors that go into determining priority rankings. Instead, it allocates recovery funds to its seven regions based primarily on a formula that estimates each region’s workload. The formula estimates the recovery workload for each region by assigning each species a score of between 2 and 7 points, based on the type of species and its habitat needs. Higher points are assigned to those species whose recovery requires higher levels of funding or effort—factors that are not clearly related to a species’ priority ranking. For example, animals are assigned 2 points while plants are assigned 1. Species that occupy habitats larger than 1 million acres or are migratory or aquatic are assigned 5 points whereas species that occupy less than 1,000 acres are assigned 1 point. Recovery funds are then allocated to the regions based on the number of species occurring in each region and the points assigned to those species. Additionally, headquarters uses a workload-based formula to allocate funds to regions to develop recovery plans. Funds are allocated to each region based on the number of species that it is responsible for that have not been exempted from the plan requirement and that have been listed for 4 years or less. If after 4 years there is still no plan, then the region no longer receives recovery-planning money for that species, though the region is still responsible for completing that species’ recovery plan. Service officials in headquarters told us that they use an allocation system based on workload rather than the priority guidelines for a number of reasons. 
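The headquarters workload formula might be sketched roughly as follows. The point values (2 for animals, 1 for plants, and 1 to 5 points for habitat) follow the examples in the text, but the full scoring table is not given here, and the regions and species are hypothetical, so this is an approximation of the mechanics, not the Service's actual formula.

```python
# Hedged sketch of a workload-based allocation: each species earns points
# for type and habitat, and regions receive recovery funds in proportion
# to their summed points. Regions and species are hypothetical.

def species_points(is_animal: bool, habitat_points: int) -> int:
    type_points = 2 if is_animal else 1   # animals 2, plants 1 (per the report)
    return type_points + habitat_points   # total score falls between 2 and 7

regions = {
    "Region 1": [species_points(True, 5),    # large-habitat animal: 7 points
                 species_points(False, 1)],  # small-habitat plant: 2 points
    "Region 2": [species_points(True, 1)],   # small-habitat animal: 3 points
}

budget = 65_000_000  # fiscal year 2003 recovery allocation, per the report
total_points = sum(sum(points) for points in regions.values())
allocations = {region: budget * sum(points) / total_points
               for region, points in regions.items()}
```

Note that nothing in this calculation references a species' priority ranking, which is the report's central observation about the headquarters allocation.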
First, this system provides relatively stable funding to each region from year to year. In contrast, priority rankings can change over time, which would add an element of unpredictability to the annual allocations. Stability is important, according to Service officials, because most of a region’s recovery budget supports staff salaries for recovery biologists. These biologists work on a wide variety of recovery activities, including helping to develop recovery plans, conducting and coordinating on-the-ground actions to implement recovery plans, conducting periodic species status reviews, developing recovery partnerships, and providing litigation support. Second, although priority rankings indicate which species are higher priority, they do not reflect how much money a species needs. Service officials pointed out that higher priority species are not necessarily more costly to recover than lower priority species. Lastly, Service officials told us that a system based on workload is more objective, and they expressed concern that the subjective nature of priority rankings could create conflict between the regions if allocations were based on these rankings. While Service officials at headquarters told us that recovery funds should be spent according to priority rankings, they believe those decisions should be made at the regional level. Almost all of the regional officials we talked to agreed that the allocation system used by headquarters works well and is fair and equitable, although some of them suggested changes. For example, some regional and field office officials noted that a species’ priority ranking, particularly its degree of threat, could be included, along with the existing workload factors, in headquarters’ formula for allocating recovery funds. 
While each region allocates recovery funds to its field offices differently, we found that the most important consideration among the regions is to maintain and develop recovery partnerships, either by funding long-standing arrangements to work with partners to recover specific species or by taking advantage of opportunities to develop new partnerships. For example, officials at the Southwest region told us that for the last 10 years the region has allocated its discretionary recovery funds primarily to four species for which it has long-standing partnerships with other entities—the Kemp’s Ridley sea turtle, the whooping crane, the Mexican wolf, and the Attwater’s prairie chicken. The financial support from long-term partners, in concert with expenditures from the Service, provides a stable funding source for recovery projects from year to year, helping to create viable recovery programs for these four species. For example, the Kemp’s Ridley sea turtle population has increased from a low of 270 females to several thousand females in the course of this long-term partnership. Service officials told us that it is important to maintain their yearly contributions to long-standing partnerships, regardless of the species’ priority ranking, because the funds these partners contribute are critical to species’ recovery and the partners could lose interest without the Service’s contributions. Officials at all levels of the Service reported to us that they have insufficient recovery funds. Although it is difficult to develop an accurate estimate of the full cost to recover all listed species (and it is unlikely that some species will ever be recovered), we analyzed the cost data contained in 120 recovery plans covering an estimated 189 listed species. Based on the Service’s estimated recovery costs in these plans, we found that it would cost approximately $98 million to fully fund these plans—plans that cover just 15 percent of listed species—for a single year. 
This amount is well above the $65 million the Service allocated in fiscal year 2003 to develop and implement recovery plans and does not account for the recovery needs of the remaining 1,000 listed species. Even implementing only the highest priority recovery plan tasks for those 120 plans, those “necessary to avoid extinction,” would cost approximately $57 million, nearly 90 percent of the Service’s total recovery budget in fiscal year 2003. Consequently, the Service is dependent on monetary contributions from partners to facilitate species recovery. Regional officials not only fund long-standing partnerships, but look for opportunities to develop new ones as well. Service officials expressed concern that if they were confined to allocating funds strictly by the priority system, they could alienate potential recovery partners. For example, some regional officials pointed out that land acquisition can take many years, so if willing sellers present themselves, the region will take advantage of that opportunity by allocating recovery funds to acquire those lands even if they do not benefit a species of the highest priority. In another example, officials in a field office in the Pacific region told us they were able to leverage a $20,000 investment into a $60,000 project by developing an agreement with the U.S. Forest Service to jointly fund a study to identify how the California red-legged frog was using suitable habitat. Fish and Wildlife Service officials in the Pacific region also leverage funds with non-federal partners. In 2002, a $10,000 investment in desert tortoise monitoring from the Fish and Wildlife Service was matched by $16,540 from Clark County, Nevada, and $5,000 from the Arizona Game and Fish Department. Almost all of the Service officials we talked with stressed the importance of having the flexibility to develop partnerships for recovery, particularly to leverage the Service’s scarce recovery funds. 
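The funding arithmetic in this passage can be checked directly against the report's own numbers: the gap between estimated one-year plan costs and the recovery budget, the share of the budget that the highest priority tasks alone would consume, and the leverage ratio on the California red-legged frog study.

```python
# Worked arithmetic using figures stated in the text.
plan_cost_estimate = 98_000_000   # one year of the 120 analyzed plans
recovery_budget = 65_000_000      # fiscal year 2003 recovery allocation
shortfall = plan_cost_estimate - recovery_budget           # $33 million gap

highest_priority_cost = 57_000_000                         # "necessary to avoid extinction"
share_of_budget = highest_priority_cost / recovery_budget  # ~0.88, i.e. nearly 90 percent

service_share = 20_000            # Service investment in the red-legged frog study
project_total = 60_000            # total project value with the Forest Service
leverage = project_total / service_share                   # 3x the Service's investment
```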
Finding partners and other sources of funds to implement recovery actions is also strongly emphasized in the Service’s course on recovery implementation, which is offered at the National Conservation Training Center in West Virginia and other locations around the country. While a species’ priority ranking is not a primary factor for determining how regions distribute recovery funds, regions do consider priority rankings when making recovery allocations. The two regions responsible for the most species, the Southeast region and the Pacific region, specifically incorporate the priority system into their funding allocations. In the Southeast, field offices and other divisions of the Service, like Refuges, submit proposals to obtain recovery funding to implement recovery plan tasks. Once the regional office receives all the proposals, officials determine which ones to fund that year. In doing so, they consider a number of factors, including the species’ priority ranking. Similarly, the Pacific regional office reserves a portion of the recovery funds it receives and uses them to fund proposals submitted by its field offices to implement recovery plan tasks. One of the factors the region considers when determining which proposals to fund is the species’ priority ranking. Most of the other regions we talked to told us that they consider some aspects of the priority system when making funding decisions, particularly the species’ degree of threat, although they do not directly consider a species’ priority ranking. Sometimes regions will also target funds to lower-priority species if they are nearing recovery. For example, the bald eagle ranked 20th among those species with the highest recovery expenditures from fiscal year 2000 to fiscal year 2003, despite having a priority ranking of 14c. A Service official attributed most of these expenditures to delisting activities for the bald eagle. 
Many Service officials pointed out that the priority system does not contain a mechanism for funding species that are nearing recovery. Because a species’ priority will decrease as its threats are alleviated and it moves closer to recovery, the priority system would dictate that other more imperiled species be funded before those that are close to delisting. Consequently, species close to recovery might never be delisted because funds would not be allocated to complete the tasks required for delisting. Service officials told us they need flexibility to provide funds that will help get species off the list. Headquarters officials have also recognized this issue and, beginning in fiscal year 2004, created a special fund that directs funding to species close to delisting (as well as those close to extinction) in its “Showing Success, Preventing Extinction” initiative. In the field offices we contacted, we found that species’ priority rankings play an important role in recovery allocations. Service personnel in four of the ten field offices we spoke with told us that a species’ priority ranking is one of the key factors they use to allocate recovery funds. For example, in the Pacific Islands field office, which is responsible for the recovery of over 300 species, officials use the recovery priority system as a “first step,” then overlay other factors, like opportunities to leverage funding. Staff in five of the remaining six offices we spoke with told us that while they do not specifically use the priority system when making recovery allocations, they do consider a species’ degree of threat. Staff in the last field office said they did not use the priority system because most of their funds were spent according to direction provided by the region. Despite their use of the priority system, most of the field office staff we contacted also stressed the importance of having the flexibility to allocate funds to take advantage of unique opportunities when they arise. 
For example, officials in a field office in California told us they took advantage of an opportunity to leverage recovery funding for the California red-legged frog. A population of this frog was recently discovered in Calaveras County, site of Mark Twain’s famous story The Celebrated Jumping Frog of Calaveras County, which featured the California red-legged frog. The landowner where the population was discovered was eager to work with the Service to build a stock pond to provide habitat for the red-legged frog and eradicate bullfrogs (red-legged frog competitors). The discovery of the frog population was momentous because the species is important to local lore, and a population of the frog had not been found in Calaveras County since the late 1800s (see fig. 9). Even though the field office has 65 species with higher priority rankings than the red-legged frog, officials decided to address this recovery opportunity because of the frog’s importance to the local community. Other unique events also require funding flexibility. In a Utah field office last year, for example, a road expansion threatened the existence of the clay phacelia, an endangered plant. The field office staff responded to this threat by working with partners to collect seeds for future propagation. The Service does not know the extent to which recovery fund expenditures are consistent with its priority guidelines. All of the Service’s organizational levels participate in funding decisions, often relying on factors other than species priority. Although our analysis shows that the Service generally spent its recovery resources on higher priority species during fiscal years 2000 through 2003, we found that the Service has no process to routinely measure the extent to which it is spending its recovery funds on higher priority species. 
Without this information, the Service cannot ensure that it is spending its recovery funds on such species, and in cases where it is not, determine whether the funding decisions are appropriate. This is especially problematic as circumstances change—for example, when species are added to the list or priority rankings change for already-listed species. Although the Service is required to report all federal and some state expenditures on listed species, it does not separately report how it spent its recovery funds by species. This lack of separate reporting can make it difficult for Congress and others to determine whether the Service is focusing its recovery resources on the highest-priority species. For example, the species that received the greatest total federal and state expenditures in fiscal year 2003 are substantially different from those we identified as having received the greatest portion of the Service’s recovery fund expenditures. Of the 47 species that the Service reported as having received the greatest total expenditures in fiscal year 2003, the Service has joint or lead responsibility for 20 of them. The list of 20 species is radically different from the list that we identified as having received the greatest portion of the Service’s recovery fund expenditures (see table 3). In the case of the Southwestern willow flycatcher, the Service reported that more funds were expended on the flycatcher in fiscal year 2003 than for all but three other species for which the Service has lead responsibility. However, the information the Service provided to us shows that it spent relatively few recovery funds on the Southwestern willow flycatcher in fiscal year 2003—it ranked 84th in the Service’s recovery expenditures. 
Total reported expenditures and Service recovery fund expenditures differ substantially because the Service’s recovery priority guidelines do not apply to most of the reported funds—those funds provided by other federal agencies and some funds reported by state agencies. The Service has little control over how other organizations spend their funds. The reported expenditures also include Service expenditures in addition to recovery funds, such as expenditures on listing and consultation, which are also not subject to the Service’s recovery guidelines. In fact, in many instances, the Service does not have discretion over which species should receive these funds. For example, the Service spends consultation funds largely based on projects submitted to it by other federal agencies. Not unexpectedly, the list of 20 species receiving the greatest portion of the Service’s recovery fund expenditures in fiscal year 2003 is also different from the list of species receiving the greatest portion of total federal and state expenditures in fiscal year 2003 (see table 4). For example, the California condor and the Western population of the gray wolf ranked first and third, respectively, in recovery fund expenditures but are ranked 25th and 29th, respectively, in overall federal and state expenditures. Without a process to measure the extent to which it is spending its recovery funds on the highest-priority species, the Service lacks valuable information that would aid it in making management decisions. For example, while maintaining partnerships to fund certain species may be reasonable, many of these partnerships have been in place for many years, and changes to the species’ status or threat level, as well as changes to the threat level of other species and the addition of newly listed species, could have occurred in that time. 
As such, the reasons for creating some of these partnerships may have been superseded by other needs, and it may no longer be appropriate for particular species to garner so much funding from the region. Officials in the Southwest region, for instance, told us that most of the region’s discretionary recovery funds are spent on four species (Kemp’s Ridley sea turtle, whooping crane, Mexican wolf, and Attwater’s prairie chicken). These officials stated that they did not know these species’ recovery priority rankings until after we scheduled a meeting with them, although they did believe the species to be highly ranked. While these four species all have high priority rankings—2c, 2c, 3 and 3c, respectively— the region has lead responsibility for about 80 other species with a priority ranking between 1 and 3. Although many of these species also received funding during fiscal years 2000-2003, more than one-quarter (20 species) had no Service recovery fund expenditures attributable to them. The Service faces a very difficult task—recovering more than 1,200 endangered and threatened species to the point that they no longer need the protection of the Endangered Species Act. Many of these species face grave threats and have been imperiled for years. There are few easy solutions. Like many other federal agencies, the Service has limited funds with which to address these challenges. Fortunately, many other organizations contribute resources to help species. The Service maintains that its ability to be flexible in allocating its scarce recovery resources is the key to maximizing those contributions from other organizations. We agree that exercising flexibility in allocating recovery funds under its priority guidelines is important, but this needs to occur within the bounds of a systematic and transparent process. The Service, however, does not have such a process. 
While the Service acknowledges that it strays from its priority guidelines, it does not routinely analyze its allocation decisions to determine whether it is focusing on the highest priority species and, if not, why. Such an analysis is important to ensure that the Service continues to spend its recovery funds on the highest priority species over the long term. Without this information, the Service cannot show Congress or the public the extent to which it is focusing its resources on the highest priority species, or explain, in cases where it is not, that its resource decisions are still appropriate. To this end, we believe the Service’s priority guidelines provide it with the means to create a systematic and transparent allocation process while still allowing it needed flexibility. Because the Service already collects data, on a species-by-species basis, on how it spends its recovery funds, it would be a simple task to measure the extent to which it is spending its recovery funds on high-priority species. It could then make this information publicly available, thus providing the Congress and the public a yardstick with which to judge the efficacy of the Service’s resource allocation decisions. To help ensure that the Service allocates recovery resources consistent with the priority guidelines over the long term and in a transparent fashion, we recommend that the Secretary of the Interior require the Service to take the following two actions: (1) periodically assess the extent to which it is following its recovery priority guidelines and identify how factors other than those in the guidelines are affecting its funding allocation decisions, and (2) report this information publicly, for example, in its biennial recovery report to Congress. We received written comments on a draft of this report from the Department of the Interior. 
In general, the Department agreed with our findings and recommendations but believes that we underestimated the extent to which the Service’s funding decisions are consistent with its recovery priority guidelines. Because we found that the Service spent its recovery funds in a manner generally consistent with species priority, we do not believe this is a significant issue. See appendix II for the Department’s letter and our response to it. Additionally, the Department provided technical comments that we have incorporated into the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix III. In response to a request from the Chairman, House Committee on Resources, we (1) analyzed how the U.S. Fish and Wildlife Service’s allocation of recovery funds compares with its recovery priority guidelines and (2) determined what factors influence the Service’s recovery funding allocation decisions. As agreed with the Chairman’s staff, we evaluated only those funds specifically spent by the Service to implement its recovery program. To address our first objective, we requested recovery expenditure data, on a per species basis, from each of the Service’s seven regions for fiscal years 2000-2003. Because the Service spends most of its recovery funds on salaries that are not allocated on a per species basis, we asked officials in each region to attribute salaries to specific species to the best of their abilities. 
To assess the reliability of these data, we compared the total estimated expenditures we received from each region for each year to budget documentation provided by headquarters officials, the Department of the Interior’s Budget for fiscal years 2000-2003, and House and Senate committee reports for Department of the Interior appropriations for fiscal years 2000-2003. We also asked the regional officials who provided these data a series of data reliability questions covering issues such as data entry, access, quality control procedures, and the accuracy and completeness of the data, as well as any limitations of the data. All responded that the data were generally accurate, and all but one performed some form of data review to ensure its accuracy. Additionally, officials from all but one region noted, as a limitation to the data, that it is sometimes difficult to link expenditures on activities to specific species. We determined that the expenditure data received from each of the Service’s seven regions were sufficiently reliable for the purposes of this report. We also obtained from the Service data on each species’ priority number for fiscal years 2000 through 2003, as well as other information about each species, such as whether it is threatened or endangered and whether it has a recovery plan. We did not make a judgment about the adequacy or accuracy of the Service’s recovery priority system. The Service also provided us with information on the estimated costs to implement approximately 120 recovery plans. We assessed the reliability of these data by (1) electronically testing required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. 
In addition, we compared the data set sent to us by the Service to the Service’s publicly available (online) Threatened and Endangered Species System (TESS), which contains data on listed species similar to that we received from the Service. When we identified any difference between these two data sets, we independently corroborated, to the extent possible, which data set was correct by obtaining documentary evidence, either from the Federal Register or the appropriate recovery plan. When appropriate according to this documentary evidence, we made changes to the data sent to us by the Service. For example, the spineless hedgehog cactus was listed in the data set sent to us by the Service but was not found when we compared it to online TESS. We checked the Federal Register and found that this species was removed from the endangered species list in 1993, so we removed it from the data set sent to us by the Service because our time frame of interest is 2000 through 2003. In another instance, the data set sent to us by the Service contained the Berkeley kangaroo rat, but this species was not in TESS. We checked the recovery plan and found that this is a “species of concern,” not an endangered or threatened species. The status field in the data sent to us by the Service was blank, so we re-coded it as a species of concern and then removed it from the data set because species of concern are not part of our review. We also made changes to records that contained errors. For example, the green sea turtle has two different populations. However, the Fish and Wildlife Service reported the total recovery expenditures for these two populations together. When expenditures were merged with species lists, this expenditure total was shown twice. To address that error we removed one expenditure total. 
All of these types of changes, 5 records with factual errors (or 0.4 percent of the records) and 9 with missing information (0.7 percent of the records), were reviewed and agreed to by all team analysts and supervisors. We also found and removed 14 duplicates and 27 records that were outside our scope (e.g., outside our date range or species managed by the National Marine Fisheries Service, not the Fish and Wildlife Service). On the basis of all of this work, we determined that the data on species and recovery plans we received from the Fish and Wildlife Service were sufficiently reliable for the purposes of this report. We then compared the expenditures on each species with the species’ priority ranking for fiscal years 2000 through 2003. We grouped together species with similar rankings to deemphasize minor differences in species’ rankings. Grouping species this way had the effect of eliminating the taxonomic distinction among species found in the recovery priority guidelines. Table 5 shows the groupings. We also assumed that the average cost to implement recovery plans in each group was the same. We made this assumption explicitly because the cost to implement individual recovery plans can vary substantially among species. For example, we analyzed the cost to implement 120 recovery plans (the only plans with these data available electronically) covering an estimated 189 species (or 15 percent of listed species) and found that some plans are very costly—$107,516,000—and some are not—$18,000. However, many plans fall between these two extremes, costing between $1 million and $6 million. We discussed this assumption with the Service, and they agreed to its reasonableness. The number of species in each priority group varied by year (see table 6). In order to analyze overall average spending on a per species basis, we calculated weighted average expenditures per species by priority ranking.
To do this we weighted the average expenditure per species for a specific priority group and fiscal year by the proportion: (Number of species in a particular priority group and fiscal year) / (Number of species in the same priority group over all fiscal years). In addressing our second objective, to determine what factors influence the Service’s recovery funding allocation decisions, we interviewed managers and recovery biologists in the Service’s recovery division in headquarters, all seven regions, and a nonprobability sample of 10 field offices. We selected at least one field office from each region and selected a second field office from the two regions that collectively have lead responsibility for more than 50 percent of the endangered and threatened species in the United States. Within each region, we selected field offices that have lead responsibility for a high number of species relative to other field offices in that region. The region responsible for the largest number of species, the Pacific region, is operated as two divisions, and we selected a field office from each division. The field office locations in our nonprobability sample were:

Hawaii (Pacific Region)
Sacramento, California (Pacific Region)
Arizona (Southwest Region)
Columbia, Missouri (Great Lakes Region)
Cookeville, Tennessee (Southeast Region)
Vero Beach, Florida (Southeast Region)
Virginia (Northeast Region)
Utah (Mountain-Prairie Region)
Anchorage, Alaska (Alaska Region)
Fairbanks, Alaska (Alaska Region)

Through our interviews we obtained information on how recovery funds are allocated, the role of the recovery priority system, and suggested improvements to the recovery priority system. We compared the answers we received in these interviews to documents or expenditure data provided by the Service, to the extent this corroborating evidence was available.
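The weighted-average calculation described above for our first objective can be sketched in a few lines. The priority-group labels, species counts, and expenditure figures below are hypothetical placeholders, not the Service's actual data:

```python
from collections import defaultdict

# Hypothetical per-(priority group, fiscal year) data: species counts and
# total recovery expenditures. Real figures would come from the Service's
# regional expenditure records.
species_count = {("1-3", 2000): 300, ("1-3", 2001): 310,
                 ("4-6", 2000): 250, ("4-6", 2001): 255}
total_spend   = {("1-3", 2000): 15.0e6, ("1-3", 2001): 16.0e6,
                 ("4-6", 2000): 9.0e6,  ("4-6", 2001): 9.5e6}

def weighted_avg_per_species(counts, spend):
    """Weight each year's per-species average by that year's share of the
    group's species-years, as described in the methodology."""
    group_total = defaultdict(int)          # species-years per group
    for (grp, _yr), n in counts.items():
        group_total[grp] += n
    result = defaultdict(float)
    for (grp, yr), n in counts.items():
        avg = spend[(grp, yr)] / n          # average spend per species that year
        weight = n / group_total[grp]       # that year's share of the group
        result[grp] += weight * avg
    return dict(result)

print(weighted_avg_per_species(species_count, total_spend))
```

Note that the weight and the per-species average cancel algebraically, so the weighted average for a group equals its total spending divided by its total species-years; the explicit weighting mirrors the formula as stated in the methodology.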
In addressing both objectives, we reviewed publicly available documents and other information obtained from the Fish and Wildlife Service’s Website. We also reviewed articles in academic and scientific literature related to recovery planning and recovery prioritization, including an extensive study of recovery plans conducted by the Society for Conservation Biology and funded by the Fish and Wildlife Service. We performed our work from February 2004 to January 2005, in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of the Interior’s letter dated March 11, 2005. 1. We agree that some of the recovery funds included in our analysis of how recovery fund allocations compare with the Service’s recovery guidelines include funds for which Congress has provided direction that they be spent on particular projects or species. However, we do not believe that by including these funds we have underestimated the degree to which the Service’s funding decisions are consistent with its recovery priority guidelines. First, we found that the Service spent its recovery funds in a manner generally consistent with species priority. Second, we analyzed a list, provided to us by the Service, of congressionally directed funds and associated projects for fiscal years 2000 through 2003. We compared this list with the priority rankings of the species associated with the projects in a way similar to how we compared species’ expenditures and priority rankings in our report. We found that the list of congressionally directed funds resulted in a spending pattern similar to what we identified when we compared species’ expenditures and priority rankings in our report. 
Thus, by including these funds in our analysis of how recovery fund allocations compare with the Service’s recovery guidelines, we do not believe that we have underestimated the degree to which the Service’s funding decisions are consistent with its recovery priority guidelines. 2. We agree that the Endangered Species Act does not require the Service to report separately on how it spent its recovery funds by species. However, reporting this information could be part of an effective strategy to help ensure that the Service allocates recovery resources consistent with the priority guidelines over the long term and in a transparent fashion. 3. In our report, we use the term “imperiled” instead of “threatened” to avoid confusion with the distinction the act makes between “threatened species” and “endangered species.” We agree that the act does not state that the purpose for requiring the Service to establish guidelines for prioritizing the development and implementation of recovery plans was to address concerns that recovery funds were not being directed at the most imperiled species. We have modified the report accordingly. However, we disagree with the Department’s contention that its recovery priority guidelines do not provide that funding should be allocated preferentially to species with the highest priority ranking as depicted in table 2 of our report. The Department relies on a table in the guidelines that is virtually identical to table 2 in our report to describe its priority system.
Section 4(h)(4) of the act specifically directs the Service to establish guidelines that shall include “a system for developing and implementing, on a priority basis, recovery plans under subsection (f) of this section.” Further, the guidelines state that “the species with the highest degree of threat have the highest priority for preparing and implementing recovery plans.” In addition, the guidelines state that they are to “aid in determining how to make the most appropriate use of resources available to implement the act.” The Department also contends that allocating funding preferentially to species with the highest priority ranking is contrary to section 4(f)(1)(A) of the act. This provision, which was added in a 1982 amendment to the act, states that recovery plans shall, to the maximum extent practicable, give priority to species most likely to benefit from such plans, particularly those that are, or may be, in conflict with construction or other development projects, or other forms of economic activity. The guidelines specifically state that the priority system established by the guidelines “is intended to satisfy the requirements of the amended Act.” Accordingly, the guidelines include likelihood to benefit from recovery plans and conflict as factors. We agree with the Department that focusing on opportunities for partnerships where multiple parties will work to the benefit of the species is consistent with section 4(f)(1)(A) of the act. In fact, we conclude in our report that the Service’s ability to be flexible in allocating its scarce recovery resources is the key to maximizing contributions from other organizations. However, we believe that this flexibility needs to occur within the bounds of a systematic and transparent process, and we make recommendations to this effect. In addition to the individual named above, Charles Egan, Jaelith Hall-Rivera, Barry T.
Hill, Summer Pachman, Paula Bonin, Judy Pagano, and Cynthia Norris made key contributions to this report.
|
Currently there are more than 1,260 species listed as endangered or threatened under the Endangered Species Act of 1973. While few species have gone extinct since 1973, only 9 have been "recovered" or removed from the list because they no longer need the act's protection. This has raised questions about how the U.S. Fish and Wildlife Service (Service) allocates its recovery funds. Proponents of the act believe that the Service's recovery funds are only a small fraction of what is needed to make greater recovery progress. The act and agency guidelines require the Service to prioritize species to guide recovery fund allocation. In fiscal years 2000 through 2003, the Service spent $127 million in recovery funds attributable to individual species. In this report, GAO analyzed (1) the extent to which the Service's allocation of recovery funds compares with its recovery priority guidelines and (2) what factors influence the Service's recovery allocation decisions. The Service spent its recovery funds in a manner generally consistent with species priority in fiscal years 2000 through 2003, spending almost half (44 percent) of the $127 million on the highest priority species (see figure below). Species in the next two highest priority groups received almost all of the remaining recovery funds (51 percent). Species in the three lowest priority groups received very little funding (6 percent). Most listed species (92 percent) are in the top three priority groups. When Service officials allocate recovery funds, they base their decisions to a significant extent on factors other than a species' priority ranking. At the headquarters level, a formula that focuses on each region's workload determines how recovery funds are allocated to regional offices. Each regional office allocates its recovery funds to its field offices differently, but in no case is priority ranking the driving factor.
Instead, regional officials focus primarily on opportunities for partnerships, though they told us that they also focus on species facing the gravest threats. Field office staff we spoke with emphasized the importance of pursuing funding partnerships in order to maximize their scarce recovery funds. The Service does not know the effect of these disparate allocation systems because it does not have a process to routinely measure the extent to which it is spending its recovery funds on higher priority species. While we found that for fiscal years 2000 through 2003 the Service spent a majority of its recovery funds on high priority species, without periodically assessing its funding decisions, the Service cannot ensure that it spends its recovery funds on the species that are of the greatest priority and, in cases where it does not, determine whether its funding decisions are appropriate.
|
The Missouri River basin extends from the Rocky Mountains across portions of the Midwest and Great Plains, covering roughly one-sixth of the continental United States (see fig. 1). Of the six dams along the mainstem of the Missouri River, one is in Montana (Fort Peck), one is in North Dakota (Garrison), three are in South Dakota (Oahe, Big Bend, and Fort Randall), and one is along the South Dakota-Nebraska border (Gavins Point). This reservoir system is the largest in the United States and contains about 73.1 million acre-feet (MAF) of water storage capacity. A majority of the system’s storage capacity is in the three upstream reservoirs—Fort Peck Lake, Lake Sakakawea, and Lake Oahe. Gavins Point dam is the furthest downstream of the six dams, and its water releases support all uses of the river below the reservoir system. Gavins Point dam is about 811 miles upstream from the mouth of the Missouri, where it enters the Mississippi River near St. Louis; water released from Gavins Point dam takes about 10 days to reach the Mississippi River. The Master Manual lays out procedures for the Corps’ management of the six Missouri River mainstem dams as a system. In the Master Manual, the Corps attempts to balance the eight congressionally authorized purposes of the river. The current Master Manual was developed over the course of 17 years and involved extensive consultation between the Corps and basin stakeholders, as well as multiple lawsuits. Key changes in the Master Manual revision include more rapid measures taken in response to drought conditions, changes in the water levels in the upper three reservoirs during the spring to support fish spawning, and measures to support endangered species along the river. The Master Manual allocates water within the reservoir system to four different storage zones (see fig. 
2):

The Permanent Pool includes about 25 percent of the system’s storage capacity and is intended to be full at all times to maintain a minimum amount of water in the reservoirs for hydropower production, fish and wildlife in and along the reservoirs, and reservoir-based recreation.

The Carryover Multiple Use Zone stores water for irrigation, navigation, hydropower, water supply, recreation, water quality control, and fish and wildlife. This zone is intended to maintain downstream river flows, although at lower levels, even in a succession of dry years. When the basin is not experiencing a drought, this zone is designed to be full when the runoff year begins on March 1. During times of drought, water from this zone is used to support the aforementioned authorized purposes, though at lower levels.

The Annual Flood Control and Multiple Use Zone provides storage space for spring and summer runoff that can be used throughout the year to support all authorized purposes. The Master Manual sets a goal of having this zone empty on or about March 1 of every year, so any water that is stored here during the spring and summer is meant to be released prior to the start of the next runoff season, which is approximately March 1.

The Exclusive Flood Control Zone is only used to store floodwaters in extreme and unpredictable floods and is emptied as rapidly as downstream conditions permit.

The eight authorized purposes of the system have different water needs, and the Master Manual addresses each of these purposes (see table 1). In addition, some of the Corps’ reservoir management is related to compliance with the Endangered Species Act of 1973, as amended. Two bird species nest along the river from May to August: the endangered least tern and threatened piping plover. Releases from Gavins Point, Fort Randall, and Garrison dams have been modified to accommodate these bird species by adapting releases to prevent, as much as possible, inundation of bird nests along the river.
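The four storage zones described above form a simple tiered allocation of the system's 73.1 MAF capacity. A minimal sketch follows; only the total capacity and the Permanent Pool's roughly 25 percent share come from the report, and the other zone boundaries are hypothetical placeholders:

```python
SYSTEM_CAPACITY_MAF = 73.1

# Upper bound (MAF) of each zone, checked from the bottom up. The Permanent
# Pool bound is about 25 percent of capacity per the report; the remaining
# cutoffs are illustrative assumptions, not Master Manual values.
ZONES = [
    (18.3, "Permanent Pool"),
    (56.8, "Carryover Multiple Use"),
    (68.7, "Annual Flood Control and Multiple Use"),
    (SYSTEM_CAPACITY_MAF, "Exclusive Flood Control"),
]

def storage_zone(storage_maf):
    """Return the zone containing the top of the current storage level."""
    for upper, name in ZONES:
        if storage_maf <= upper:
            return name
    raise ValueError("storage exceeds system capacity")

print(storage_zone(54.5))
```

With these placeholder boundaries, a system storage of 54.5 MAF falls in the Carryover Multiple Use Zone, which is consistent with the Master Manual's use of storage checks in that zone to set the level of navigation service.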
In addition, one endangered fish species, the pallid sturgeon, lives in the Missouri River. For the pallid sturgeon, the Master Manual calls for two “pulses” of water (temporary higher releases) from Gavins Point dam in the spring to mimic the higher spring river flows that occurred prior to the construction of the mainstem reservoirs. The last spring pulse was implemented in 2009; pulses were cancelled in 2010 and 2011 due to high water levels downstream. According to Corps officials, a 2011 independent review panel questioned the efficacy of the spring pulse, and the Fish and Wildlife Service and the Corps are, therefore, currently reevaluating the pulse. The Corps uses numerous types of hydrologic data—data relating to the movement and distribution of water in the basin—to track current conditions in the basin. Most of these data are collected by other federal agencies as part of nationwide efforts to gather weather and hydrologic data (see table 2). Reports by federal agencies and others have highlighted limitations in some of these data collection efforts in the Missouri River basin. Streamflow. According to USGS data and an October 2012 report by the Corps assessing post-flood vulnerabilities, loss of streamgages in the basin has reduced available information about streamflows. For example, according to USGS data, operation of 79 streamgages in the Missouri River basin has been discontinued in the last 10 years; this represents about 9 percent of the streamgages in the basin. Soil Moisture. The October 2012 Corps’ vulnerability report and a December 2011 Independent Technical Review Panel commissioned by the Corps indicate that data on soil moisture in the Missouri River basin are currently limited. The October 2012 Corps’ vulnerability report recommended that soil moisture be measured at predefined locations in plains states. Data on soil moisture can indicate how much of the precipitation that falls can be expected to run off into the reservoir system.
For example, if soils are dry then precipitation is more likely to soak into the soil than to run off into nearby rivers or streams. Plains snowpack. Three reports have recently identified limitations in plains snowpack data: the October 2012 Corps’ vulnerability report, the December 2011 Independent Technical Review Panel Report, and a May 2012 assessment by the NWS of forecasting during the Missouri River flood. For example, the May 2012 NWS report noted that the National Operational Hydrologic Remote Sensing Center (NOHRSC) provides modeled information on snow-water equivalent, but that observational data in the basin are sparse and not always representative of basin-wide conditions. Precipitation. The May 2012 NWS assessment also noted that precipitation gauge and radar data on precipitation in the Missouri River basin were insufficient during the flood. Agencies have begun taking steps to address some data limitations. For example, in response to the December 2011 Independent Technical Review Panel Report, the Corps worked with officials from NOAA and NRCS, among others, to develop an interagency proposal, released in February 2013, to create a snowpack and soil moisture monitoring system in the plains. Under the proposal, the agencies would (1) enhance existing climate stations with snow depth and soil moisture sensors; (2) install new climate stations in the basin to enhance existing coverage; (3) enhance NOHRSC airborne surveys; (4) identify and train volunteer or part-time hires to conduct manual snow sampling; and (5) fund state coordinator positions in Montana, Nebraska, North Dakota, South Dakota, and Wyoming to coordinate snow surveys and other snow data networks at a state level.
The Water Resources Reform and Development Act (WRRDA) of 2014, enacted into law in June 2014, included a requirement that the Secretary of the Army, in coordination with other specified agencies, develop this type of monitoring system in the Upper Missouri River Basin. In addition, NWS has developed a new technology, the Multi-Radar Multi-Sensor system, which integrates information from NWS, Canadian, and other radar systems with on-the-ground precipitation gauge information and model data to provide better estimates of precipitation. According to NWS officials, the Multi-Radar Multi-Sensor system is also capable of mitigating some gaps in radar coverage by extending the effective range of radar-based precipitation estimates from individual radars. NWS officials said this technology will be implemented nationwide by the end of 2014. According to Corps documents and officials, the Corps uses hydrologic data as an input to forecasts used to manage the reservoir system. The Corps runs two key forecasts to generate information for basin stakeholders and to make reservoir release decisions. Monthly forecast. On a monthly basis, or more frequently as needed, the Corps produces a forecast of the expected annual runoff for the remainder of the calendar year. This forecast takes into consideration current basin conditions, such as soil moisture and snowpack, as well as long-range weather outlooks and historical trends. The Corps produces a “basic” forecast, and then adjusts that forecast by a predetermined percentage to generate “upper basic” and “lower basic” forecasts to create a range of potential runoff conditions. According to Corps officials, the upper and lower basic forecasts are designed to be approximately one standard deviation away from the basic forecast and cover approximately 80 percent of the likely variation in expected runoff based on an analysis of historic runoff records.
Each month, these runoff forecast estimates are used as input to the 3-week forecast, which forecasts reservoir inflows, releases, storage levels, and hydropower generation, among other things. According to Corps officials, this forecast is used by basin stakeholders to make business decisions that are affected by reservoir releases. For example, the Western Area Power Administration, which is responsible for marketing all the hydropower generated by the six dams, makes power purchase decisions based on this forecast. In addition, the Corps makes some reservoir release decisions based on the monthly forecast, particularly to move water between the six reservoirs to adjust to current weather conditions or support downstream uses. Three-week forecast. On a weekly basis, or more frequently as needed, the Corps produces a forecast of reservoir inflows, outflows, storage, and power generation over the next 3-to-5 weeks. According to Corps officials, this model uses “water on the ground” information— specifically streamflows and reservoir levels—combined with information from the basic monthly forecast. Officials said this is the primary model they use to set daily and weekly reservoir releases and that they try not to deviate significantly from projected releases at Fort Peck and Garrison dams in this forecast, unless there are unusual circumstances. Adjustments at the other four dams are routinely made to respond to changing conditions on the ground, such as rainfall below the reservoir system. The Corps’ current runoff forecasts are deterministic, meaning that the models create a single forecast based on the existing hydrologic data. Although the monthly runoff forecast also includes the upper basic and lower basic conditions, these are still deterministic because they are generated by using a multiple of the basic forecast. 
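The contrast between the Corps' deterministic band (a fixed multiple of the basic forecast) and a probabilistic alternative (an ensemble of perturbed forecasts) can be illustrated with a toy model. The spread value, ensemble size, and perturbation scheme below are illustrative assumptions, not the Corps' or NWS's actual methods:

```python
import random
import statistics

def deterministic_band(basic_runoff_maf, spread=0.25):
    """Upper/lower 'basic' forecasts as fixed multiples of the basic
    forecast, mirroring the predetermined-percentage approach."""
    return (basic_runoff_maf * (1 - spread),
            basic_runoff_maf,
            basic_runoff_maf * (1 + spread))

def probabilistic_ensemble(basic_runoff_maf, n=1000, sigma=0.2, seed=42):
    """Mimic uncertainty in initial conditions by randomly perturbing the
    forecast, then summarize the resulting range of outcomes."""
    rng = random.Random(seed)
    members = sorted(basic_runoff_maf * (1 + rng.gauss(0, sigma))
                     for _ in range(n))
    return {"p10": members[int(0.10 * n)],
            "median": statistics.median(members),
            "p90": members[int(0.90 * n)]}

# Example: a basic forecast equal to the historical median runoff, in MAF.
low, basic, high = deterministic_band(24.6)
print(low, basic, high)
print(probabilistic_ensemble(24.6))
```

The deterministic band always reports the same three numbers for a given basic forecast, while the ensemble attaches likelihoods to a continuum of outcomes, which is the distinction the NWS officials draw.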
However, according to NOAA documents, error can be introduced into deterministic forecasts when initial hydrologic conditions are not fully known. According to NWS officials, a different type of forecasting—called probabilistic forecasting—attempts to account for uncertainty in the forecast by, for example, using statistical techniques to simulate multiple, slightly different initial conditions. These officials said that probabilistic forecasts provide a range of potential outcomes and their likelihood. Probabilistic techniques are used extensively in weather forecasting both for routine forecasts and for more rare events such as hurricanes, according to NWS officials. Annual runoff into the Missouri River reservoir system can vary significantly from year to year (see fig. 3). The lowest runoff year was 1931, with 10.6 MAF of runoff. The highest runoff year was in 2011, with 61 MAF; 61 MAF is about enough water to cover nearly the entire state of Oregon (61.4 million acres) in water 1 foot deep. Prior to 2011, the highest runoff, which also caused flooding along the river, was 49 MAF in 1997. Runoff in 2011 was about 25 percent greater than in 1997 and 148 percent greater than the historical median of 24.6 MAF. According to the May 2012 assessment of NWS forecasting, several factors combined in 2011 to produce record runoff: wet soil conditions throughout the basin leading into the winter of 2010-2011, high snowpack in both the plains and mountains, and extreme rainfall in May and June of 2011. Wet basin conditions. After experiencing a drought between 2000 and 2008, the Missouri River basin experienced relatively wet years in both 2009 and 2010. According to a December 2013 NOAA report examining climate extremes in the Missouri River basin, 2010 was the fifth wettest year on record. This precipitation created wet soil moisture conditions throughout the upper basin in the fall of 2010. Plains snowpack.
Snow can accumulate on the plains through late March or early April. In 2011, total snowfall in the Missouri River basin plains states was well above average, and the snowpack was greater than usual. Numerous cities in the basin set new seasonal snowfall records. For example, Williston, North Dakota, had 107 inches of snow, compared with a long-term average of about 35 inches. Mountain snowpack. Snowpack generally accumulates in the mountains of Montana and Wyoming throughout the winter, peaking in mid-April and then providing runoff as it melts through May and June. As of March 1, 2011, the mountain snowpack was slightly above average at about 110 percent of normal (see fig. 4). However, late April and May were extremely wet and cold, and mountain snowpack continued to build to record levels in many areas. Mountain snowpack in 2011 peaked in early May at approximately 140 percent of normal. Rainfall in May and June. Record rain fell in Montana, northern Wyoming, and the western Dakotas in May and early June 2011. Areas of south central and southeast Montana received as much as 15 inches of rain in May, which is 12 inches above normal. Most of eastern Montana received at least three times more precipitation than normal, and the month of May was ranked as one of the wettest Mays on record in Montana, Wyoming, North Dakota, South Dakota, and Nebraska. Rain continued to fall in June, with Montana, North Dakota, South Dakota, and Nebraska receiving 3 to 8 inches more rain than normal. As these weather conditions unfolded in 2011, the Corps continued to modify its release rates from the reservoir system (see fig. 5). In early April, the Corps began flood control operations by increasing releases from Gavins Point in response to the above average mountain snowpack. The Corps continued increasing releases throughout April and May, reaching 50,000 cubic feet per second (cfs) on May 9 and surpassing the previous high release rate of 70,000 cfs (set in 1997) on May 29.
Release rates increased particularly fast between late May and late June, when releases peaked at about 160,000 cfs, more than double the previous high release rate; 160,000 cfs is about the amount of water from two Olympic-sized swimming pools going past a single point in 1 second. Gavins Point release rates remained above 100,000 cfs until August 31, and it was not until December that releases returned to a more normal rate of 35,000 cfs. In the fall of 2011, as basin stakeholders and the Corps were repairing infrastructure and recovering from the flood, there was concern that additional flooding would occur in 2012. However, 2012 brought drought throughout the Missouri River basin. Nebraska and Wyoming experienced their driest year in 118 years of recordkeeping, and several other states in the basin also had very dry years. For example, Missouri had its seventh driest year, Iowa had its 11th driest year, and South Dakota had its 13th driest year. Drought intensity, as defined by the U.S. Drought Monitor, increased from January through July of 2012, at which point moderate and severe drought conditions were present in southern Montana, western South Dakota, western Nebraska, and Wyoming. Conditions worsened during the summer, and by October 2012, extreme and exceptional drought conditions were present across Wyoming, South Dakota, Nebraska, and western Iowa. Total runoff into the Missouri River mainstem reservoir system in 2012 was 19.5 MAF, or about 77 percent of normal runoff. In addition, runoff into the Missouri River below the reservoir system was also extremely low at 51 percent of normal runoff, and the Master Manual called for minimum winter releases of 12,000 cfs from Gavins Point. (According to the Master Manual, March 15 reservoir levels must be at least 54.5 MAF to initiate full-service to navigation, and July 1 reservoir levels must be at least 57 MAF to continue full-service to navigation for the remainder of the season.)
However, water intake owners in the lower basin were concerned about maintaining access to the river at those low flows, particularly since the 2011 flood scoured the river bottom in many areas. According to the Corps report describing reservoir management in 2012, the Corps exercised the flexibility in the Master Manual and elected to keep winter releases at 14,000 cfs to prevent municipalities and power plants from losing access to the river. Drought conditions persisted into 2013, and the reservoir system was 7.4 MAF below the top of the Carryover Multiple Use zone on April 1. Due to the low volume of water in the reservoirs, the Corps continued implementing drought conservation measures, according to Corps officials. For example, navigation releases during April through June were at a minimum service level, meaning flows were high enough for an 8-foot-deep channel. The drought began to ease in parts of the basin during the summer due to rainfall and associated runoff. The higher volume of water in the reservoirs in July led to a slight increase in release rates for navigation, as well as a full 8-month navigation season. Runoff into the Missouri River reservoirs was about average in 2013 at 25.1 MAF, although water levels in the upper three reservoirs remained low. Experts who participated in our meeting agreed that the Corps made appropriate release decisions during the flood and drought, given that neither the flood nor the drought could have been predicted and that the Corps needed to follow the guidelines in the Master Manual. These experts did not suggest changes to the Master Manual due to the 2011 flood or subsequent drought. Experts who participated in our meeting discussed the above-normal snowpack in the mountains and the plains, but they agreed the flood was triggered by the extreme rain in eastern Montana in May and June 2011.
This conclusion is consistent with the December 2013 NOAA report examining climate extremes in the Missouri River basin that stated the record-setting rains were the final and perhaps most critical meteorological factor leading to high runoff and flooding in 2011. The experts agreed that no existing forecasting tools, including those used by the Corps and NOAA, could have accurately predicted the extreme rainstorms that occurred in Montana more than a week in advance. The December 2011 Independent Technical Review Panel Report commissioned by the Corps also reached this conclusion, noting that accurate prediction of precipitation more than a week in advance is beyond the current state of science. Prior to our meeting, one of the experts reviewed information that was available on March 1, 2011, to determine what long-range forecast models were projecting about precipitation in the Missouri River basin for spring 2011. This expert noted that of the models he examined, only one forecasted a wet spring, and all other models forecasted normal or dry conditions. Based on the information the Corps had available in March 2011—these forecasts as well as evidence of the slightly above-normal mountain snowpack—experts who participated in our meeting said they considered the Corps’ release decisions early in the spring to be appropriate. Experts who participated in our meeting also agreed that the Corps could not have prevented flooding in 2011. Snow continued to accumulate in the mountains in April and May—well past the average date of maximum snow accumulation. The experts said that, by June 2011, the volume of water coming into the reservoirs from the extreme rains and melting snow was so great that the Corps had no choice in June and July but to release water to accommodate the inflow and prevent damage to dam infrastructure, such as spillways in danger of being overtopped. The December 2011 Independent Technical Review Panel Report reached a similar conclusion.
This report cited the absence of major dam failures as evidence of the Corps’ success during the flood and noted that dam failure beginning at Fort Peck would have caused a catastrophic disaster of unprecedented magnitude. Even if the Corps had decided on March 1, 2011, to increase releases due to the slightly larger-than-average snowpack in the mountains and plains, experts who participated in our meeting agreed that action would not have significantly reduced peak flows because of the extremely large amount of runoff in 2011. One of the experts said it would have taken several months for the Corps to release enough water from the reservoirs to make space for the runoff from the rainstorms and melting snow, and that action also could have resulted in downstream flooding. Specifically, this expert noted that high releases during the winter can cause flooding because of ice on the river, so the Corps would have needed to know in October 2010 about the upcoming extreme spring rain to release enough water in the fall to create more space in reservoirs. One of the experts said that having additional space in the reservoirs on March 1 was the only way the Corps could have significantly reduced the peak downstream flooding. This expert also noted, however, that taking steps to lower reservoir levels in this way may not be consistent with the Master Manual. Another of the experts who participated in our meeting noted that while having more flood control storage available on March 1 each year reduces the chances of flooding, it could have negative effects on the other authorized purposes of the mainstem dams in nonflood years. Experts who participated in our meeting generally agreed that the Missouri River basin’s rapid descent into drought could not have been predicted. 
One of the experts qualified this statement, noting that the drought could not have been predicted with sufficient certainty to change reservoir decisions, given the high costs of the forecast being incorrect. Prior to the meeting, one of the experts reviewed information that was available in the spring of 2012 to determine what long-range forecast models were projecting about precipitation in the Missouri River basin for the remainder of 2012. This expert noted that there was no predictability; some of the models were forecasting wet conditions, and others were forecasting dry conditions. He also explained that some of the models predicting dry conditions frequently forecast dry conditions that do not materialize. As the drought took hold, experts who participated in our meeting said the Corps followed procedures as laid out in the Master Manual. For example, the experts noted that, in 2012, the Corps released water for a full-service navigation season. The experts said that the navigation season was in accordance with the Master Manual, but it drained the reservoirs relatively quickly during the very dry summer of 2012. Specifically, according to the Corps report describing its management of the reservoir system in 2012, 22 percent of the water in storage was released in 2012, which would have reduced the amount of water available for future years if the drought lasted for several years. Experts who participated in our meeting also agreed that the Corps appropriately exercised the reservoir release flexibility granted by the Master Manual. For example, the experts agreed it was appropriate that, in the winter of 2012-2013, the Corps kept winter releases higher than normal to ensure that water intakes along the river had continued access for municipal and industrial uses. 
Experts who participated in our meeting agreed that the Corps does not need to change the Master Manual due to the 2011 flood or 2012 and 2013 drought, noting that there are no obvious deficiencies in the Master Manual. One of the experts noted that occurrence of similar extreme events should be incorporated in analyses that support any potential future changes in operating rules. In addition, several of the experts mentioned that developing the Master Manual took 17 years and that Missouri River basin stakeholders have agreed to the trade-offs and compromises in the current Master Manual. Other experts who participated in our meeting noted, however, that if the Corps could develop improved forecasting tools, it might be useful to evaluate whether changes to the Master Manual would help the Corps to act on information from the new tools. These experts explained that they were not sure whether such an evaluation would find that changes to the Master Manual would significantly help the Corps manage the reservoirs and balance the authorized purposes, but they thought it was worth examining if new forecasting tools are developed. Finally, experts who participated in our meeting also discussed challenges the Corps faces in balancing reservoir releases for all eight authorized purposes. Some of the experts thought that the Corps should restart a study called the Missouri River Authorized Purposes Study (MRAPS). MRAPS was authorized in the Corps’ fiscal year 2009 appropriations act to examine the extent to which the current authorized purposes of the river meet the needs of the residents of the Missouri River basin. The Corps worked on MRAPS for 2 years before it was defunded by Congress in fiscal year 2011 appropriations. These experts thought that an examination of the purposes was warranted, in part, because of the number of reservoir regulation decisions made for the purpose of navigation.
Another of the experts cautioned, however, that such a study might also open the idea of operating the Missouri River to benefit stakeholders outside the basin, such as navigators along the lower Mississippi River. This expert said that navigation on the Mississippi River is a $1.2 billion industry and, in some years, could benefit from flow support from the Missouri River. He pointed out, however, that such an action could use a significant amount of water from the reservoirs, perhaps to the detriment of current authorized purposes. According to Corps officials, they are not authorized by Congress to make reservoir release decisions to support Mississippi River navigation. Experts who participated in our meeting suggested that collecting more hydrologic data, improving existing hydrologic data, and incorporating probabilistic forecasting techniques could improve the Corps’ ability to make release decisions in nonextreme events. The experts stated that these data and forecasts would not have predicted the 2011 flood. However, they explained that these data and forecasts could be helpful in future, less extreme, floods. Experts who participated in our meeting suggested that improving existing hydrologic data and collecting new data could improve the Corps’ ability to make release decisions. The experts mentioned that streamflow and precipitation data could be improved, and that new soil moisture, plains snowpack, and archaeological flood and drought data could be collected. The experts said they did not believe that having these data would have materially impacted the Corps’ response to the 2011 flood. One of the experts said, while improved data would not have prevented the flood, it might have helped the Corps reduce the severity of the flood to a small degree. However, it is important to note that the hydrologic data systems discussed by the experts are not managed by the Corps but by other federal agencies as part of nationwide efforts to gather these data.
Therefore, the Corps cannot directly control the extent to which improvements in these systems are made. Experts who participated in our meeting said that maintaining and improving the USGS streamgage network is critical because it provides important data on current and historical streamflows. The experts said that historical streamflow records can also help modelers describe how flow conditions persist in streams, which enables them to create probabilistic forecasts of possible future river flows. As previously mentioned, USGS data indicates that about 9 percent of streamgages in the Missouri River basin have been discontinued in the last 10 years. USGS officials said that streamgages are often discontinued due to funding shortages, either at USGS or from the cooperative partner agencies which help fund the streamgages. According to USGS officials, the Corps provides funding for 264 of the 892 streamgages in the Missouri River basin. According to Corps officials, when their support for streamgages is reduced, they prioritize saving downstream streamgages on tributaries with more than one streamgage because downstream streamgages capture more of the river’s flow. Corps officials said that under normal circumstances, losing data from upstream streamgages is not a serious problem, but that during a flood it can become a major challenge. For example, during the 2011 flood, the sole streamgage on the Judith River—a Missouri River tributary that runs through central Montana—was destroyed when the bridge it was attached to was washed away. Without streamflow data on the Judith River, the Corps had to estimate the Judith River flows, which resulted in less accurate information. USGS officials said that they do their best to maintain the integrity of the streamgage network. USGS officials told us that most streamgages are funded through cooperation with federal, state, and local government agencies.
When streamgages are in danger of losing their funding, USGS officials work with their cooperative partners to find other funding sources to maintain the streamgage and are usually successful in finding funding for the most crucial ones. However, one of the experts who participated in our meeting said that the current cooperative funding model relied on by USGS to support most of the streamgages makes it a challenge to maintain the network since the cooperative partners may have other priorities. For example, USGS officials told us that one reservoir manager in Illinois pulled funding for a streamgage and used the money to build an outhouse. USGS officials began the National Streamflow Information Program in 2003 to federally fund a core network of 4,756 streamgages throughout the country. This program is designed to, among other things, improve USGS’ ability to continue operating high-priority streamgages when partners discontinue funding. According to USGS officials, the National Streamflow Information Program received a $6 million funding increase in fiscal year 2014, bringing its total funding to $33 million. Experts who participated in our meeting also identified gaps in the weather radar and precipitation gauge collection systems. Specifically, one of the experts said that weather radar does not do a good job of measuring winter precipitation and, even if it did, radar coverage in many parts of the basin is limited. For example, this expert noted that areas approximately 30 miles north of Pierre, South Dakota, have relatively poor radar coverage. Weather radar data is supplemented by precipitation gauge data, such as through the volunteer Community Collaborative Rain, Hail and Snow (CoCoRaHS) Network. However, the experts said that the basin is sparsely populated, which limits the pool of volunteer observers, potentially making it more difficult to collect the data needed to supplement the weather radar network.
Corps officials said that the CoCoRaHS Network can compensate for gaps in the radar coverage, and few stakeholders are seeking to expand the radar network. NWS officials agreed that radar coverage was poor in the northern and western parts of the basin, such as southeastern Montana and central South Dakota. However, NWS officials told us that they do not currently have plans to expand radar coverage because off-the-shelf radar does not have the same capabilities as the NWS’s current system. Integrating off-the-shelf radars into the system would be difficult, and building radars that match the capabilities of the current system would be expensive, according to these officials. NWS officials said that new technology—such as the Multi-Radar Multi-Sensor system—will help mitigate the gaps in precipitation data in areas where radar coverage is sparse. Experts who participated in our meeting said that making improvements to soil moisture and snowpack data would be very useful for making long-term forecasts because these conditions can be observed months before the associated runoff reaches the reservoir system. Some of the experts noted that there are major gaps in plains snowpack and soil moisture monitoring data, and improving these data would be useful in improving the Corps’ forecasting models. The experts said that gathering this data could be accomplished if the resources were available and stakeholders were willing to participate. A NOAA official working on the February 2013 interagency proposal to create a snowpack and soil moisture monitoring system said that NOAA is working with stakeholders to develop the interagency proposal, but implementation was on hold while stakeholders were waiting to see if it would be included in the then-pending 2014 WRRDA.
However, this official noted that, even though the 2014 WRRDA requires the development of a monitoring system for soil moisture and snowpack data, there may be challenges in funding the proposal, which has a projected up-front cost of $6.25 million. Specifically, agencies supporting the proposal—such as the Corps, NOAA and NRCS—will need to find money for upfront costs in their existing budgets, which could take funds away from other programs and priorities. In addition, according to the February 2013 interagency proposal, maintaining the network once it is built would cost $1.46 million per year. Some of the experts who participated in our meeting also recommended collecting archaeological data on floods and droughts, which could be used to provide a better understanding of the extreme floods and droughts in the basin before recordkeeping began in 1898. For example, USGS has ongoing archaeological work in the Black Hills of South Dakota that could allow a better understanding of how large a 10,000-year flood would be compared with a 100-year flood. However, the experts cautioned that these data may not be useful for the Corps. According to a Corps official, these data would not be used in developing the manuals they use to guide regulation decisions, although it would provide information about the risk of larger floods. The “100-year flood” is a flood that has a 1 percent chance of occurring each year. This flood magnitude is used by federal agencies to administer floodplain management programs. Experts who participated in our meeting agreed that by incorporating probabilistic techniques into runoff forecasts, the Corps could improve its ability to make release decisions in nonextreme events. Two of the experts indicated that probabilistic forecasting techniques could also improve the Corps’ ability to make release decisions in extreme events.
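The 1-percent-per-year definition of the 100-year flood implies that the chance of experiencing at least one such flood grows quickly over a multi-decade horizon, assuming independent years. A small illustrative calculation (not from the report):

```python
# Probability of at least one "T-year" flood over n years,
# assuming each year is independent: 1 - (1 - 1/T)**n.
def prob_at_least_one(return_period_years: int, horizon_years: int) -> float:
    annual_p = 1 / return_period_years
    return 1 - (1 - annual_p) ** horizon_years

p30 = prob_at_least_one(100, 30)   # e.g., a 30-year planning horizon
print(f"{p30:.0%}")                # about 26%
```

So a "100-year" flood has roughly a one-in-four chance of occurring at least once in any given 30-year period, which is why the return-period label alone can understate the risk.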
Experts who participated in our meeting agreed that probabilistic forecasting techniques could help the Corps manage risks and make decisions in the face of uncertainty. One of the experts said that these techniques are useful for communicating risk information—such as the risk of severe excesses or deficiencies of water—to the public and public officials. Another of the experts said that probabilistic techniques could allow the Corps to provide increased benefits to the system. These benefits could include higher reliability of water supply without increasing flood risk, increased flood protection, small increases in hydropower production, and easier implementation of variable river flows to create and maintain specific fish and wildlife habitats. The primary type of probabilistic forecasting techniques discussed by experts who participated in our meeting was ensemble forecasting. Ensemble forecasting combines multiple forecasts to generate a sample of potential future weather developments. The individual forecasts, called ensemble members, can be created either using several different models in concert or multiple runs of the same model with slightly different initial conditions. These forecasts are then compared to determine how much agreement there is between the various ensemble members. Some of the experts said that ensemble forecasts can help forecasters correct for both uncertainty about initial conditions and uncertainty about how a model is constructed, which are common causes of forecasting error. One method of generating ensemble forecasts that the experts said could be potentially useful in the Missouri River basin is the Hirsch method. This method uses correlations between 1 month’s streamflow and the previous month’s flow to generate ensemble members, since flow conditions from the previous month generally persist into the next.
For example, Hirsch model forecasts examine the historical statistical relationships between streamflow in March and streamflow in April since March’s conditions persist into April. When this relationship exists, it allows for forecasts with less variance and uncertainty compared with other methods, leading to more accurate forecasts. Another way experts who participated in our meeting suggested that the Corps could incorporate ensemble modeling into its forecasts would be to leverage the existing ensemble streamflow forecast created by the NWS Advanced Hydrologic Prediction Service (AHPS). The AHPS forecasts use ensemble streamflow predictions to determine the chances of a river exceeding minor, moderate, or major flood levels over the next 90 days. One of the experts noted that the current AHPS forecast locations in the Missouri River basin are not located where the Corps likely needs them to be in order to have enough information to make its decisions. NWS officials told us that they produce probabilistic forecasts at 465 AHPS forecast locations in the basin but that none of these locations are along the mainstem of the Missouri River because probabilistic forecasting along the mainstem would require integrating the Corps’ reservoir management procedures into the NWS probabilistic models. One of the experts said the Corps could overcome this challenge by identifying statistical relationships between the existing AHPS locations and the locations that the Corps would want to use for decision making and conducting a pilot project to see how useful these statistical relationships would be for reservoir management decisions. This expert said that this pilot effort would not be a difficult undertaking and could be accomplished for roughly $100,000, but the Corps would need to coordinate closely with NWS.
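The Hirsch method described above exploits month-to-month persistence: regress one month's flow on the previous month's across the historical record, then generate ensemble members by adding resampled historical residuals to the regression prediction. A minimal sketch of that idea, using made-up illustrative flow values (not Missouri River observations):

```python
import random
import statistics

# Historical paired flows for two adjacent months (illustrative values only).
march_flows = [10.0, 12.5, 9.0, 14.0, 11.0, 13.5, 8.5, 15.0]
april_flows = [11.0, 13.0, 10.0, 15.5, 12.0, 14.0, 9.5, 16.0]

# Least-squares fit of April on March (lag-1 persistence): april ~ a + b*march.
mx = statistics.mean(march_flows)
my = statistics.mean(april_flows)
b = (sum((x - mx) * (y - my) for x, y in zip(march_flows, april_flows))
     / sum((x - mx) ** 2 for x in march_flows))
a = my - b * mx

# Residuals capture the historical scatter around the persistence line.
residuals = [y - (a + b * x) for x, y in zip(march_flows, april_flows)]

def hirsch_ensemble(observed_march, n_members=1000):
    """Ensemble of plausible April flows given this year's observed March flow,
    built by adding resampled historical residuals to the regression prediction."""
    return [a + b * observed_march + random.choice(residuals)
            for _ in range(n_members)]

ensemble = hirsch_ensemble(observed_march=12.0)
print(f"median April forecast: {statistics.median(ensemble):.1f}")
```

The spread of the ensemble, not just its median, is the point: it gives a reservoir manager a distribution of plausible outcomes rather than a single deterministic number.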
According to experts who participated in our meeting, reservoir managers in several basins throughout the United States currently use probabilistic techniques to manage reservoirs. For example, reservoir managers in the Occoquan River basin (which provides drinking water to Fairfax County, Virginia) successfully used Hirsch method forecasts in the 1970s to support implementation of drought mitigation measures that were less onerous than were thought necessary based on their deterministic forecasts. In 2009, the New York City Department of Environmental Protection (DEP) began developing a tool that incorporates both Hirsch method and NWS forecasts to help manage drinking water reservoirs that serve 9 million residents of the city and surrounding areas, as well as other competing demands on the system, such as release requirements on the Delaware River, flood control, and recreational fisheries. DEP officials said that a similar tool could help the Corps better manage other reservoir systems, such as along the Missouri River. DEP officials said the tool, which cost about $8 million, was developed in stages: in late 2010, it included modeling based on the Hirsch method and, in November 2013, it incorporated NWS hydrologic ensemble forecasts. DEP officials said the tool uses these ensemble forecasts to model potential reservoir management scenarios to meet water quality goals—such as reducing the amount of sediment in the city’s drinking water—without having to install expensive new water filtration plants. DEP officials also said the tool supports reservoir operations decisions, including preemptive releases in advance of large storm events to create space in the reservoirs and releases to support downstream communities. DEP officials said the tool is more effective than their previous method of forecasting, which, much like the Corps’ Missouri River forecasting, used historical data and runoff volume calculations.
DEP officials said that ensemble forecasts have been effective in modernizing management of their reservoir system by reducing uncertainty and helping them to better assess risk and make informed decisions, and that similar systems could help the Corps make risk-based decisions about reservoir releases informed by real probabilities. Corps officials told us that they have not considered using probabilistic techniques, such as the Hirsch method or NWS forecast products, in the Missouri River system because they are not sure the benefits would outweigh the difficulty of creating the models or explaining the new methods to their stakeholders. Corps officials told us that deterministic forecasts are easy to maintain and simpler to explain to stakeholders than probabilistic methods. These officials also said that their current methods work well in all but the most extreme events and are a more efficient use of their limited staff resources. Corps officials said that assigning probabilities to their current runoff forecast could give them and their stakeholders more certainty about the likelihood that a very high or very low runoff year will develop. However, the Corps would still have to select one of the forecasts on which to base their real-time operations. This would, in effect, require the application of the same engineering judgment that is used in their deterministic forecast, according to Corps officials. In addition, Corps officials said that Missouri River basin stakeholders who see the results of the Corps’ models would face similar challenges, but they would not have the years of engineering expertise to determine how to act on a range of potential very high or very low releases with the same likelihood of occurrence. Furthermore, Corps officials said that basing system operations on probabilistic forecasts would require changes to many of their current operational procedures, and thus changes to the Master Manual.
However, experts who participated in our meeting agreed that the Corps should investigate probabilistic techniques. Some of the experts said that the Corps can achieve better outcomes for the basin using probabilistic techniques than with their current methods. One of the experts noted that using probabilistic techniques can help the Corps focus on the risks of flood and drought in less extreme years than 2011, which may help to increase the benefits to basin stakeholders from the six mainstem dams. The experts also agreed on the importance of evaluating any new forecasting methods using hindcasting, which uses historical weather and stream information to determine how effectively a given forecasting approach would have predicted past events. One of the experts said that hindcasts are a powerful tool for showing the Corps and their stakeholders that a new probabilistic forecasting model would have provided useful information in the past, will be able to provide useful information in the future, and that changes to operating rules based on the ensemble forecasts would create win-win outcomes. During both the 2011 flood and the subsequent drought, the Corps communicated with Missouri River stakeholders in a variety of ways, which most stakeholders we interviewed said were effective. Nearly all stakeholders we interviewed were generally satisfied with the Corps’ communication with them during these events, saying that the information they received from the Corps was timely and sufficient for their purposes. Stakeholders we interviewed said that the Corps communicated with them in a variety of ways during both the flood and drought. For most stakeholders, the Corps was the primary source of information during the flood, but fewer than half of these stakeholders used the Corps as their primary source of information during the drought.
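Hindcasting, as the experts describe it, replays history: for each past year, a forecast is generated using only the data that would have been available at the time, and is then scored against what actually happened. A toy sketch of that loop, comparing a simple persistence forecast against a climatology (long-term average) forecast on made-up annual runoff values (the numbers and both forecasters are illustrative assumptions, not the Corps' methods):

```python
import statistics

# Illustrative annual runoff observations (MAF), oldest first.
runoff = [24.1, 26.3, 22.8, 30.5, 25.0, 19.5, 25.1, 27.2, 23.4, 28.0]

def hindcast_mae(forecast_fn, observations, warmup=3):
    """Mean absolute error of a forecaster replayed over history.

    forecast_fn sees only the observations available *before* each
    target year, mimicking a real-time decision.
    """
    errors = []
    for t in range(warmup, len(observations)):
        prediction = forecast_fn(observations[:t])
        errors.append(abs(prediction - observations[t]))
    return statistics.mean(errors)

def persistence(history):
    return history[-1]                  # "next year looks like this year"

def climatology(history):
    return statistics.mean(history)     # long-term average

mae_persistence = hindcast_mae(persistence, runoff)
mae_climatology = hindcast_mae(climatology, runoff)
print(f"persistence MAE: {mae_persistence:.2f} MAF")
print(f"climatology MAE: {mae_climatology:.2f} MAF")
```

Comparing the two error scores is the core of the exercise: a proposed forecasting method earns trust only if its hindcast errors beat the simple baselines on the historical record.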
In both events, most stakeholders said that the methods through which they received information from the Corps were the most effective way for the Corps to communicate with them. As shown in table 3, many of the communication methods mentioned by the stakeholders we interviewed were used by the Corps in both the flood and drought. One of those methods was the Corps’ conference calls. According to Corps officials, these calls began in May 2011 as a way to provide daily updates and information during the flood to congressional representatives, state and local officials, and the news media about (1) planned water releases from the six Corps dams on the Missouri River, (2) weather forecasts, and (3) the Corps’ repair schedule. The conference calls also gave these stakeholders an opportunity to ask the Corps questions about their own communities. In 2012 and 2013, the Corps continued these calls on an almost monthly basis during the winter and spring runoff season. Nearly all of the stakeholders we interviewed reported that they took part in the conference calls at some point. Stakeholders also reported receiving information from the Corps via e-mail. For example, a few stakeholders mentioned receiving information during the flood in e-mails from a Corps district official who updated them on release rates and provided other useful information. One of these stakeholders reported that, if they had questions, they could contact this official by e-mail and would obtain a response. During the drought, one stakeholder said that he received monthly updates about the reservoir by e-mail. In addition, some stakeholders mentioned communication methods specific to the drought.
For example, four stakeholders mentioned receiving a letter from the Corps that was sent to water intake owners in April 2013 warning of a potential need to reduce water releases from Gavins Point dam to as low as 9,000 cfs in the fall of 2013 as a conservation measure. These stakeholders took steps to ensure that their intakes would operate at the low levels. Nearly all of the stakeholders we interviewed were generally satisfied with the Corps’ communication during the flood. The information they received from the Corps helped them take a number of actions, as shown in table 4. For example, officials from several state and local government agencies said they used information from the Corps to identify infrastructure that would be affected by floodwaters and either protect it by sandbagging or, if possible, relocate it out of the flood zone. Other stakeholders, such as one state parks department official, used the Corps’ information to plan for facility closures or remove equipment to prevent damage. Stakeholders also shared information they received from the Corps with other people. For example, one state agency official shared the Corps information with others in his agency, with other state agencies, and with farmers and levee districts throughout the state. Similarly, one nonprofit organization official disseminated information to its members who are located throughout the Missouri River basin. Most of the stakeholders we interviewed said that the information they received from the Corps was sufficient for their purposes. For example, one state agency official praised the inundation maps received from the Corps for having considerable amounts of useful information, and also praised the Corps’ status updates about which levees were in danger of failing or had already failed due to the effects of the flood. In contrast, several stakeholders said that the information was not sufficient for their purposes.
For example, 8 stakeholders said that the Corps changed its water release estimates from the dams too frequently. Specifically, one stakeholder reported that during May 2011, the Corps revised its release estimates upward five times in a 2-week span. The final estimate, provided in early June, was nearly three times the size of the first estimate from mid-May. This stakeholder said his agency had to revise flood control plans after each estimate, and that his agency eventually decided to plan for the highest possible releases rather than frequently revising its plans. Most of the stakeholders we interviewed said that the Corps’ information was generally timely. For example, one state agency official said that the Corps provided data in a timely manner, and that the agency was always informed when the Corps planned to increase releases and by how much. Similarly, a local government official said that while he did not always like what the Corps was telling him, the Corps always provided accurate and timely data about when the releases were going to change, and by how much. In contrast, 5 stakeholders said that the Corps’ information was generally untimely during the flood, and 5 others said that the Corps’ information was untimely in the beginning stages of the flood but improved over time. Seven of the 10 stakeholders who cited problems with timeliness were located in North and South Dakota, the states where five of the six Corps dams are located. Although nearly all of the stakeholders we interviewed were satisfied with the Corps’ communication during the flood, most stakeholders offered at least one suggestion for how the Corps could improve its communication during future floods. However, there was no consensus among stakeholders on these suggestions. A few stakeholders suggested that the Corps communicate any uncertainty associated with its release estimates.
One of these stakeholders explained that this could include the Corps telling them the worst-case scenario for releases, not just the most likely case. Corps officials expressed concerns about communicating this type of information with their release estimates since it could be difficult to use without the appropriate context. In addition, a few stakeholders suggested that the Corps hold a conference call specifically for agency officials. One of these stakeholders said that having the news media on the call made it difficult to discuss sensitive response-related information. Corps officials also said that having separate conference calls for agency officials and elected officials and media is feasible and that they would consider this in the future. In contrast, some stakeholders said there is nothing more the Corps could do to improve communication in the event of another flood. A few stakeholders identified actions the Corps took during the flood that they appreciated, such as having a Corps official embedded in their emergency operations center. Nearly all of the stakeholders we interviewed who were in contact with the Corps were generally satisfied with the Corps’ communication during the drought. These stakeholders used the information they received to take a number of actions, as shown in table 5. For example, officials from one local government used the information they received from the Corps to analyze options to update their intake to ensure that it would remain operational if the Corps’ winter releases dropped to 9,000 cfs. As was the case during the flood, some of these stakeholders said that they shared information with others during the drought. For example, one nonprofit official said that he disseminated Corps information to Missouri River navigators, shipping companies, and agricultural producers, among others. Nearly all of these stakeholders said that the Corps gave them sufficient information for their purposes. 
One state agency official said that he had a “good handle” on the problems caused by drought and that the Corps had explained things well. Nearly all of the stakeholders we interviewed said that the Corps communicated with them in a timely fashion. However, 8 of these stakeholders said that the issue of timeliness is different in a drought than during a flood. As one stakeholder explained, droughts do not present the same type of near-term safety issues that must be dealt with immediately. Instead, droughts stress the water system at a lower level over a longer period of time. Although nearly all of the stakeholders we interviewed were satisfied with the Corps’ communication during the drought, some stakeholders offered suggestions for how the Corps could improve its communication during future droughts. However, there was no consensus among stakeholders on these suggestions. The most common suggestion, mentioned by several stakeholders, was that the Corps make better use of technology in presenting information, such as by improving its website. Corps officials acknowledged the importance of a user-friendly website and said that it can be hard to find information on their current website. These officials noted that redesigning a website would take a significant effort and that doing so is not a high priority given current staffing and funding levels. However, these officials mentioned that some website improvements have been made recently, such as adding a map of streamgages within the basin with links to the raw data, which could be on Corps, U.S. Bureau of Reclamation, USGS, or NWS websites. Most stakeholders did not have suggestions for the Corps on improving communication in future droughts. The extreme flood of 2011 followed by severe drought in 2012 and 2013 created challenging conditions on the Missouri River for the Corps. 
Experts who participated in our meeting agreed that the Corps made appropriate release decisions during the flood and drought, given the circumstances. However, the experts agreed that techniques such as probabilistic forecasting have the potential to improve the Corps’ ability to make release decisions in nonextreme events. Probabilistic forecasting could allow the Corps to make better risk-based decisions and provide increased benefits to residents in the Missouri River basin, such as higher reliability of water supply, increased flood protection, small increases in hydropower production, and easier implementation of variable river flows to create fish and wildlife habitats. However, the Corps currently uses deterministic forecasting methods, and Corps officials told us that they have not assessed the pros and cons of using probabilistic techniques because they are not sure that the benefits of more sophisticated probabilistic modeling techniques would outweigh the difficulty of creating the models or explaining the new methods to the stakeholders in the Missouri River basin. New forecasting methods can be evaluated—using a technique known as hindcasting—to determine how effectively a new forecasting approach would have predicted past events. According to the experts, hindcasting is a powerful tool for showing the Corps and its stakeholders that a new probabilistic forecasting model would have provided useful information in the past, that it can provide useful information in the future, and that changes to operating rules based on ensemble forecasts would create win-win outcomes. To ensure the U.S. Army Corps of Engineers considers a full range of forecasting options to manage the Missouri River mainstem reservoir system, we recommend that the Secretary of Defense direct the Secretary of the Army to direct the Chief of Engineers and Commanding General of the U.S. 
Army Corps of Engineers to evaluate the pros and cons of probabilistic forecasting techniques that could improve the U.S. Army Corps of Engineers’ ability to anticipate weather developments, and to evaluate whether forecasting changes are warranted. We provided a draft of this product to the Departments of Commerce, Defense, and the Interior for comment. In its written comments, reprinted in appendix III, the Department of Defense concurred with our recommendation and noted that it will take steps to address the recommendation. The Department of Commerce provided technical comments that we incorporated as appropriate. The Department of the Interior had no comments. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Defense, and the Interior; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Larry Cieslik, retired, U.S. Army Corps of Engineers (Corps)

Does the Corps have access to adequate data and forecasting information that allows it to make timely release decisions during droughts and floods? If not, why? What are the consequences of not having these data and forecasts? If data elements needed for decision making are missing, what technical challenges, if any, are there in collecting these data?

Based on information presented by the Corps about how it uses data collected by federal and state agencies, are those data being used to support reservoir operations decisions, as appropriate? Why, or why not?

How should the Corps make reservoir operations decisions in the face of uncertainty in runoff forecasts? How, if at all, should the Corps incorporate uncertainty in runoff forecasts into information provided to basin stakeholders or the public?

What are the pros and cons of the Corps’ policy to regulate releases based on “water on the ground”? What steps, if any, could the Corps take to improve runoff forecasting?

In the 2011 flood, were there additional reservoir release actions the Corps might have taken to better manage flood risks and, if so, what may have been the consequences of not taking these actions? What constraints might exist on those actions?

In the 2012 and 2013 drought, were there additional reservoir release actions the Corps might have taken to better manage drought risk and, if so, what may have been the consequences of not taking these actions? What constraints might exist on those actions?

Does the Corps have appropriate flexibility to regulate the river to manage risk of flood and drought? What are the pros and cons of the current amount of flexibility the Corps has?

During low water conditions, the Corps works to balance the competing interests of the eight authorized purposes of the Missouri River reservoirs. How well is the Corps balancing these purposes and what, if any, improvements could the Corps make? What constraints might exist on those actions?

The most recent changes to the Master Manual raise the threshold below which the Corps regulates the system in drought conservation mode. What are the pros and cons of this new threshold?

This report describes (1) experts’ views on the Corps’ release decisions during the 2011 flood and 2012 and 2013 drought; (2) additional actions, if any, experts recommend to improve the Corps’ ability to make future release decisions; and (3) stakeholders’ views on how the Corps communicated information during the flood and drought and improvements, if any, stakeholders suggest. 
To address all of our objectives, we reviewed relevant laws, including the Flood Control Act of 1944 that authorized the construction of dams along the Missouri River. In addition, we reviewed the documents that the U.S. Army Corps of Engineers (Corps) uses to guide its release decisions, including the Missouri River Mainstem Reservoir System Master Water Control Manual (Master Manual) (last updated in 2006) and the Annual Operating Plans for 2010-2011, 2011-2012, and 2012-2013. We also reviewed documents produced or commissioned by the Corps that describe the details of the Corps’ release decisions during the flood and drought, including “Summary of Actual 2011 Regulation,” “Summary of Actual 2012 Regulation,” and the December 2011 Independent Technical Review Report. In addition, we reviewed National Oceanic and Atmospheric Administration (NOAA) and U.S. Geological Survey (USGS) documents about existing hydrologic data collection systems and National Weather Service (NWS) documents about weather and flood forecasts during 2011. We conducted a technical review of the December 2011 Independent Technical Review Report and the NOAA Climate Assessment Report to ensure that the methodologies used were appropriate to support the conclusions reached. We also interviewed Corps officials at the Missouri River Basin Water Management Division who are responsible for making release decisions and conducted a site visit to the Oahe project in South Dakota to gather information about dam operations and water levels during the 2011 flood. To obtain expert views on the Corps’ information and release decisions, we convened a meeting of experts to discuss these issues. This meeting was held at the National Academy of Sciences (NAS) in February 2014, and staff at NAS assisted in identifying experts for the meeting. 
To identify the experts appropriate for this meeting, NAS staff solicited nominations from: current and former members of the NAS Water Science and Technology Board; current and former members of the Water Science and Technology Board study committees; select members of the NAS and the National Academy of Engineering; National Research Council staff; and other experts. Experts were selected based on knowledge of: (1) reservoir operations and river basin system modeling (both in the United States and abroad); (2) weather conditions and forecasts, and river stage forecasting, specifically in the Missouri River system; (3) modeling of the relationship between precipitation and runoff; and (4) Corps operational decisions. The experts identified by NAS included individuals with a broad set of viewpoints and knowledge, including experts from federal and state government agencies, the private sector and consultants, academia, and retired federal water and engineering experts. The range of the experts’ expertise included reservoir system operations and modeling, hydrology and hydraulics, civil engineering, meteorology, rainfall-runoff modeling, and Corps reservoir operations. The nine experts were evaluated for conflicts of interest. A conflict of interest was considered to be any current financial or other interest that might conflict with the service of an individual because it (1) could impair objectivity and (2) could create an unfair competitive advantage for any person or organization. All potential conflicts were discussed by NAS and GAO staff. The nine experts were determined to be free of conflicts of interest, and the group as a whole was judged to have no inappropriate biases. See appendix I for a list of the experts and the questions discussed during the 2-day meeting. The 2-day expert meeting began with a 1-hour presentation by the Chief of the Missouri River Basin Water Management Division and an opportunity for the experts to ask questions. 
After questions, the Corps’ representative left the meeting and experts began discussing the Corps’ data, forecasts, and release decisions. The meeting was recorded and transcribed to ensure that we accurately captured the experts’ statements, and we reviewed the transcripts as a source of evidence. We analyzed the transcripts to identify key statements the experts made regarding the Corps’ data, forecasts, and release decisions. We sent these statements, via e-mail, to each of the experts to ensure that they all agreed with our characterization of the findings of the 2-day meeting. We received replies from all nine experts generally agreeing to the statements and making some additions and clarifications, which we incorporated as appropriate. To obtain stakeholders’ views of the Corps’ communication during flood and drought, we used a standard set of questions to interview a nonprobability sample of 45 stakeholders in the Missouri River basin. We identified these stakeholders based on our preliminary interviews and sought to include organizations from each of the seven states included in our review and related to the eight authorized purposes of the Missouri River reservoir system. For example, to obtain perspectives related to flood control, we identified communities of different sizes along the river and interviewed officials from those communities responsible for emergency management and public works. These local communities included cities as large as Kansas City, Missouri (more than 450,000 people), and as small as Fort Pierre, South Dakota (roughly 2,000 people). We also spoke with state emergency management agencies in each of the seven states. Similarly, to obtain perspectives related to navigation, we interviewed a barge operator, a terminal operator, and nonprofit organizations that advocate for navigation along the Missouri River. In addition, to obtain perspectives related to fish and wildlife, we interviewed an official at the U.S. 
Fish and Wildlife Service active in addressing endangered species in the Missouri River basin and state officials involved in fish and wildlife issues from each of the seven states. See table 6 for a complete list of the stakeholders we interviewed about the Corps’ communication efforts. To obtain the views of these 45 agencies and organizations, we developed a structured interview guide that included questions about the Corps’ mode of communication with stakeholders (for example, via e-mail, phone, or letter), and how effective that communication was during the recent flood and drought. We conducted two pretests of the questionnaire and made appropriate changes based on these pretests. We used the structured interview guide to obtain the views of these organizations either via phone or, in cases where the respondent preferred, via e-mail. We analyzed the responses to provide insight into organizations’ views on the Corps’ communication during the flood and drought. These stakeholder interviews provide key insights and illustrate opinions concerning Missouri River basin issues; however, the results of our interviews cannot be used to make generalizations about all views. In some cases, interview questions were skipped, as appropriate. For example, some stakeholders did not interact with the Corps during the drought, and we did not ask these stakeholders questions about the Corps’ communication during that time. We conducted this performance audit from August 2012 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Vondalee R. 
Hunt (Assistant Director); Cheryl Arvidson; Elizabeth Beardsley; Michelle Cooper; Cindy Gilbert; Geoffrey Hamilton; Armetha Liles; Perry Lusk, Jr.; and Janice Poling made key contributions to this report.
|
The Missouri River stretches from western Montana to St. Louis, Missouri. The Corps manages six dams and reservoirs on the river to provide flood control and for other purposes, such as recreation and navigation. The Corps bases reservoir release decisions on the guidance in the Master Manual. In the 2011 flood, the Corps managed the highest runoff volume since 1898, resulting in record reservoir releases. Subsequently, drought occurred in the basin in 2012 and 2013. GAO was asked to review the Corps' release decisions and communication during the flood and drought. This report examines (1) experts' views on the Corps' release decisions; (2) experts' recommendations to improve the Corps' release decisions; and (3) stakeholders' views on the Corps' communication, as well as any suggested improvements. GAO worked with the National Academy of Sciences to convene a meeting of nine experts to discuss the Corps' data, forecasts, and release decisions. GAO also interviewed 45 Missouri River basin stakeholders, including state and local agencies, among others, to discuss their views on the Corps' communication. The views of stakeholders are not generalizable. Experts who participated in a GAO-sponsored meeting agreed that the U.S. Army Corps of Engineers (Corps) made appropriate release decisions during the 2011 flood and 2012 and 2013 drought affecting the Missouri River basin, given the severity of these events. These experts acknowledged that the flood was primarily due to extreme rain in eastern Montana in May and June 2011. The experts agreed that no existing forecasting tools could have accurately predicted these extreme rainstorms more than a week in advance. One of the experts also said that the Corps would have needed several months to release enough water from the reservoirs to have sufficient space for the runoff that occurred in 2011, and predicting an extreme runoff year that far in advance is beyond the current state of science. 
Moreover, the experts agreed that the Corps appropriately followed the drought conservation procedures in the Missouri River Mainstem Reservoir System Master Water Control Manual (Master Manual), which sets out policies for managing the river. The experts agreed that the Corps does not need to change the Master Manual in response to the 2011 flood or subsequent drought. However, some of the experts noted that if the Corps develops improved forecasting tools, it might be useful to evaluate whether changes to the Master Manual would help the Corps to act on information from the new tools. The experts suggested that improving data systems and introducing new runoff forecasting techniques could improve the Corps' ability to make release decisions in less extreme events than the 2011 flood. These data systems—such as streamgages, weather radar, precipitation gauges, soil moisture monitoring, and monitoring for snow on the plains—are not managed by the Corps, but by other federal and state agencies, which creates challenges beyond the Corps' control. The experts agreed that probabilistic forecasting techniques—which correct for unknown initial conditions using statistical techniques and provide a range of potential outcomes and their likelihood—could help the Corps manage risks better than its current method, which produces a single forecast estimate. One of the experts said that probabilistic methods could provide greater benefits, such as higher water supply reliability, increased flood protection and hydropower production, and easier implementation of variable flows to create fish and wildlife habitats. Probabilistic techniques are currently used by New York City to support reservoir releases to manage flood risk and meet water quality goals without adding expensive new filtration equipment. 
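The difference between a single deterministic runoff estimate and a probabilistic (ensemble) forecast, along with the hindcasting check the experts described, can be illustrated with a minimal sketch. The runoff values, the resampling scheme, and the percentile bands below are illustrative assumptions for demonstration only; they are not the Corps' or the National Weather Service's actual forecasting models.

```python
import random
import statistics

random.seed(42)

# Synthetic annual runoff values (million acre-feet), standing in for
# the historical upper-basin records a forecaster would use.
observed = [18.0, 22.5, 25.1, 19.8, 30.2, 24.7, 21.3, 27.9, 23.4, 26.0]

def deterministic_forecast(history):
    """Single point estimate: the historical mean (a stand-in for a
    one-number runoff forecast)."""
    return statistics.mean(history)

def ensemble_forecast(history, n_members=500):
    """Probabilistic forecast: resample history with noise to generate
    many plausible outcomes, then summarize them as percentile bands."""
    members = sorted(random.choice(history) + random.gauss(0, 2.0)
                     for _ in range(n_members))
    return {
        "p10": members[int(0.10 * n_members)],
        "p50": members[int(0.50 * n_members)],
        "p90": members[int(0.90 * n_members)],
    }

def hindcast_coverage(history):
    """Hindcasting: for each past year, forecast from the remaining
    years and check whether the observed runoff fell inside the
    10th-90th percentile band."""
    hits = 0
    for i, obs in enumerate(history):
        bands = ensemble_forecast(history[:i] + history[i + 1:])
        if bands["p10"] <= obs <= bands["p90"]:
            hits += 1
    return hits / len(history)

point = deterministic_forecast(observed)
bands = ensemble_forecast(observed)
print(f"Deterministic forecast: {point:.1f} MAF")
print(f"Probabilistic forecast: 10th={bands['p10']:.1f}, "
      f"median={bands['p50']:.1f}, 90th={bands['p90']:.1f} MAF")
print(f"Hindcast 10-90% band coverage: {hindcast_coverage(observed):.0%}")
```

The point of the exercise is the shape of the output, not the numbers: a decision maker given only the point estimate cannot weigh low-probability, high-consequence outcomes, while the percentile bands support explicitly risk-based release decisions, and a hindcast whose bands reliably contain past observations is evidence the probabilistic method would have been useful.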
Corps officials said that they have not considered using probabilistic techniques in the Missouri River basin because they are not sure the benefits would outweigh the difficulty of creating the models or explaining the new methods to their stakeholders. During both the flood and drought, the Corps communicated with Missouri River stakeholders in a variety of ways, which most stakeholders GAO spoke with said were effective. Most stakeholders were generally satisfied with the Corps' communication, saying that the information they received from the Corps was timely and sufficient for their purposes. Most stakeholders had at least one suggestion on how the Corps could improve communication; however, there was little consensus on any one suggestion. A few stakeholders suggested that the Corps hold separate conference calls to discuss sensitive response-related issues. Corps officials said that they would consider this in the future. GAO recommends that the Corps evaluate the pros and cons of incorporating new forecasting techniques into its management of the Missouri River reservoirs. The Department of Defense concurred with the recommendation.
|
PBGC was created as a government corporation by the Employee Retirement Income Security Act of 1974 (ERISA) to help protect the retirement income of U.S. workers with private-sector defined benefit plans by guaranteeing their benefits up to certain legal limits. PBGC receives no funds from general tax revenues. Operations are financed by insurance premiums set by Congress and paid by sponsors of defined benefit plans, recoveries from the companies formerly responsible for the plans, and investment income of assets from pension plans that PBGC trustees. Under current law, other than statutory authority to borrow up to $100 million from the Treasury Department, no substantial source of funds is available to PBGC if it runs out of money. In the event that PBGC were to exhaust all of its holdings, benefit payments would have to be drastically cut unless Congress were to take action to provide support. In 2003, GAO designated PBGC’s single-employer program as high-risk, and PBGC has remained high-risk with each subsequent update, including our most recent update in 2009. This means that the program still needs urgent congressional attention and agency action. We specifically noted PBGC’s prior-year net deficit, as well as the risk of the termination among large, underfunded pension plans, as reasons for the program’s high-risk designation. Over the last 6 years or so, the assets and liabilities that PBGC accumulated from trusteeing plans have increased rapidly. This is largely due to the termination, typically through bankruptcies, of a number of very large, underfunded plan sponsors. Last May, PBGC reported that unaudited financial results through the second quarter of fiscal year 2009 showed its deficit tripling since the end of fiscal year 2008, from about $11 billion to about $33.5 billion. Since then, the influx of large plan terminations has continued. 
For example, in August 2009, PBGC assumed responsibility for six Delphi pension plans, covering about 70,000 workers and retirees, and underfunded by a total of about $7.0 billion. PBGC estimated that it would be liable for about $6.7 billion of this underfunding. Our review of plans terminated and trusteed between fiscal years 2000 and 2008 found that PBGC completed most participants’ benefit determinations in less than 3 years, but required more time—up to 9 years—to process determinations for complex plans, plans with missing data, and plans with large numbers of participants. As some pension advocacy groups and union representatives have noted, long delays and uncertainty over final benefit amounts make it difficult for workers to plan for retirement, and especially for retirees who have come to depend on a certain level of monthly income. At the same time, the benefit determination process requires many steps to complete. It requires gathering extensive data on plans and each individual’s work and personnel history, and identifying who is eligible for benefits under the plan. This can be particularly complicated if the company or plan has a history of mergers, an elaborate structure, or missing data. It requires calculating each participant’s benefit amount based on provisions that vary from plan to plan, applying the legal limits on guaranteed benefit amounts in each case, and valuing plan assets and liabilities to determine if some or all of the nonguaranteed benefit amount can still be paid. Also, the larger the plan, the heavier the workload for PBGC. While the average number of participants per plan is slightly fewer than 1,000, we found that some plans have many more—nearly 93,000 in the case of Bethlehem Steel. PBGC’s benefit determination process is illustrated in figure 1. The key points of contact with workers and retirees that occur during this process are described in detail below. 
PBGC’s first communication with participants is generally a letter informing them that their pension plan has been terminated and that PBGC has become the plan trustee. Shortly thereafter, this letter is generally followed by a more detailed letter with a packet of materials, including a DVD with an introduction to PBGC and answers to frequently asked questions about how the benefit determination process works. PBGC officials refer to this as a “welcome” package. Additionally, for large plans likely to have many participants affected by the legal limits on guaranteed benefits, PBGC will hold on-site information sessions shortly after plan termination. PBGC also operates a customer service center with a toll-free number that participants can call if they have questions, provides a Web site for workers and retirees with detailed information about plans and benefits, and sends participants a newsletter with information about PBGC once or twice per year. Nearly all pension advocacy groups and union representatives with whom we spoke praised PBGC’s efforts to hold information sessions with the larger plans. One union representative commended PBGC staff for going out into the field to talk with participants and answer questions even though participants are likely to be angry. Other union representatives commented that they have been impressed by PBGC’s staff for staying at these sessions until they have answered every participant’s questions. While these sessions are generally viewed as helpful, some pension rights advocates noted that the information presented is difficult for participants to understand and apply to their own situations. Comments about PBGC’s customer service center and Web site were also mixed. If the participant is already retired, or retires before the benefit determination process is complete, PBGC makes payments to the retiree based on an estimate of what the final benefit amount will be. 
According to PBGC, most participants of terminated plans are entitled to receive the full amount of benefits they earned under their plans. In such cases, the calculation of an estimated benefit is straightforward. However, some participants may have their benefits reduced to comply with certain limits, specified under ERISA and related regulations. These limits include the phase-in limit, the “accrued-at-normal” limit, and the maximum limit (see fig. 2). In these cases, the calculation of an estimated benefit is more complicated. PBGC does not systematically track the number of participants affected by the limits on guaranteed benefits or how much these limits affect benefit amounts; however, PBGC has conducted two studies on the impact of these limits in a sample of large plans. The first study, issued in 1999, found 5.5 percent of participants were affected by the limits; and the second study, issued in 2008, found that 15.9 percent were affected. Following the termination of their plans, those who are already retired may continue to receive their same plan benefit amount as an estimated benefit for several months—or even years—before the estimate is adjusted to reflect the legal limits on guaranteed benefits. When plans are terminated at the sponsor’s request as distress terminations, the sponsors are required to impose these limits themselves so that participants’ benefits are reduced as of the date of termination. However, when plans are terminated involuntarily, there can sometimes be lengthy delays before PBGC reduces estimated benefits to reflect these limits. Not only must PBGC estimate the possible impact of applying the guarantee limits to the participant’s benefit, PBGC must also estimate whether there might be sufficient plan assets or recoveries of company assets to pay all or part of the nonguaranteed portion of the participant’s benefit. 
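The estimation problem described above can be sketched in simplified form: cap the plan benefit at a statutory maximum, then add whatever share of the nonguaranteed portion plan assets and recoveries can cover. This deliberately omits the phase-in and accrued-at-normal limits, and every dollar figure and the funded fraction below are hypothetical; actual limit calculations under ERISA are far more involved.

```python
def estimated_benefit(plan_benefit, max_guarantee, funded_fraction):
    """Simplified PBGC-style benefit estimate (illustration only).

    guaranteed: the plan benefit capped at a statutory maximum
    (phase-in and accrued-at-normal limits omitted).
    nonguaranteed: the portion above the cap, payable only to the
    extent plan assets and recoveries cover it (funded_fraction).
    """
    guaranteed = min(plan_benefit, max_guarantee)
    nonguaranteed = max(plan_benefit - max_guarantee, 0.0)
    return guaranteed + funded_fraction * nonguaranteed

# A hypothetical retiree with a $5,000/month plan benefit, a
# $4,500/month maximum guarantee, and assets covering 40 percent of
# nonguaranteed amounts:
print(estimated_benefit(5000.0, 4500.0, 0.40))  # 4700.0
```

The sketch shows why estimates are hard to pin down early: until the benefit determination process establishes the funded fraction, the payable amount for this retiree could be anywhere between $4,500 and $5,000 per month.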
According to PBGC officials, when it is unclear how much a plan’s assets or recoveries will be able to contribute toward the nonguaranteed portion of a retiree’s benefit, it can be difficult to calculate an accurate benefit amount until the benefit determination process is complete. We found cases where estimated benefits were adjusted within 9 months of termination, while in other cases, more than 6 years elapsed before estimated benefits were adjusted.

Finalized Benefit Amounts

Once the benefit determination process is complete, PBGC notifies each participant of the final benefit amount with a “benefit determination letter.” From the time of its initial contact with plan participants until the benefit determination process is complete, PBGC generally does not communicate with participants. In some cases, this period can stretch into years. Some of the pension advocacy groups and union representatives we spoke with said that these long periods without communication are problematic for participants for several reasons. For example, retirees whose benefits are subject to the guarantee limits but who continue to receive their higher plan-level benefits for long periods of time may come to depend on these higher amounts and believe that this payment level is permanent. They are surprised when—years later—their benefits are suddenly reduced. Even for participants who are not yet receiving benefits, the lack of communication about the likely amount of their final benefits makes it difficult to plan for retirement. In addition, PBGC’s benefit determination letters generally provide only limited explanations for why the amount may be different from the amount provided under their plan. In complex plans, when benefit calculations are complicated, the letters often do not adequately explain why benefits are being reduced. Although benefit statements are generally attached, the logic and math involved can be difficult even for pension experts. 
Some pension advocates and union representatives we spoke with said that they found the explanations in these letters to be too vague and generic, and that the letters did not provide enough information specific to the individual’s circumstances to be helpful. At the same time, they were generally sympathetic to the difficulty of communicating such complicated information. As one advocate acknowledged, for the letters to be accurate, they have to be complicated; this may just be “the nature of the beast.” PBGC officials have taken steps to shorten the benefit determination process, although their initiatives have focused on ways to expedite processing of straightforward cases instead of the more difficult cases prone to delays. PBGC has also developed more than 500 letter formats—in both English and Spanish—to address the myriad of situations that may arise in the benefit determination process. Nevertheless, PBGC officials acknowledged that their standard letter formats may not always meet the needs of participants, especially those with complex plans and complicated benefit calculations. PBGC recently undertook a project to review and update its letters to better meet participant needs. The vast majority of participants in terminated plans are not affected by overpayments or PBGC’s recoupment process. Overpayments generally occur when a retiree receives estimated benefits while PBGC is in the process of making benefit determinations and the final benefit amount is less than the estimated benefit amount. However, we found that of the 1,057,272 participants in plans terminated and trusteed during fiscal years 2000 through 2008, more than half were not yet retired and, therefore, did not receive estimated benefits before the benefit determination process was complete. Moreover, for most who were retired, the estimated benefit amount received did not change when finalized. 
As shown in figure 3, of the 6.5 percent with benefits that did change when finalized, about half received a benefit amount that was greater, and half received a benefit amount that was less (about 3 percent of total participants in these plans, overall). In cases with a final benefit greater than the estimated amount, retirees are likely due a backpayment for having been underpaid, which PBGC repays in a lump sum, with interest. In cases with a final benefit that is less, the retirees are likely to have received an overpayment, which they then must repay to PBGC, with no added interest. Overpayments can occur for two basic reasons: (1) there is a period of time when the retiree’s estimated benefit has not yet been reduced to reflect applicable limits; and (2) the retiree’s estimated benefit is adjusted to reflect applicable limits, but the estimate is still greater than the benefit amount that is ultimately determined to be correct. In general, the longer the delay before a retiree’s estimated benefit is adjusted to reflect the correct amount, the larger the overpayment, and the greater the amount that will need to be recouped from future monthly benefit payments. When an overpayment occurs, retirees typically repay the amount owed by having their monthly benefits reduced by some fraction until the debt is repaid. According to PBGC data, 22,623 participants in plans terminated and trusteed during fiscal years 2000 through 2008 (2.1 percent of the total) were subject to such recoupment. The total overpayment amounts varied widely—from less than $1 to more than $150,000—but our analysis of PBGC data suggests that most owed less than $3,000. Since in most cases PBGC recoups overpayments by reducing a participant’s final benefit by no more than 10 percent each month, recoupment is amortized over many years and the impact on the participant’s benefit is limited. 
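The recoupment arithmetic described above can be sketched in a few lines. This is a minimal illustration of the stated rule (monthly reductions capped at 10 percent of the final benefit, no added interest), not PBGC's actual calculation; the dollar figures in the example are assumptions, not data from the report.

```python
from math import ceil

def recoupment_schedule(overpayment, monthly_benefit, cap_fraction=0.10):
    """Illustrative sketch: repay an interest-free overpayment by reducing
    each monthly benefit payment by at most cap_fraction of the benefit.
    Returns the monthly reduction and the number of months to repay."""
    reduction = monthly_benefit * cap_fraction
    months = ceil(overpayment / reduction)
    return round(reduction, 2), months

# Illustrative figures: a $3,000 overpayment against a $500 final monthly
# benefit would be recouped at $50 a month over 60 months (5 years).
reduction, months = recoupment_schedule(3000, 500)
```

Because the reduction is capped and the debt carries no interest, even a multi-thousand-dollar overpayment translates into a modest cut to each monthly payment, spread over years, consistent with the report's observation that the impact on any single payment is limited.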
Among affected individuals, the median benefit reduction due to recoupment was about $16 a month, or roughly 3 percent of the monthly payment amount. The effect of receiving an overpayment of estimated benefits on one retiree’s monthly payment is illustrated in figure 4. The total amount of this retiree’s overpayment was $5,600. His monthly payment was ultimately reduced by nearly one-half, but this was primarily due to the application of the guarantee limits. The amount of the benefit reduction for recoupment of the overpayment is $38 per month, to be paid until 6/1/2020. Participants are warned at the beginning of the process that their benefits may be reduced due to the legal limits on guaranteed benefits, and retirees are notified of possible overpayments when they begin to receive estimated payments. However, these warnings may not have the same meaning for participants when discussed in general terms as when they later receive notices concerning their specific benefit amounts. It can still come as a shock when—perhaps years later—they receive a final benefit determination letter with this news. Their frustration may be compounded if they fail to understand the explanations provided in the benefit determination letters. Some pension advocates and union representatives we spoke with said that this often happens in complex cases involving large benefit reductions. They noted that they did not think most participants would be able to understand the accompanying benefit statements without additional information and assistance. In the participant files we reviewed, the benefit statements that accompanied the letters ranged in length from 2 to 8 pages. In some cases, there were as many as 20 to 30 different line items that required making comparisons between the items to understand the logic of the calculations. Participants may appeal the results of the benefit determination process within 45 days of receiving a final benefit determination. 
Appeals are accepted if they raise a question about how the plan was interpreted, how the law was interpreted, or the practices of the plan’s sponsor, but not if they are based only on hardship. Although some appellants have successfully used the appeals process to increase their benefits, less than 20 percent of appeals docketed since fiscal year 2003 have resulted in appellants receiving higher benefit amounts. We found that a lack of understanding on the part of participants about how their benefits are calculated may engender unnecessary appeals, and that PBGC is not readily providing key information that would be helpful to participants in deciding whether or not to pursue an appeal. Participants may request hardship waivers for overpayments, but only in cases that do not involve an ongoing payment. PBGC policy stipulates that in cases with an ongoing payment, recoupment of an overpayment may not be waived unless the monthly reduction would be less than $5. By comparison, federal agencies such as the Social Security Administration and the Office of Personnel Management generally pursue repayment at a faster rate with larger reductions to benefits when recouping overpayments, but their policies also give greater prominence to waivers. To address the concerns of workers and retirees in terminated plans who stand to lose as much as one-half or more of their long-anticipated retirement income, and who will likely have to make painful financial adjustments, PBGC needs a more strategic approach for processing complex plans prone to delays and overpayments. The failure to communicate more often and clearly with participants awaiting a final determination can be disconcerting—especially when participants receive the news that their final determination is “surprisingly” less than they anticipated, or when retirees learn that the estimated interim benefit they had been receiving was too high and that they owe money. 
More frequent and clearer communication with plan participants, including more timely adjustments to estimated benefits, more information about how their benefits are calculated, and where to find help if they wish to appeal, would better manage expectations, help people plan for their future, avoid unnecessary appeals, and earn good will during a trying time for all. In our recently issued report, we recommended that PBGC develop a better strategy for processing complex plans in order to reduce delays, minimize overpayments, improve communication with participants, and make the appeals process more accessible. After reviewing the draft report, PBGC generally agreed with our recommendations, noting the steps it would take to address GAO’s concerns. For example, PBGC said that it had started to track and monitor tasks associated with processing large, complex plans, and would continue to look for other ways to improve its processes. A complete discussion of our recommendations, PBGC’s comments, and our evaluation is provided in our recently issued report. As PBGC’s financial challenges continue to mount and dramatic increases to PBGC’s workload appear imminent, improvements to PBGC’s processes are urgently needed. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have. For further information regarding this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Blake L. Ainsworth (Assistant Director), Margie K. Shields, Kristen W. Jones, James Bennett, Susan C. Bernstein, and Craig W. Winslow. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Under the single-employer insurance program, the Pension Benefit Guaranty Corporation (PBGC) may become the trustee of underfunded plans that are terminated and assume responsibility for paying benefits to participants as they become due, up to certain legal limits. From its inception in 1974 through the end of fiscal year 2008, PBGC has terminated and trusteed a total of 3,860 single-employer plans covering some 1.2 million workers and retirees. Since 2008, the economic downturn has brought a new influx of pension plan terminations to PBGC, and more are expected to follow. The committee asked GAO to discuss our recent work on PBGC. Specifically, this testimony describes: (1) PBGC's process for determining the amount of benefits to be paid; and (2) PBGC's recoupment process when the estimated benefit provided is too high and a retiree receives an overpayment that must be repaid. To address these objectives, GAO relied primarily on a recent report titled Pension Benefit Guaranty Corporation: More Strategic Approach Needed for Processing Complex Plans Prone to Delays and Overpayments (GAO-09-716, Aug. 2009). In that report, GAO made numerous recommendations. PBGC generally agreed and is taking steps to address the concerns raised. No new recommendations are being made in this testimony. Most participants must wait about 3 years for PBGC to complete the benefit determination process and provide their finalized benefit amounts, but the vast majority are not affected by overpayments or the recoupment process. Nevertheless, long delays and uncertainty over final benefit amounts make it difficult for workers to plan for retirement, and for retirees who may have come to depend on a certain level of monthly income. 
During the benefit determination process, key points of contact with workers and retirees include: (1) Initial notification: PBGC's first communication with participants is generally a letter informing them that their pension plan has been terminated and that PBGC has become the plan trustee. (2) Estimated benefits: For retirees, PBGC continues payments after plan termination, but adjusts the amounts to reflect limits set by law. These payments are based on estimates, so overpayments can occur. (3) Finalized benefit amounts: Once the benefit determination process is complete, PBGC notifies each participant of the final benefit amount through a "benefit determination letter." A small percentage of participants have incurred overpayments to be repaid through the recoupment process. But for those affected, the news can still come as a shock, especially when several years have elapsed before their benefits were reduced to comply with legal limits. Their frustration may be compounded if they cannot understand the explanations provided by PBGC. As the influx of large, complex plan terminations continues, improvements in PBGC's processes are urgently needed.
|
FPS faces a number of challenges that hamper its ability to protect government employees and the public in federal facilities. For example, these challenges include (1) developing a risk management framework, (2) developing a human capital plan, and (3) improving oversight of its contract security guard program. In our June 2008 report, we found that in protecting federal facilities, FPS does not use a risk management approach that links threats and vulnerabilities to resource requirements. We have stated that without a risk management approach that identifies threats and vulnerabilities and the resources required to achieve FPS’s security goals, there is little assurance that programs will be prioritized and resources will be allocated to address existing and potential security threats in an efficient and effective manner. While FPS has conducted risk-related activities such as building security assessments (BSAs), we have reported several concerns with the Facilities Securities Risk Management system FPS currently uses to conduct these assessments. First, it does not allow FPS to compare risks from building to building so that security improvements to buildings can be prioritized across GSA’s portfolio. Second, current risk assessments need to be categorized more precisely. According to FPS, too many BSAs are categorized as high or low risk, which does not allow for a refined prioritization of security improvements. Third, the system does not allow for tracking the implementation status of security recommendations based on assessments. BSAs are the core component of FPS’s physical security mission. However, ensuring the quality and timeliness of them is an area in which FPS continues to face challenges. Many law enforcement security officers (LESOs) in the regions we visited stated that they do not have enough time to complete BSAs. 
For example, while FPS officials have stated that BSAs for level IV facilities should take between 2 and 4 weeks, several LESOs reported having only 1 or 2 days to complete assessments for their buildings, in part, because of pressure from supervisors to complete BSAs as quickly as possible. Some regional supervisors have also found problems with the accuracy of BSAs. One regional supervisor reported that an inspector was repeatedly counseled and required to redo BSAs when supervisors found he was copying and pasting from previous assessments. Similarly, one regional supervisor stated that in the course of reviewing a BSA for an address he had personally visited, he realized that the inspector completing the BSA had not actually visited the site because the inspector referred to a large building when the actual site was a vacant plot of land owned by GSA. Moreover, some GSA and FPS officials have stated that LESOs lack the training and physical security expertise to prepare BSAs according to the standards. Currently, LESOs receive instructions on how to complete BSAs as part of a 4-week course at the Federal Law Enforcement Training Center’s Physical Security Training Program. However, many LESOs and supervisors in the regions we visited stated that this training is insufficient and that refresher training is necessary to keep LESOs informed about emerging technology, but that this refresher training has not been provided in recent years. Regional GSA officials also stated that they believe the physical security training provided to LESOs is inadequate and that it has affected the quality of the BSAs they receive. Further complicating FPS’s ability to protect federal facilities is the building security committee structure. Building Security Committees (BSC) are composed of representatives from each tenant agency who generally are not security professionals but have responsibility for approving the countermeasures FPS recommends. 
However, in some of the facilities that we visited, security countermeasures were not implemented because BSC members could not agree on what countermeasures to implement or were unable to obtain funding from their agencies. For example, an FPS official in a major metropolitan city stated that over the last 4 years LESOs have repeatedly recommended 24-hour contract guard coverage at one high-risk building located in a high-crime area, but the BSC has not been able to obtain approval from all its members. In addition, FPS faces challenges in ensuring that its fee-based funding structure accounts for the varying levels of risk and types of services provided at federal facilities. FPS funds its operations through security fees charged to tenant agencies. However, FPS’s basic security fee, which funds most of its operations, does not account for the risk faced by specific buildings, the level of service provided, or the cost of providing services, raising questions about equity. FPS charges federal agencies the same basic security fee regardless of the perceived threat to a particular building or agency. In fiscal year 2009, FPS is charging 66 cents per square foot for basic security. Although FPS categorizes buildings according to security levels based on its assessment of each building’s risk and size, this assessment does not affect the security fee FPS charges. For example, level I facilities typically face less risk because they are generally small storefront-type operations with a low level of public contact, such as a small post office or Social Security Administration office. However, these facilities are charged the same basic security fee of 66 cents per square foot as a level IV facility that has a high volume of public contact and may contain high-risk law enforcement and intelligence agencies and highly sensitive government records. 
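The equity concern is visible in the fee formula itself. The following sketch applies the flat fiscal year 2009 rate described above; the square footage figures are assumptions chosen for illustration, and the function is a simplified model of the flat-rate structure, not FPS's billing system.

```python
def basic_security_fee(square_feet, rate_cents_per_sq_ft=66):
    """Flat per-square-foot basic security fee (fiscal year 2009 rate of
    66 cents), independent of the building's security level or risk.
    Computed in cents to avoid floating-point rounding."""
    return square_feet * rate_cents_per_sq_ft / 100

# Assumed sizes: a 10,000 sq ft level I storefront and a 10,000 sq ft
# level IV facility pay identical fees under the flat-rate structure.
level_1_fee = basic_security_fee(10_000)  # low risk, little public contact
level_4_fee = basic_security_fee(10_000)  # high risk, high public contact
```

Because risk level never enters the formula, two facilities of equal size always pay the same amount, which is the inequity the report describes.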
FPS’s basic security rate has raised questions about equity because federal agencies are required to pay the fee regardless of the level of service FPS provides or the cost of providing the service. For instance, in some of the regions we visited, FPS officials described situations where staff are stationed hundreds of miles from buildings under FPS’s responsibility, with many of these buildings rarely receiving services from FPS staff and relying mostly on local law enforcement agencies for law enforcement services. However, FPS charges these tenant agencies the same basic security fees as buildings in major metropolitan areas where numerous FPS police officers and LESOs are stationed and are available to provide security services. Consequently, FPS’s cost of providing services is not reflected in its basic security charges. We also have reported that basing government fees on the cost of providing a service promotes equity, especially when the cost of providing the service differs significantly among different users, as is the case with FPS. In our July 2008 report, we recommended that FPS improve its use of the fee-based system by developing a method to accurately account for the cost of providing security services to tenant agencies and ensuring that its fee structure takes into consideration the varying levels of risk and service provided at GSA facilities. While DHS agreed with this recommendation, FPS has not fully implemented it. In our July 2009 report, we reported that FPS does not have a strategic human capital plan to guide its current and future workforce planning efforts. Our work has shown that a strategic human capital plan addresses two critical needs: It (1) aligns an organization’s human capital program with its current and emerging mission and programmatic goals, and (2) develops long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. 
In 2007, FPS took steps toward developing a Workforce Transition Plan to reflect its decision to move to a LESO-based workforce and reduce its workforce to about 950 employees. However, in 2008, FPS discontinued this plan because the objective of the plan—to reduce FPS staff to 950 to meet the President’s Fiscal Year 2008 Budget—was no longer relevant because of the congressional mandate in its Fiscal Year 2008 Consolidated Appropriations Act to increase its workforce to 1,200 employees. FPS subsequently identified steps it needed to take in response to the mandate. However, we found that these steps do not include developing strategies for determining agency staffing needs, identifying gaps in workforce critical skills and competencies, developing strategies for use of human capital flexibilities, or strategies for retention and succession planning. Moreover, we found FPS’s headquarters does not collect data on its workforce’s knowledge, skills, and abilities. Consequently, FPS cannot determine what its optimal staffing levels should be or identify gaps in its workforce needs and determine how to modify its workforce planning strategies to fill these gaps. Effective workforce planning requires consistent agencywide data on the skills needed to achieve current and future programmatic goals and objectives. Without centralized or standardized data on its workforce, it is unclear how FPS can engage in short- and long-term strategic workforce planning. Finally, FPS’s human capital challenges may be further exacerbated by a proposal in the President’s 2010 budget to move FPS from Immigration and Customs Enforcement to the National Protection and Programs Directorate within DHS. If the move is approved, it is unclear which agency will perform the human capital function for FPS, or how the move will affect FPS’s operational and workforce needs. 
We also recommended that FPS take steps to develop a strategic human capital plan to manage its current and future workforce needs. FPS concurred with our recommendation. FPS’s contract guards are the most visible component of FPS’s operations as well as the public’s first contact with FPS when entering a federal facility. Moreover, FPS relies heavily on its guards and considers them to be the agency’s “eyes and ears” while performing their duties. However, as we testified at a July 2009 congressional hearing, FPS does not fully ensure that its guards have the training and certifications required to be deployed to a federal facility. While FPS requires that all prospective guards complete approximately 128 hours of training, including 8 hours of x-ray and magnetometer training, FPS was not providing some of its guards with all of the required training in the six regions we visited. For example, in one region, FPS has not provided the required 8 hours of x-ray or magnetometer training to its 1,500 guards since 2004. X-ray and magnetometer training is important because the majority of the guards are primarily responsible for using this equipment to monitor and control access points at federal facilities. According to FPS officials, the 1,500 guards were not provided the required x-ray or magnetometer training because the region does not have employees who are qualified or have the time to conduct the training. Nonetheless, these guards continue to control access points at federal facilities in this region. In the absence of the x-ray and magnetometer training, one contractor in the region said that it is relying on veteran guards who have experience operating these machines to provide some “on-the-job” training to new guards. 
Moreover, in the other five regions we visited where FPS is providing the x-ray and magnetometer training, some guards told us that they believe the training, which is computer based, is insufficient because it is not conducted on the actual equipment located at the federal facility. Lapses and weaknesses in FPS’s x-ray and magnetometer training have contributed to several incidents at federal facilities in which the guards were negligent in carrying out their responsibilities. For example, at a level IV federal facility in a major metropolitan area, an infant in a carrier was sent through the x-ray machine. Specifically, according to an FPS official in that region, a woman with her infant in a carrier attempted to enter the facility, which has child care services. While retrieving her identification, the woman placed the carrier on the x-ray machine. Because the guard was not paying attention and the machine’s safety features had been disabled, the infant in the carrier was sent through the x-ray machine. X-ray machines are hazardous because of the potential radiation exposure. FPS investigated the incident and dismissed the guard. However, the guard subsequently sued FPS for not providing the required x-ray training. The guard won the suit because FPS could not produce any documentation to show that the guard had received the training, according to an FPS official. In addition, FPS officials from that region could not tell us whether the x-ray machine’s safety features had been repaired. Moreover, FPS’s primary system—Contract Guard Employment Requirements Tracking System (CERTS)—for monitoring and verifying whether guards have the training and certifications required to stand post at federal facilities is not fully reliable. 
We reviewed training and certification data for 663 randomly selected guards in 6 of FPS’s 11 regions maintained either in CERTS, which is the agency’s primary system for tracking guard training and certifications, databases maintained by some regions, or contractor information. We found that 62 percent, or 411 of the 663 guards who were deployed to a federal facility had at least one expired certification, including, for example, firearms qualification, background investigation, domestic violence declaration, or CPR/First Aid training certification. Without domestic violence declaration certificates, guards are not permitted to carry a firearm. In addition, not having a fully reliable system to better track whether training has occurred may have contributed to a situation in which a contractor allegedly falsified training records. In 2007, FPS was not aware that a contractor who was responsible for providing guard service at several level IV facilities in a major metropolitan area had allegedly falsified training records until it was notified by an employee of the company. According to FPS’s affidavit, the contractor allegedly repeatedly self-certified to FPS that its guards had satisfied CPR and First Aid training, as well as the contractually required bi-annual recertification training, although the contractor knew that the guards had not completed the required training and were not qualified to stand post at federal facilities. According to FPS’s affidavit, in exchange for a $100 bribe, contractor officials provided a security guard with certificates of completion for CPR and First Aid. The case is currently being litigated in U.S. District Court. FPS has limited assurance that its 15,000 guards are complying with post orders once they are deployed to federal facilities. At each guard post, FPS maintains a book, referred to as post orders, that describes the duties that guards are to perform while on duty. 
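The kind of certification check discussed above (flagging any guard with at least one lapsed certification before deployment, as a tracking system such as CERTS is meant to support) can be sketched as follows. The certification names and dates are illustrative assumptions, not FPS data.

```python
from datetime import date

def expired_certifications(certs, as_of):
    """Return the names of certifications that have lapsed as of `as_of`.
    `certs` maps certification name -> expiration date."""
    return [name for name, expires in certs.items() if expires < as_of]

# Illustrative guard record (assumed names and dates, not FPS data).
guard_certs = {
    "firearms qualification": date(2009, 1, 15),
    "background investigation": date(2011, 6, 30),
    "CPR/First Aid": date(2008, 11, 1),
}
lapsed = expired_certifications(guard_certs, as_of=date(2009, 8, 1))
# Any non-empty result means the guard should not be deployed to a post.
```

The point of the sketch is that the check itself is trivial once the data are reliable; the weaknesses GAO found lay in the completeness and accuracy of the underlying records, not in the difficulty of the comparison.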
According to post orders, guards have many duties, including access and egress control, operation of security equipment, such as x-ray machines and magnetometers, detecting, observing and reporting violations of post regulations, and answering general questions and providing directions to visitors and building tenants, among others. We found that in the 6 regions we visited, guard inspections are typically completed by FPS during regular business hours and in cities where FPS has a field office. In most FPS regions, FPS personnel are on duty only during regular business hours and, according to FPS, LESOs are not authorized overtime to perform guard inspections during night shifts or on weekends. However, on the few occasions when LESOs complete guard inspections at night or on their own time, FPS has found instances of guards not complying with post orders. For example, at a level IV facility, an armed guard was found asleep at his post after taking the prescription painkiller Percocet during the night shift. FPS’s guard manual states that guards are not permitted to sleep or use any drugs (prescription or non-prescription) that may impair the guard’s ability to perform duties. Finally, we identified substantial security vulnerabilities related to FPS’s guard program. Each time they tried, our investigators successfully passed undetected through security checkpoints monitored by FPS guards, with the components for an IED concealed on their persons at 10 level IV facilities in four cities in major metropolitan areas. The specific components for this device, items used to conceal the device components, and the methods of concealment that we used during our covert testing are classified, and thus are not discussed in this testimony. Of the 10 level IV facilities we penetrated, 8 were government owned and 2 were leased facilities. The facilities included field offices of a U.S. Senator and U.S. 
Representative as well as agencies of the Departments of Homeland Security, Transportation, Health and Human Services, Justice, and State, among others. The two leased facilities did not have any guards at the access control point at the time of our testing. Using publicly available information, our investigators identified a type of device that a terrorist could use to cause damage to a federal facility and threaten the safety of federal workers and the general public. The device was an IED made up of two parts—a liquid explosive and a low-yield detonator—and included a variety of materials not typically brought into a federal facility by employees or the public. Although the detonator itself could function as an IED, investigators determined that it could also be used to set off a liquid explosive and cause significantly more damage. To ensure safety during this testing, we took precautions so that the IED would not explode. For example, we lowered the concentration level of the material. To gain entry into each of the 10 level IV facilities, our investigators showed photo identification (state driver’s license) and walked through the magnetometer machines without incident. The investigators also placed their briefcases with the IED material on the conveyor belt of the x-ray machine, but the guards detected nothing. Furthermore, our investigators were not subjected to any secondary searches by the guards that might have revealed the IED material that we brought into the facilities. At security checkpoints at 3 of the 10 facilities, our investigators noticed that the guard was not looking at the x-ray screen as some of the IED components passed through the machine. A guard questioned an item in the briefcase at one of the 10 facilities but the materials were subsequently allowed through the x-ray machines. At each facility, once past the guard screening checkpoint, our investigators proceeded to a restroom and assembled the IED. 
At some of the facilities, the restrooms were locked. Our investigators gained access by asking employees to let them in. With the IED completely assembled in a briefcase, our investigators walked freely around several floors of the facilities and into various executive and legislative branch offices, as described above. Despite increased awareness of security vulnerabilities at federal facilities, recent FPS penetration testing—similar to the covert testing we conducted in May 2009—showed that weaknesses in FPS’s contract guard training continue to exist. In August 2009, we accompanied FPS on a test of security countermeasures at a level IV facility. During these tests, FPS agents placed a bag on the x-ray machine belt containing a fake gun and knife. The guard failed to identify the gun and knife on the x-ray screen and the undercover FPS official was able to retrieve his bag and proceed to the check-in desk without incident. During a second test, a knife was hidden on an FPS officer. During the test, the magnetometer detected the knife, as did the hand wand, but the guard failed to locate the knife and the FPS officer was able to gain access to the facility. According to the FPS officer, the guards who failed the test had not been provided the required x-ray and magnetometer training. Upon further investigation, only 2 of the 11 guards at the facility had the required x-ray and magnetometer training. However, FPS personnel in its mobile command vehicle stated that the 11 guards had all the proper certifications and training to stand post. It was unclear at the time, and in the after action report, whether untrained guards were allowed to continue operating the x-ray and magnetometer machines at the facilities or if FPS’s LESOs stood post until properly trained guards arrived on site. 
While FPS has taken some actions to improve its ability to protect federal facilities, it is difficult to determine the extent to which these actions address its challenges because most of them occurred recently and have not been fully implemented. It is also important to note that most of the actions FPS has recently taken focus on improving oversight of the contract guard program and do not address the need to develop a risk management framework and a human capital plan. In response to our covert testing, FPS has taken a number of actions. For example, in July 2009, the Director of FPS instructed Regional Directors to accelerate the implementation of FPS’s requirement that two guard posts at level IV facilities be inspected weekly. FPS also required more x-ray and magnetometer training for LESOs and guards. For example, FPS recently issued an information bulletin to all LESOs and guards to provide them with information about package screening, including examples of disguised items that may not be detected by magnetometers or x-ray equipment. Moreover, FPS produced a 15-minute training video designed to provide information on bomb-component detection. According to FPS, each guard was required to read the information bulletin and watch the DVD within 30 days. However, a number of factors will make implementing and sustaining these actions difficult. First, FPS does not have adequate controls to monitor and track whether its 11 regions are completing these new requirements; thus, FPS cannot say with certainty that they are being met. According to an FPS regional official, implementing the new requirements may present a number of challenges, in part because the new directive appears to be based primarily on what works well from a headquarters or National Capital Region perspective, rather than a regional perspective that reflects local conditions and limitations in staffing resources.
In addition, another regional official estimated that his region is meeting about 10 percent of the required oversight hours, and officials in another region said they are struggling to monitor the delivery of contractor-provided training in the region. Second, according to FPS officials, FPS has not modified any of its 129 guard contracts to reflect these new requirements, and therefore the contractors are not obligated to implement them. One contractor stated that ensuring that its guards receive the additional training will be logistically challenging; for example, to avoid removing a guard from his or her post, the contractor plans to provide some of the training during guards’ 15-minute breaks. Third, FPS has not completed any workforce analysis to determine whether its current staff of about 930 law enforcement security officers will be able to effectively complete the additional inspections and provide the x-ray and magnetometer training to 15,000 guards, in addition to carrying out their current physical security and law enforcement responsibilities. Our previous work has raised questions about the wide range of responsibilities LESOs have and the quality of BSAs and guard oversight. According to the Director of FPS, while having more resources would help address the weaknesses in the guard program, the additional resources would have to be trained and thus could not be deployed immediately. In addition, as we reported in June 2008, FPS is in the process of developing a new system referred to as the Risk Assessment Management Program (RAMP). According to FPS, RAMP will be the primary tool FPS staff use to fulfill their mission and is designed to be a comprehensive, systematic, and dynamic means of capturing, accessing, storing, managing, and utilizing pertinent facility information. RAMP will replace several legacy GSA systems that FPS brought to DHS, including CERTS, the Security Tracking System, and other systems associated with the BSA program.
We are encouraged that FPS is attempting to replace some of its legacy GSA systems with a more reliable and accurate system. However, we are not sure FPS has fully addressed some issues associated with implementing RAMP. For example, we are concerned about the accuracy and reliability of the information that will be entered into RAMP. According to FPS, the agency plans to transfer data from several of its legacy systems, including CERTS, into RAMP. In July 2009, we reported on the accuracy and reliability issues associated with CERTS. FPS subsequently conducted an audit of CERTS to determine the status of its guard training and certification. However, the results of the audit showed that FPS was able to verify the status of only about 7,600 of its 15,000 guards. According to an FPS official, one of its regions did not meet the deadline for submitting data to headquarters because its data were not accurate or reliable, and therefore about 1,500 guards were not included in the audit. FPS was not able to explain why it could not verify the status of the remaining 5,900 guards. FPS expects RAMP to be fully operational in 2011; until that time, however, FPS will continue to rely on its current CERTS system or on localized databases that have proven to be inaccurate and unreliable. Finally, over the last couple of years we have completed a significant amount of work related to the challenges described above and made recommendations to address them. While DHS concurred with our recommendations, FPS has not fully implemented them. In addition, in October 2009, we plan to issue a public report on FPS key practices involving risk management, leveraging technology, and information sharing and coordination. This concludes our testimony. We are pleased to answer any questions you might have. For further information on this testimony, please contact Mark Goldstein at 202-512-2834 or by e-mail at goldsteinm@gao.gov.
Individuals making key contributions to this testimony include Tida Barakat, Jonathan Carver, Tammy Conquest, Bess Eisenstadt, Daniel Hoy, Susan Michal-Smith, and Lacy Vong. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To accomplish its mission of protecting federal facilities, the Federal Protective Service (FPS), within the Department of Homeland Security (DHS), currently has a budget of about $1 billion, about 1,200 full-time employees, and about 15,000 contract security guards. This testimony is based on completed and ongoing work for this Subcommittee and discusses (1) challenges FPS faces in protecting federal facilities and (2) how FPS's actions address these challenges. To perform this work, GAO visited FPS's 11 regions, analyzed FPS data, and interviewed FPS officials, guards, and contractors. GAO also conducted covert testing at 10 judgmentally selected level IV facilities in four cities. Because of the sensitivity of some of the information, GAO cannot identify the specific locations of the incidents discussed. A level IV facility has over 450 employees and a high volume of public contact. FPS faces challenges that hamper its ability to protect the government employees and members of the public who work in and visit federal facilities. First, as we reported in June 2008, FPS does not have a risk management framework that links threats and vulnerabilities to resource requirements. Without such a framework, FPS has little assurance that its programs will be prioritized and that resources will be allocated to address changing conditions. Second, as discussed in our July 2009 report, FPS lacks a strategic human capital plan to guide its current and future workforce planning efforts. FPS does not collect data on its workforce's knowledge, skills, and abilities and therefore cannot determine its optimal staffing levels, identify gaps in its workforce, or determine how to fill those gaps. Third, as we testified at a July 2009 congressional hearing, FPS's ability to protect federal facilities is hampered by weaknesses in its contract security guard program.
GAO found that in some regions many FPS guards do not have the training and certifications required to stand post at federal facilities. For example, in one region, FPS has not provided the required 8 hours of x-ray or magnetometer training to its 1,500 guards since 2004. GAO also found that FPS does not have a fully reliable system for monitoring and verifying whether guards have the training and certifications required to stand post at federal facilities. In addition, FPS has limited assurance that guards perform their assigned responsibilities (post orders). Because guards were not properly trained and did not comply with post orders, GAO investigators, with the components for an improvised explosive device concealed on their persons, passed undetected through access points controlled by FPS guards at 10 of 10 level IV facilities in the four major cities where GAO conducted covert tests. FPS has taken some actions to better protect federal facilities, but it is difficult to determine the extent to which these actions address its challenges because many of the actions are recent and have not been fully implemented. Furthermore, FPS has not fully implemented several recommendations that GAO has made over the last couple of years to address FPS's operational and funding challenges, despite the Department of Homeland Security's concurrence with the recommendations. In addition, most of FPS's actions focus on improving oversight of the contract guard program and do not address the need to develop a risk management framework or a human capital plan. To enhance oversight of its contract guard program, FPS is requiring its regions to conduct more guard inspections at level IV facilities and to provide more x-ray and magnetometer training to inspectors and guards. However, several factors make these actions difficult to implement and sustain. For example, FPS does not have a reliable system to track whether its 11 regions are completing these new requirements.
Thus, FPS cannot say with certainty that the requirements are being implemented. FPS is also developing a new information system to help it better protect federal facilities. However, FPS plans to transfer data from several of its legacy systems, which GAO found were not fully reliable or accurate, into the new system.
When the Congress established the transition assistance program in 1990, significant reductions in military force levels were expected. The law noted that many of these service personnel specialized in critical skills, such as combat arms, that would not transfer to the civilian workforce. Transition assistance, including employment and job training services, was established to help such service members make suitable educational and career choices as they readjusted to civilian life. The law directed DOL, DOD, and VA to jointly administer the program. To do so, the agencies entered into a Memorandum of Understanding (MOU), which spelled out each agency’s role in the provision of services to members of the Army, Navy, Air Force, and Marines. When the Coast Guard started to operate the transition assistance program in 1994, DOT entered into a similar agreement with VA and DOL. Each branch of the military is required to provide pre-separation counseling to all military personnel no later than 90 days prior to their separation from the military. Pre-separation counseling, according to the law, shall include information about education and vocational rehabilitation benefits, selective reserve options, job counseling and job search and placement information, relocation assistance services, medical and dental benefits, counseling on the effects of career change, and financial planning. The military branches are to provide space for the provision of transition services at locations with more than 500 active duty personnel. Separating service members must complete a pre-separation counseling checklist indicating that they have been informed of the services available to them, and on this checklist they are to indicate the services they wish to receive, including whether they wish to participate in the transition assistance workshop.
For locations in the United States, DOL is responsible for providing transition assistance workshops, which are generally 3-day training sessions that focus primarily on resume writing and job search strategies and include a manual with basic information on the material covered in the workshop. The MOU identifies specific workshop objectives, including preventing and reducing long-term unemployment, reducing unemployment compensation paid to veterans, and improving retention. DOL spent about $5 million in fiscal year 2001 to provide about 3,200 workshops, in addition to the funding spent on transition assistance by the military branches shown in table 1. The workshop and other transition services are to be accessible to service members two years prior to retirement and one year prior to separation. As part of the transition assistance workshop, VA is responsible for providing information on veterans’ benefits, including information on disability benefits. VA is also responsible for providing more detailed information and assistance to those service members separating or retiring due to a disability. In 1996, Congress established the Commission on Servicemembers and Veterans Transition Assistance and directed it to review programs that provide benefits and services to veterans and service members making the transition to civilian life. The Transition Commission examined pre-separation counseling and transition assistance program workshops as part of its work. Each branch of the military provides the required pre-separation counseling and offers workshops focusing on employment assistance and veterans’ benefits, although not all service members participate. In addition, disabled service members are provided detailed information on the benefits and services available to service members with disabilities as well as assistance in accessing these services.
The military branches have considerable flexibility in designing their programs, allowing them to vary the content as well as the delivery of their programs. Moreover, the priorities of the military mission can also affect delivery and access to transition assistance. All military branches provide pre-separation counseling and offer transition workshops that include employment assistance and information on veterans’ benefits. However, military branch data indicate that not all service members receive the required pre-separation counseling or participate in the workshops. As shown in table 2, in fiscal year 2001, 81 percent of service members received pre-separation counseling, and 53 percent attended a transition workshop. The transition workshop participation rates for each branch ranged from 29 percent for the Coast Guard to 72 percent for the Navy. These participation rates may not be reliable because some branches’ data include service members who participated but did not separate. To the extent that this is occurring, the percentages on participation are overstated. In addition to pre-separation counseling and the transition assistance workshops, the military branches may provide optional services such as (1) help with resume writing, (2) career counseling, (3) training in job interview skills and strategies, (4) stress management, (5) job fairs, and (6) access to automated job listings. Service members separating with a disability are offered more detailed information relevant to their unique needs. For these service members, VA offers detailed information on VA disability-related benefits such as disability compensation, health care and vocational rehabilitation, as well as assistance in accessing them. These efforts are considered to be a part of the disability transition assistance program. 
Because the military experiences of the members in each branch are different, some branches tailor the content of transition services to better meet the needs of their service members. For example, the Army believes that many of its separating soldiers need additional employment-related assistance and more individualized attention. A large number of the Army’s separating service members have held combat-related jobs, which provide skills that have limited transferability to jobs in the civilian labor market. Further, many of these soldiers are young and have little civilian work experience. Therefore, the Army supplements DOD transition assistance funds to provide additional one-on-one counseling and interactive job training and assistance. The Coast Guard also tailors the content of its program to meet what it believes are the unique needs of its service members. According to the program manager, many separating members of the Coast Guard have unique classifications, like Marine Science Technician, a job category not easily transferable to the civilian labor market. In an attempt to provide its members with transition assistance workshops that will help them find appropriate civilian employment, the Coast Guard hires contractors to facilitate its transition assistance workshops. The contractor staff is trained along with the Coast Guard’s transition assistance coordinators to help service members identify their most marketable skills and communicate them in a way that will make them successful in the civilian labor market. The military branches also have different methods of delivering both pre-separation counseling and workshops. For example, some military branches provide pre-separation counseling in individual sessions prior to attendance at a transition workshop, while others may provide group counseling. In addition, the length of transition workshops can vary by location.
While the transition assistance workshop was designed to take 3 days, the schedule of workshops for fiscal year 2002 shows that the actual length ranges from 1 to 5 days, depending on the local arrangements between military installations and DOL. For example, according to the program manager, the Navy added a day to the 3-day workshop to provide more detailed information on military benefits. Further, program officials told us that at some locations different transition assistance workshops are provided to separating and retiring military members. For example, at one location we visited, separatees had a 2-day transition assistance workshop and retirees had a 3-day workshop. Transition assistance program managers told us that workshops can be offered in a variety of settings. For example, at one location, the workshop was offered in a religious education building. At another, it was offered in space rented at a nearby hotel. At several locations we visited, class sizes greatly exceeded the maximum of 50 recommended in the MOU. At one location, to accommodate the large number of separating service members within the number of workshops scheduled, a single workshop had 300 participants. Other adjustments to the delivery of transition assistance are potentially more far-reaching. For example, to create a more comprehensive approach to career planning, the Air Force is integrating transition assistance into the role of a work life consultant who works with airmen throughout their military careers. This individual serves as a focal point for information on all personnel matters and helps with paperwork for anticipated separations and retirements. The Navy is providing transition assistance services earlier in a sailor’s military career than the law requires to help sailors more easily translate their military experience into the civilian labor force when they do separate.
The Navy has also broadened the mission of its transition assistance program to increase retention by providing professional career development resources throughout the service member’s military career. Providing earlier services responds to comments from service members that they would benefit from beginning the transition process sooner than 90 days before separation. The delivery of transition assistance for disabled service members appears to be more consistent across the branches. In the past, disabled service members were primarily offered separate disability transition assistance briefings supplemented by individualized assistance upon request. The current practice is generally to provide a basic discussion of disability benefits and services as part of the transition assistance workshop provided to all participants, supplemented by one-on-one sessions with disabled service members, upon request. However, some locations still offer a separate disability transition assistance briefing. In addition, as part of an initiative in two regions, VA provides special 3- to 5-day workshops focusing on the unique needs of disabled veterans seeking employment. Service members also experience variation in access to transition assistance based on their specific circumstances. Service members who are deployed, stationed in remote locations, or engaged in essential military duties may access a modified version of transition assistance services. For example, the Marines place a transition assistance specialist on some ships and give priority to those who will be separating from the military within 90 days or less. The specialist offers a condensed version of the transition assistance workshop and will meet with Marines during their free time, which could be any time of the day or night. Service members stationed in remote locations also receive modified versions of transition assistance.
For example, a significant percentage of service members in the Coast Guard tend to be stationed in remote areas far from where the transition assistance workshops are offered. To address their needs for transition assistance, the Coast Guard sends a videotape accompanied by the DOL workbook. The videotape presents general information on how to conduct a job search, and the workbook covers the topics offered in the transition assistance workshops. The Army also mails materials to soldiers in remote locations and follows up with distance counseling by telephone, fax, or e-mail. In addition, the Army periodically sends transition assistance specialists to remote sites with small populations of soldiers. Even when service members are in locations where a range of transition services can be offered on site, the military mission and the support that supervisors have for transition services may determine the degree to which service members have access to those services. Because the military mission is always the top priority, it can be difficult for service members to be released from military duties to receive services. Service members, supervisors, commanders, and transition assistance program staff at the locations we visited told us that because of mission-related work priorities, service members may receive transition assistance later than is optimal. Several service members told us that they had to delay attending the transition workshop because of their military duties, thereby limiting their ability to use other transition services. In addition, if supervisors are not supportive of transition assistance, or if they feel that mission needs are too pressing, they may be reluctant to allow the service members under them to access the services offered. In 1994, we reported that lack of support from military commanders was one of the most frequently cited reasons for not using transition assistance.
In response, the Secretary of Defense issued a memorandum to the secretaries of the military departments and other key DOD officials underscoring the need for commander support of transition assistance for all service members. The Marines recently made participation in a transition assistance workshop mandatory for all Marines because they recognized that service members were having difficulty being released from their military duties to attend the workshops. The Transition Commission noted that starting transition assistance earlier could provide commanders more flexibility to meet mission needs, because many service members are deployed during the last 6 months of their active duty. Reducing potential mission conflicts in this way could help increase commander support of the program, thereby resulting in increased participation. Several studies confirm participant satisfaction with transition assistance, but limited information is available about the overall effectiveness of the transition assistance program. Evaluating the effectiveness of these services is complicated by data inadequacies and methodological difficulties. For example, most of the data currently available are collected for purposes of program monitoring and are not comparable across the branches. Also, following up with service members who have separated is challenging. Moreover, the changing nature of transition assistance could result in a shift in emphasis among different goals, including recruitment and retention in addition to civilian employment. In 1994, we reported that service members and spouses we surveyed found seminars and employment assistance centers beneficial in readjusting to civilian life. They said that they learned about individualized job search techniques and other benefits available to them.
They also reported that their confidence had increased as a result of receiving these services, especially in the areas of resume preparation and job search and interview techniques. During our interviews, service members told us that the transition assistance workshop either met or exceeded their expectations. Many service members told us that they thought the resume preparation and job search and interview techniques would be the most helpful in their transition. However, some felt that the workshop was not long enough for them to complete preparing their resumes and develop their job interviewing skills. Several service members told us that they had pursued or planned to pursue additional job-related transition assistance offered at their locations. Some service members also found other transition assistance informative, such as financial planning, stress management, and VA benefit information. Service members told us, however, that earlier access to this assistance would enable them to better utilize it and smaller class sizes would allow them adequate time for questions and answers. In 1999, DOL sponsored a study to assess the attitudes and opinions of participants in the transition assistance workshop. Twenty-one focus groups of persons who had attended a transition assistance workshop in the prior month were asked about the structure and content of the workshops and the extent to which they felt their participation helped prepare them to find civilian employment. Participants generally agreed that the services they received contributed to their knowledge and confidence about transitioning to civilian life. Many participants felt that attendance in the workshop should be mandatory and that receiving the service earlier in an individual’s military career would be beneficial. 
While participants generally appear to find the assistance helpful, much less is known about the ultimate impact of transition services on employment or other outcomes, such as education and retention. Two studies conducted about 10 years ago found limited impact of transition assistance on employment. An early DOL evaluation required by the Congress assessed the impact of the pilot transition assistance program on service members who transitioned to civilian life in 1992. This study compared a sample of those who had attended a transition assistance workshop with those who had not to analyze whether transition assistance had any effect on post-military job search and employment. Although both groups were found to have similar aspirations for jobs, careers, and salaries, the results indicated little difference in the employability of those who had taken the workshop and those who had not. However, the study noted that service members who received transition assistance found jobs 3 to 7 weeks earlier than those who had not. The Army sponsored an evaluation of its Job Assistance Centers to determine whether services provided at these centers affect soldiers’ employment outcomes. A group of ex-service members who separated between October 1, 1992, and September 30, 1993, were interviewed to determine whether the job assistance services they received affected their post-transition earnings, receipt of unemployment compensation, and ratings of preparedness for the job market. The study reported that individuals who said they had received more job search assistance services, and those who indicated a greater degree of satisfaction with the services, were more likely to feel prepared for the civilian labor market and were also more likely to have some increase in earnings. However, because this study did not verify the self-reported information, the conclusions cannot be validated.
Currently, at least two branches of the military, the Army and the Navy, track the amount of unemployment compensation paid to separating service members as an indicator of program effectiveness. For example, the Army reports that the amount of unemployment compensation benefits paid to soldiers separating in fiscal year 2001 was about half that paid out in fiscal year 1994. However, Army officials concede that it would be difficult to attribute these changes to transition assistance services alone. Several factors complicate evaluating the effectiveness of human resource interventions, including the transition assistance program. First, achieving consensus on program goals is necessary to develop measurement and data collection strategies. Second, service branch data on what specific assistance service members received are necessary to compare the effects of different interventions. Third, following up periodically after separation with those who received services as well as those who did not is necessary to try to isolate the impact of transition assistance. Assessing the overall effectiveness of the transition assistance program would require agreement on what the program is trying to accomplish. When first piloted, the program's objectives included helping the military meet its personnel needs as well as helping separating service members meet their needs. However, since that time, the goals have expanded as a result of changing military needs and service member expectations. When the program was fully implemented, it dropped the retention goal and focused on providing transition assistance, coinciding with the downsizing of the military. During this time, the program focused on employment-related transition assistance. The Transition Commission noted that transition assistance needs to continually evolve to remain capable of bridging the ever-changing military and civilian environments.
Service members also seek assistance with furthering their education or obtaining vocational rehabilitation in addition to employment-related transition assistance. For example, some service members enlisted with the specific intention of returning to school at the completion of their military service rather than working right away. Moreover, the military’s personnel needs have changed from downsizing to recruiting and retaining service members. The Transition Commission reported, for example, that retention was positively affected by transition assistance because it offers a realistic view of civilian job market prospects. This may lead some service members to conclude that they need more preparation to reenter the civilian workforce and to postpone separation to gain additional skills, education, or income.
Since its inception, the Transition Assistance Program has served more than one million separating and retiring military personnel through the coordinated efforts of the Departments of Defense, Transportation, Labor (DOL), and Veterans Affairs. In fiscal year 2001, the military branches and DOL spent $47.5 million to provide transition assistance to 222,000 separating and retiring service members. Although each branch provides required preparation counseling and offers transition assistance workshops to help service members transition to civilian life, not all eligible service members receive transition assistance. Because they have considerable flexibility in designing their programs, transition assistance varies in content and delivery across the military branches. In addition, service members experienced differences in access to transition assistance depending on their unique circumstances. Isolating the impact of transition assistance on employment, education, and other outcomes is difficult because of data inadequacies and methodological challenges.
For decades, we and DOD auditors have reported that DOD has not promptly or accurately charged its appropriation accounts for all of its disbursements and collections. Instead, DOD has recorded billions of dollars in suspense and other accounts that were set up to temporarily hold disbursements and collections until the proper appropriation account could be identified. But, rather than being a temporary solution, amounts accumulated and remained in suspense for years because DOD did not routinely research and correct its records. Over time, DOD lost the ability to identify the underlying disbursement and collection transactions in suspense because they had been summarized and netted over and over. Also, in many cases the documentation necessary to properly account for the transactions was lost or destroyed.

It is important that DOD charge transactions to appropriation accounts promptly and accurately because these accounts provide the department with legal authority to incur and pay obligations for various kinds of goods and services. DOD has hundreds of current and closed appropriation accounts that were authorized by law over the years. In some ways, appropriation accounts are similar to an individual’s checking account—the funds available in DOD’s appropriation accounts must be reduced or increased as the department disburses money or receives collections that it is authorized to retain. Just as an individual who maintains multiple checking accounts must be sure that transactions are recorded to the proper account, DOD also must ensure that the proper appropriation account is charged or credited for each specific disbursement and receipt.
DOD’s failure over the years to promptly and correctly charge and credit its appropriation accounts has prevented the department and Congress from knowing whether specific appropriations were over- or underspent, whether money was spent for authorized purposes, and how much money was still available for spending in individual appropriation accounts. Many disbursements and collections remained in DOD suspense accounts well beyond the date that the associated spending authority expired and canceled. DOD’s inability to properly record its financial transactions has also created an environment conducive to fraud, waste, and mismanagement. Auditors have issued numerous reports over the years that identify specific problems related to DOD’s poor controls over its accounting for disbursements and collections. But DOD’s ability to improve its accounting has historically been hindered by its reliance on fundamentally flawed financial management systems and processes and a weak overall internal control environment.

Complex disbursement processes, missing information, and errors often combine to prevent DOD from promptly and accurately charging its appropriation accounts. In general, DOD’s disbursement process begins with military service or defense agency personnel obligating funds in specific appropriations for the procurement of various goods and services. Once the goods or services are received, DFAS personnel pay for them using electronic funds transfers (EFT), manual checks, or interagency transfers. Although the bill for goods and services received should be matched to the relevant obligation to ensure that funds are available for payment before any disbursement is made, DFAS, military service, or defense agency personnel often do not identify the correct appropriation and perform the match until after making the payment.
If the appropriation and obligation then cannot be identified based on the available information, the disbursement is recorded in a suspense account until research is performed, additional information is received, or any errors are corrected. If DFAS staff cannot determine the correct appropriation account to charge, DOD policies allow DFAS staff to request approval for charging current funds. Several military services and DOD agencies can be involved in a single disbursement, and each has differing financial policies, processes, and nonstandard, nonintegrated systems. As a result, millions of disbursement transactions must be keyed and rekeyed into the vast number of systems involved in any given DOD business process. Also, DOD disbursements must be recorded using an account coding structure that can exceed 75 digits, and this coding structure often differs by military service in terms of the type, quantity, and format of data required. The manual entry and reentry of the account code alone often results in errors and missing information about transactions. Automated system edit checks identify transaction records with invalid or missing account coding information, such as the appropriation account number or the chargeable entity, and refuse to process the faulty records. DFAS then records the problem disbursements in suspense accounts until the individual transactions can be corrected and reprocessed by the accounting systems. Other reasons for recording disbursement transactions to suspense accounts include the lack of valid obligation data, differences between DOD and Treasury disbursement records, and unsupported charges between DOD services and defense agencies.

DOD uses suspense accounts to hold several different kinds of collections until they can be properly credited to the relevant appropriation account or organization.
For example, contractors often return overpayments they received for the goods and services they provided without including sufficient information for DOD to identify which account or which service location should be credited for the reimbursement. DOD also routinely accumulates estimated payroll tax withholding amounts in suspense accounts until the payments must be transferred to the Internal Revenue Service. If the estimates are higher than actual payments, amounts can be left in suspense indefinitely. Similarly, DOD records user fees collected for various purposes, such as grazing rights and forestry products, to suspense accounts until the accumulated funds are credited to the correct appropriation account or organization. DOD has recognized that using suspense accounts for accumulating withholding taxes and user fees is not appropriate and exacerbates its problems with these accounts but has stated that system and other problems prevent establishment of proper holding accounts for these collections.

Check differences refer to differences between the summary and detail amounts reported by DOD for the paper checks it issued as well as differences with the amounts reported by banks for the paper checks that were cashed. Monthly, Treasury compares the DOD summary and detail amounts and bank discrepancy reports, identifies check issue and payment differences, and sends a report to DOD with the cumulative difference amount. While the check issue and payment differences could occur for various reasons, some of the common reasons are (1) check issue records excluded from DOD detail reports but included in DOD summary reports to Treasury, (2) erroneous check amounts reported by DOD, (3) checks paid by the bank but not reported by DOD, (4) voided checks erroneously reported by DOD as checks issued, and (5) checks dated and paid by the bank in a previous month but reported by DOD as issued in the current month.
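Conceptually, the monthly comparison Treasury performs is a per-check matching exercise. The sketch below is a simplified illustration of that idea using invented check numbers and amounts; it is not Treasury's actual process, report format, or data.

```python
# Simplified sketch (invented data, not Treasury's actual report formats)
# of comparing checks DOD reported issuing with checks the banks paid.

def check_difference(dod_issued, bank_paid):
    """Cumulative difference: bank-paid amounts minus DOD-reported amounts,
    matched by check number. Checks missing from either side count in full."""
    all_checks = set(dod_issued) | set(bank_paid)
    return sum(bank_paid.get(c, 0.0) - dod_issued.get(c, 0.0)
               for c in all_checks)

dod_issued = {"0001": 500.0, "0002": 1200.0}                # per DOD reports
bank_paid = {"0001": 500.0, "0002": 1250.0, "0003": 75.0}   # per bank reports

# A $50 amount error on check 0002 plus check 0003 paid but never reported
# by DOD combine into a single cumulative difference of $125.
diff = check_difference(dod_issued, bank_paid)
```

Because only the cumulative figure is reported, the individual causes (the amount error and the unreported check, in this invented example) cannot be recovered from the difference alone, which is why clearing these items requires research into the underlying check records.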
DOD does not record these differences in a suspense account or any other holding account. However, Treasury continues to track and report aged check differences monthly to DOD until they are cleared.

DOD recognized that it would never be able to correctly account for billions of dollars of aged, unidentifiable, and unsupportable amounts recorded in its suspense accounts or reported as check payment differences. Therefore, DOD management requested and received statutory authority to write off these problem transactions. The NDA Act authorized DOD to cancel long-standing debit and credit transactions that could not be cleared from the department’s books because DOD lacked the supporting documentation necessary to record the transactions to the correct appropriations. The legislation specified that the write-offs (1) include only suspense account disbursement and collection transactions that occurred prior to March 1, 2001, and that were recorded in suspense accounts F3875, F3880, or F3885; (2) include only check payment differences identified by Treasury for checks issued prior to October 31, 1998; (3) be supported by a written determination from the Secretary of Defense that the documentation necessary for correct recording of the transactions could not be located and that further research attempts were not in the best interest of the government; (4) be processed within 30 days of the Secretary’s written determination; and (5) be accomplished by December 2, 2004.

DOD officials estimated the value of the suspense account and check payment write-offs to be an absolute amount of nearly $35 billion, or a net amount of $629 million. However, neither of these amounts accurately represented the total value of all the individual transactions that DOD could not correctly record to appropriations and, therefore, left in suspense for years.
Many DOD accounting systems and processes routinely offset individual disbursements, collections, adjustments, and correcting entries against each other and record only the net amount in suspense accounts. Over time, amounts might even have been netted more than once. Because DOD had not developed effective tools for tracking or archiving the individual transactions that had been netted together, there was no way for DOD to know how much of the suspense amounts recorded prior to March 1, 2001, represented disbursements and collections versus how much represented adjustments and correcting entries. In order to calculate absolute values for the suspense account write-offs, DOD could only add together the already netted disbursement, collection, adjustment, and correcting amounts. Table 1 shows the net and absolute values of the suspense write-offs as calculated by DOD and illustrates how the use of net values can present an entirely different picture than the use of absolute values. While suspense account write-offs related to Army appropriations represented nearly the total of the calculated absolute values, they represented less than 30 percent of the calculated net values—far less than the net write-off amounts related to Navy appropriations. Also, amounts that have been netted and that cannot be traced back to the underlying transactions cannot be audited. 
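The effect of netting on reported suspense balances can be sketched with a few invented figures (none drawn from DOD's actual data): offsetting disbursements, collections, and correcting entries shrink the net balance while the absolute value of the activity stays large.

```python
# Illustration with invented amounts (not DOD data) of how netting
# disbursements, collections, and correcting entries in a suspense account
# understates the scale of the underlying activity.

def net_value(entries):
    """Net value: offsetting entries cancel one another."""
    return sum(entries)

def absolute_value(entries):
    """Absolute value: every entry counts at its full magnitude."""
    return sum(abs(e) for e in entries)

# Positive amounts = disbursements/charges; negative = collections/corrections.
suspense_entries = [950_000, -920_000, 430_000, -400_000, 15_000]

print(f"net:      {net_value(suspense_entries):>12,}")       # 75,000
print(f"absolute: {absolute_value(suspense_entries):>12,}")  # 2,715,000
```

Once only the 75,000 net figure is retained, the 2.7 million of underlying activity, and with it the individual transactions, can no longer be recovered, which is one illustration of why netted balances cannot be traced back or audited.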
For the nearly $34 billion of suspense write-offs related to Army appropriations, DFAS had almost no transaction-level information that could differentiate individual disbursement and collection transactions from amounts that related to (1) net reconciling adjustments that resulted from comparing monthly totals for Army records with Treasury records; (2) net cumulative monthly charges from other military services, defense agencies, or federal agencies for goods or services provided to the Army; (3) summarized suspense account activity reported by Army field sites; and (4) correcting entries from center or field staff meant to clear amounts from suspense. According to DFAS officials, the system used to account for Army appropriations had accumulated about 30 years’ worth of individual, netted, summarized, and correcting entries that could not be identified and therefore were eligible for write-off.

Unlike the accounting system used for Army, the systems used by DFAS centers to account for the other military services and the defense agencies did not accumulate billions of dollars in correcting entries that were meant to clear amounts from suspense. However, they did include significant amounts of non-transaction-level information, such as reconciling adjustments, net charges, and summarized account activity. For example, one of the write-offs processed for the Navy consisted of a single $326 million amount for which DFAS Cleveland was unable to distinguish any of the underlying individual transactions. As a result, DFAS Cleveland had no way of knowing what amounts might have been netted or summarized in order to arrive at the $326 million figure.

DOD also wrote off $14.5 million of differences between what DOD reported as its check payment amounts and what Treasury reported as check amounts cleared through the banking system. Treasury had accumulated these check payment differences and reported them to DOD monthly on its Comparison of Checks Issued reports.
Since the Treasury reports contained only the cumulative net check payment differences and DOD could not identify all of the underlying checks, as with suspense account write-offs, it was not possible to calculate an absolute value for all of the individual check errors. All of the monthly summary totals reported by Treasury for paper checks cashed during the period covered by the legislation were higher than the totals reported by DOD for paper checks issued during that period.

To manage the suspense account write-off process, DOD developed detailed guidance and review procedures that provided reasonable assurance, given the limitations in the quality of the underlying data, that the department complied with legislative requirements. Before suspense amounts were approved for write-off, multiple layers of DOD officials and internal auditors reviewed the packages submitted by the five DFAS centers. The write-off packages varied in content but generally included a certification statement from the DFAS center director, an electronic file and a narrative description of the individual amounts that made up the package, and any additional system reports or documents that demonstrated compliance with legislative limits regarding dates and accounts.

For check payment differences, DOD’s management process was less complicated—written instructions on how to submit the write-off amounts to Treasury were prepared, but there were no reviews other than those done at the DFAS centers. The check differences write-offs also met the legislative requirements except that the Secretary of Defense did not make a written determination regarding the necessity for the write-offs. The overall write-off process was not without cost to DOD, however; DOD’s lack of enforcement of proper accounting procedures and its own regulations meant that significant management and staff resources were required to prepare, support, and review the packages submitted for write-off.
DOD developed guidance for the preparation of the write-off packages and implemented a series of reviews by high-ranking DOD officials. The guidance identified different types of transactions in suspense and specified the documentation requirements for each. For example, nearly a quarter of the write-offs represented disbursement transactions for which vouchers existed, but the vouchers did not contain sufficient information for the transactions to be posted to valid lines of accounting. For this type, the DFAS center director had to certify that steps were taken to obtain the missing information to clear the transactions and that further action was not warranted. For more than half of the write-off amounts, the underlying transactions could not be identified and vouchers and supporting documentation did not exist. Guidance included requirements that this write-off type be accompanied by a written narrative from the DFAS center that described in detail the reason why amounts could not be cleared through normal processing.

DFAS centers identified amounts to be written off in various ways depending upon the systems and processes in place at each center. Using the guidance discussed above, center officials then separated the amounts into transaction types, prepared the required supporting documentation or narratives, and grouped the amounts into “packages” to be sent forward for review.

DOD’s multilayered review process served as the primary control for providing reasonable assurance that the suspense account write-offs met legislative requirements. As illustrated in figure 1, the reviews were performed sequentially by officials from the DFAS centers, the military service and defense agency FMOs, DFAS Arlington, and DFAS internal review, and by the DOD Comptroller, the Secretary of Defense’s designee. As each level of review was completed, the reviewing official was required to sign a certification statement or memorandum.
The certification was a DOD requirement to demonstrate that reviews had been performed by various management officials and all agreed that the proposed write-off amounts met the legislative requirements.

DOD’s review process was effective in identifying write-off amounts that did not appear to meet legislative requirements. DOD reviewers told us—and documentary evidence supports their claims—that additional information was requested from DFAS centers to support various questioned amounts or that packages with unsupported amounts were rejected and returned to the centers. For example, a $326 million package, consisting of a single amount supposedly representing transactions dating back to May 1992, was questioned by DFAS Arlington, DFAS internal review, and the Comptroller’s office. Because no supporting detailed transactions were identified and because the package did not clearly demonstrate that the amount had been recorded prior to March 1, 2001, the package was flagged. Reviewers contacted the originating DFAS center and requested additional documentation and explanation. The center provided the reviewers with detailed analyses demonstrating that the proposed write-off amounts had to represent transactions transferred into the center’s suspense accounts when the center was established in May 1992. Based on the additional evidence, the reviewers concluded that the proposed write-off met legislative requirements and approved the package. DOD reviewers rejected numerous proposed write-off amounts that did not comply with the legislation, including 18 of the original 116 packages submitted by the DFAS centers, often because they did not clearly support a transaction date prior to March 1, 2001.
To ensure suspense write-off amounts were recorded within 30 days of the determination by the Secretary of Defense’s designee and before the legislative deadline of December 2, 2004, DFAS center officials reviewed accounting system records and requested additional information from their staff. The Columbus, Denver, and Indianapolis DFAS centers provided us with information that demonstrated the time frames were met with a few exceptions. DFAS Cleveland and DFAS Kansas City officials told us that they met the time frames for write-offs but could not provide any supporting documentation. Officials at these centers explained that as soon as the Comptroller’s office certified each write-off package, center staff sent data files to system technicians specifying the information to be deleted from suspense account records. According to officials, once the technicians had deleted the records, they sent e-mails back to the requesting center officials confirming that they had deleted the information within the required time frames. However, center officials were unable to provide us with copies of these e-mails or the deleted files.

Although DOD did not establish a multilayered review process for check payment differences, the department did comply with legislative requirements for the write-offs with one exception—the Secretary of Defense did not provide the required written determination prior to Treasury’s recording of the write-off amounts. As specified in the legislation, DFAS centers used Treasury reports (the Treasury Comparison of Checks Issued reports) to identify check payment differences dated prior to October 31, 1998. DFAS staff reviewed available documents to determine that sufficient information was no longer available to identify the proper appropriation account. Even for very large differences, DOD’s accounting records provided no information to help explain the difference in checks issued and paid or to identify what records needed correction.
For example, the Treasury report included a single difference of almost $6 million (over 40 percent of the total write-off amount) that represented a check issued on October 31, 1991, by DFAS Columbus payable to the U.S. Treasury. DFAS Columbus was unable to locate any documentation to support the reason for the check payment, the amount of the check, or the associated appropriation.

DOD established a much abbreviated process for check payment differences write-offs. Rather than having check payment write-offs reviewed by the Comptroller’s office, DFAS Arlington, DFAS internal review, and military service and defense agency FMOs prior to submission to Treasury, DOD relied solely on DFAS center management to ensure compliance with the legislation. Our review indicated that center officials adequately documented that all amounts written off were dated prior to October 31, 1998, and were reported on the Treasury Comparison of Checks Issued report. However, DOD did not comply with the requirement in the legislation that prior to submission to Treasury, the Secretary of Defense make a written determination that DOD officials have attempted without success to locate the documentation necessary to identify which appropriation should be charged with the amount of the check and that further efforts to do so are not in the best interests of the United States.

In October 2004, after DOD had submitted all of the check payment difference write-offs to Treasury and Treasury had recorded them, DOD asked DFAS internal review to look at all the submissions and determine whether they complied with the legislation. According to a DFAS Arlington official, internal review completed its work and concluded that the check payment write-offs sent to Treasury were certified by disbursing officers, DFAS centers, and the services (either in writing or orally) prior to clearing the transactions.
The official also stated that this matter has been forwarded to the DOD Comptroller’s office for a formal determination to meet the legal requirements under the now expired law. Figure 2 below illustrates the write-off process for check payment differences.

The write-off process itself could not and did not fix DOD’s underlying problems—outdated, nonstandard, and nonintegrated financial systems and lack of enforcement of proper accounting policies and procedures—that led to the build-up of aged, unsupported suspense transactions and check payment differences. To the extent that DOD allows large aged suspense and check difference balances to recur, the department will again be required to undertake costly procedures to try to support the proper recording of those transactions or to write them off.

According to DOD officials, numerous staff members at every level were needed to prepare, support, and review the write-off packages and, in some instances, to rework previously submitted packages. For example, DOD officials told us that for the most part, the research and preparation of the write-off packages represented additional tasks that were added to the staff’s normal workload. We were told that, although staff tried to prioritize their work in order to prevent a backlog related to current suspense account balances, they could not keep up with their daily activities and current suspense account balances increased over the period. Also, several DFAS center officials told us that for much of 2003, DFAS Arlington, the Comptroller’s office, and Treasury officials tried to reach an agreement on exactly how to process the write-off amounts. Because the official guidance was not issued by DFAS Arlington until January 2004, there was a significant delay in preparing the write-off packages. Although DOD had hoped to finish the write-offs by the end of fiscal year 2004, only 24 packages had been approved by that time.
DOD had to assign additional resources to enable the remaining 71 packages to be reviewed, approved, and processed by December 2, 2004, the legislative cutoff date.

Writing off aged suspense account amounts and check payment differences did not change DOD’s reported appropriation account balances. Nor did the write-offs correct any of the over- and undercharges that may have been made to those appropriations over the years as a result of not promptly resolving suspense account transactions and check payment differences. DOD will never identify which, if any, of the aged underlying transactions in suspense would have resulted in Antideficiency Act violations had they been correctly charged. The suspense account write-offs also did not affect the reported federal cumulative budget deficit; however, the write-off of check payment differences increased the deficit by $14.5 million. The most significant result of the write-off process was to guarantee that current appropriation balances would not be required to cover the aged unrecorded transactions.

The legislated write-off of aged suspense account amounts and check payment differences did not change DOD’s current or past appropriation account balances. Because amounts in suspense and check payment differences had never been recorded to the proper appropriation accounts, DOD had over- or undercharged these appropriations. To accomplish the write-off, Treasury reclassified the aged suspense amounts that met legislative requirements from DOD-specific suspense accounts to non-agency-specific general government suspense accounts. The check payment differences, which had never been recorded in any DOD accounts, were simply “sent” to Treasury for recording in that same general government suspense account.
Although it was unlikely that DOD would ever identify individual aged transactions and the support for their proper recording, the write-off process was the final step in ensuring that the over- and undercharged DOD appropriation accounts will never be corrected.

While the write-off authority did not change or correct any DOD appropriation balances, it did mean that DOD’s current appropriations would not be used to pay for the uncharged disbursements. Generally, authorized disbursements may be made only to pay valid obligations properly chargeable to an appropriation account. If the correct appropriation and obligation cannot be identified and charged with a disbursement, DOD regulations provide that the disbursement be treated as an obligation that is chargeable against current appropriations. However, using current funding authority to cover past disbursements reduces the funds available to purchase goods and services needed to support current operations.

We found that the write-off of suspense amounts had no effect on the cumulative federal deficit. The suspense account transactions had already been charged to the federal surplus or deficit in the specific year that DOD reported the related collection and disbursement transactions to Treasury. The reclassification of suspense amounts from DOD accounts to general government suspense accounts did not affect Treasury’s previous recording of the underlying collection and disbursement transactions to the cumulative deficit. With regard to the write-off of check payment differences, according to Treasury, the surplus/deficit had not been adjusted to recognize differences between issued check amounts as reported by DOD and paid check amounts as reported by banks. Since the check payment differences had not previously been reported as disbursements by DOD and thus included in the deficit calculation, the cumulative federal deficit was increased by DOD’s write-off amount of $14.5 million.
We found that, even though DOD policies require that most suspense account transactions and check differences be resolved within 60 days, DFAS centers were reporting an absolute value of $1.3 billion in aged suspense account amounts and an absolute value of $39 million in aged check differences as of December 31, 2004. DFAS knows that the reported suspense amounts are not complete and accurate because (1) DFAS center officials are still not performing the required reconciliations of their appropriation accounts, including suspense accounts, with Treasury records; (2) some field sites are not reporting any suspense activity to the centers or are reporting inaccurate suspense account information; and (3) some of the reported amounts for suspense and check differences still reflect netted and summarized underlying transaction information. Given these deficiencies with suspense account reporting, the actual value of aged problem transactions could be significantly understated.

DFAS centers are not performing effective reconciliations of their appropriation activity, including suspense account activity, even though DOD policies have long required them. Similar to checkbook reconciliations, DFAS centers need to compare their records of monthly activity to Treasury’s records and then promptly research any differences in order to identify and correct erroneous or missing transactions. When we reviewed the DFAS centers’ December 31, 2004, reconciliations of suspense account activity, we found that all of the centers had unexplained differences between their records and Treasury records—differences for which they could not identify transaction-level information. DFAS excluded transactions related to the unexplained differences from its reports on suspense account activity. In addition, we noted that amounts recorded in DFAS suspense accounts often reflected transactions that had been netted or summarized at a field site level.
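At its core, the checkbook-style reconciliation described above is a matching exercise: line up the center's recorded amounts against Treasury's and flag whatever appears on only one side for research. The following is a minimal sketch of that idea with invented amounts, not a representation of DFAS or Treasury systems.

```python
# Minimal sketch (invented data) of reconciling a center's recorded monthly
# activity against Treasury's records and flagging unmatched amounts.

from collections import Counter

def reconcile(center_records, treasury_records):
    """Return amounts appearing in one set of records but not the other.
    Counter subtraction handles duplicates: two 75.0 entries on one side
    match only one 75.0 entry on the other, leaving one unmatched."""
    center, treasury = Counter(center_records), Counter(treasury_records)
    only_center = sorted((center - treasury).elements())
    only_treasury = sorted((treasury - center).elements())
    return only_center, only_treasury

center = [100.0, 250.0, 75.0, 75.0]    # center's recorded activity
treasury = [100.0, 250.0, 75.0, 90.0]  # Treasury's recorded activity

only_center, only_treasury = reconcile(center, treasury)
# only_center   -> [75.0]  recorded by the center but not by Treasury
# only_treasury -> [90.0]  reported by Treasury but not by the center
```

Each flagged amount is a candidate erroneous or missing transaction; differences that cannot be traced to transaction-level detail are exactly the "unexplained differences" the centers were excluding from their suspense account reports.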
As illustrated by the recent write-off activity, netting transactions often obscures the underlying transactions, makes it more difficult for the centers to identify and correct errors and omissions, and understates the magnitude of suspense account problems.

In 1999, DFAS Arlington issued guidance that instructed each of its centers to develop their own procedures for preparing a monthly suspense account report (SAR) that would show the net value, absolute value, and aging of amounts charged to each suspense account. Because the systems and processes are not uniform across the centers, they were instructed to develop their own procedures for obtaining the necessary information from their systems, reconcile their suspense account records to Treasury records to help ensure accuracy and completeness, and explain any improper charges or overaged amounts. However, as discussed previously, we found that the centers were not effectively reconciling their suspense accounts and, therefore, could not demonstrate that their SARs were complete and accurate. In fact, center officials told us that some field sites did not report any of their suspense information or they reported inaccurate information in the SAR; however, those officials could not quantify the missing information or inaccuracies. As discussed above, the SARs also did not include transactions related to the unreconciled differences between center and Treasury records, including residual balances from prior to March 2001 that DOD was unable to write off. Figure 3 shows the aging of the $1.3 billion of suspense amounts reported on the December 31, 2004, SAR.

We also found that DFAS Arlington officials had not performed any comprehensive reviews to determine whether the centers were compiling the SARs in accordance with their own guidance.
DFAS Arlington officials and other center officials told us that it would be an overwhelming task to review the information submitted by the hundreds of DFAS field sites responsible for compiling the SARs. Although not required, some centers have documented the processes they are following to gather suspense account information and prepare the SARs; however, DFAS Arlington officials have not reviewed the written documentation. Arlington officials also did not know whether the centers were using the same criteria for reconciling and calculating absolute values. As previously stated, as of December 31, 2004, DFAS reports identified $1.3 billion absolute value of aged suspense account amounts and Treasury reports identified $39 million in absolute value of unresolved check differences. These aged problem transactions persist despite the DOD Financial Management Regulation (FMR) that requires staff to identify and charge the correct appropriation account within 60 days. The FMR allows DFAS to charge current appropriations for suspense account transactions and problem disbursements that cannot be resolved through research if approved by the fund holder, military service assistant secretaries, or defense agency Comptroller. For suspense account transactions, DFAS officials stated that the primary reasons for not consistently following the FMR are (1) staff have been too busy processing the write-off amounts and have not had the resources to clear more recent suspense transactions promptly and (2) military service and defense agency officials are unwilling to accept charges to current appropriation accounts without DFAS supplying them with sufficient proof that the charges actually belong to them. For the $39 million of unresolved check differences, DFAS officials stated that $36 million is related to transactions initiated by Army staff overseas. 
DFAS officials claimed that with the exception of the $36 million, they have been able to resolve almost all check differences within 60 days due to increased oversight and staff efforts, implementation of new controls over the check reconciliation process, and the increasing use of EFTs rather than checks. Overall, the write-off process enabled DOD to clear aged, unsupported amounts from its accounting systems and records and ensured that current appropriations would not be required to cover these amounts. However, the write-off did not correct appropriation account records or fix any of DOD’s deficient systems or accounting procedures. Therefore, DOD needs to continue its focus on the keys to eliminating aged problem disbursements and preventing their future occurrence, including improved disbursement processes and better management controls. Until DOD enforces its own guidance for reconciling and resolving its suspense accounts and check differences regularly, balances will likely grow. Without adequate tools for tracking and archiving the individual transactions charged to suspense, DOD will continue to have difficulty researching and determining proper accounting treatment. DOD’s inability over the years to promptly and correctly charge its appropriation accounts has prevented the department and Congress from knowing whether specific appropriation accounts were overspent or underspent and from identifying any potential Antideficiency Act violations. Unless DOD complies with existing laws and its own regulations, its appropriation accounts will remain unreliable and another costly write-off process may eventually be required. 
To prevent the future buildup of aged suspense accounts and check payment differences, we recommend that the Secretary of Defense take the following three actions: enforce DOD’s policy that DFAS centers and field-level accounting sites perform proper reconciliations of their records with Treasury records each month, use the results of the monthly reconciliations to improve the quality of DFAS suspense account reports, and enforce guidance requiring that disbursements in suspense be resolved within 60 days or be charged to current appropriations if research attempts are unsuccessful. In written comments on a draft of the report, the Principal Deputy Under Secretary of Defense (Comptroller) stated that the department concurred with our recommendations and described actions that are being taken to address them. DOD’s comments are reprinted in appendix II. We are sending copies of this report to other interested congressional committees; the Secretary of the Treasury; the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Director, Defense Finance and Accounting Service; and the Assistant Secretaries for Financial Management (Comptroller) for the Army, the Navy, and the Air Force. Copies will be made available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-9505 or kutzg@gao.gov if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other GAO contacts and key contributors to this report are listed in appendix III. As required by the conference report (H.R. Conf. Rep. No. 107-772) that accompanied the Bob Stump National Defense Authorization Act for Fiscal Year 2003 (Pub. L. No. 107-314 § 1009, 116 Stat. 
2458, 2635), we undertook a review of the Department of Defense’s (DOD) use of authority to write off certain aged suspense account transactions and check payment differences. Our objectives were to determine (1) what amount DOD wrote off using the legislative authority, (2) whether DOD had effective procedures and controls to provide reasonable assurance that amounts were written off in accordance with the legislation, (3) how the write-offs affected Treasury and DOD financial reports, and (4) what aged DOD suspense account balances and check payment differences remain after the write-offs have been accomplished. In conducting this work, we identified prior audit reports and other background information to determine the events that led DOD to request write-off authority. We visited DOD Comptroller offices and DFAS centers in Arlington, Indianapolis, Cleveland, and Denver, and contacted officials in DFAS Columbus and Kansas City to perform the following:

- Interviewed Comptroller and DFAS officials to obtain a general understanding of DOD’s use of suspense accounts and the department’s request for write-off authority.
- Gathered, analyzed, and compared information on how write-off amounts were identified and processed.
- Compared DOD’s policies and practices for the write-offs (including those policies and practices in effect at the relevant DFAS centers) to the specific provisions contained in the legislation and with any Treasury requirements.
- Identified DOD’s primary controls over the suspense account write-offs—a series of reviews performed by DOD/DFAS management and DFAS internal review—and tested the effectiveness of these controls by reviewing all certification statements resulting from the control procedures, comparing amounts reviewed to amounts written off, inquiring about and reviewing examples of rejected write-off amounts, and reviewing all of the support available for selected individual write-off amounts.
- Compared all check payment difference write-offs to Treasury reports to ensure the amounts were in compliance with the legislative requirements.

To determine the impact of the suspense account and check payment write-offs on DOD’s budgetary and financial reports, we determined which specific DOD/Treasury accounts were affected by the write-off entries. We asked DOD and Treasury officials how the write-off entries affected DOD budgetary accounts and the federal deficit. We also reviewed financial reports, journal vouchers, and other documents provided by DOD and Treasury. To identify the current outstanding suspense account balances and check payment differences, we reviewed amounts disclosed in DOD’s fiscal year 2004 financial statements and obtained relevant performance metrics as of September 30, 2004, and December 31, 2004. We identified any remaining aged suspense account or check differences being monitored by DOD management. To determine whether DOD reconciles its records to Treasury, we requested proof of DOD’s most current suspense account reconciliations and check difference reports. We performed our work from June 2004 through April 2005. Because of serious data reliability deficiencies, which the department has acknowledged, it was not our objective to—and we did not—verify the completeness and accuracy of DOD reported amounts, including current suspense account report amounts. We requested comments from the Secretary of Defense or his designee. We received written comments from the Principal Deputy Under Secretary of Defense (Comptroller), which are reprinted in appendix II. We also sent the draft report to the Secretary of the Treasury. Treasury sent us a few technical comments, which we have incorporated in the report as appropriate. We performed our work in accordance with generally accepted government auditing standards. 
Staff making key contributions to this report were Shawkat Ahmed, Rathi Bose, Molly Boyle, Sharon Byrd, Rich Cambosos, Francine Delvecchio, Gloria Hernandez-Saunders, Wilfred Holloway, Jason Kelly, and Carolyn Voltz.
Over the years, the Department of Defense (DOD) has recorded billions of dollars of disbursements and collections in suspense accounts because the proper appropriation accounts could not be identified and charged. DOD has also been unable to resolve discrepancies between its and Treasury's records of checks issued by DOD. Because documentation that would allow for resolution of these payment recording problems could not be found after so many years, DOD requested and received legislative authority to write off certain aged suspense transactions and check payment differences. The conference report (H.R. Conf. Rep. No. 107-772) that accompanied the legislation (Pub. L. No. 107-314) required GAO to review and report on DOD's use of this write-off authority. After decades of financial management and accounting weaknesses, information related to aged disbursement and collection activity was so inadequate that DOD was unable to determine the true value of the write-offs. While DOD records show that an absolute value of $35 billion or a net value of $629 million of suspense amounts and check payment differences were written off, the reported amounts are not reliable. Many of the write-offs represented transactions that had already been netted together (i.e., positive amounts offsetting negative amounts) at lower level accounting sites before they were recorded in the suspense accounts. This netting or summarizing of transactions misstated the total value of the write-offs and made it impossible for DOD to locate the support needed to identify what appropriations may have been under- or overcharged or determine whether individual transactions were valid. In particular, DOD could not determine whether any of the write-off amounts, had they been charged to the proper appropriation, would have caused an Antideficiency Act violation. 
It is important that DOD accurately and promptly charge transactions to appropriation accounts since these accounts provide the department with legal authority to incur and pay obligations for goods or services. DOD has hundreds of current and closed appropriation accounts that were authorized by law over the years. Similar to a checking account, the funds available in DOD's appropriation accounts must be reduced or increased as the department spends money or receives collections that it is authorized to retain for its own use. Just as an individual who maintains multiple checking accounts must be sure that transactions are recorded to the proper account, DOD also must ensure that the proper appropriation account is charged or credited for each specific disbursement and collection. Our review found that DOD's guidance and processes developed to ensure compliance with the legislation provided reasonable assurance that amounts were written off properly except that check payment differences did not have the required written certification. The write-off process did not correct underlying records and significant DOD resources were needed to ensure that write-off amounts were properly identified and handled. Also, using staff resources to process old transactions resulted in fewer staff to research and clear current problems. At December 31, 2004, DOD reports showed that after the write-offs, more than $1.3 billion (absolute value) of suspense amounts and $39 million of check differences remained uncleared for more than 60 days. However, DOD has acknowledged that its suspense reports are incomplete and inaccurate. Until DOD complies with existing laws and enforces its own guidance for reconciling, reporting, and resolving amounts in suspense and check differences on a regular basis, the buildup of current balances will likely continue, the department's appropriation accounts will remain unreliable, and another costly write-off process may eventually be required.
In the early 1970s, BIA began giving the tribes more training, involvement, and influence in the process of allocating TPA funds. TPA funds are used for programs such as law enforcement, social services, adult vocational training, child welfare, and natural resource management. Most tribes have placed all their available TPA base funding in only 5 or 6 of the more than 30 TPA base fund programs. All federally recognized tribes are eligible to receive TPA funds—either through contracts for operating tribal programs or through BIA-provided programs. As of October 1997, 556 tribes had been recognized by the federal government. The Joint Tribal/BIA/DOI Advisory Task Force on Bureau of Indian Affairs Reorganization, which was created in 1990 to develop goals and plans for reorganizing BIA to strengthen its administration of Indian programs, recommended that all small tribes—those with service populations of 1,500 or less—be brought up to a minimum level of TPA base funding to allow them the opportunity to develop basic self-governance capability. The task force recommended that the small tribes in the lower 48 states have available at least $160,000 in TPA base funds and that the tribes in the state of Alaska have available $200,000. BIA identified 307 tribes as part of this “small tribes funding initiative” that were below the recommended minimum funding levels. As directed by the Congress, BIA used part of its appropriation for fiscal years 1995, 1997, and 1998 to raise all the small tribes up to $160,000 in available TPA base funding. About two-thirds of the tribes included in the funding initiative are located in Alaska, and, for fiscal year 1999, BIA has requested an additional $3 million to move them closer to the recommended $200,000 funding level. The Single Audit Act of 1984, as amended, requires reporting by nonfederal entities, including tribes, that meet certain federal assistance thresholds. 
Before fiscal year 1997, the threshold was the annual receipt of $100,000 or more in federal funds; in fiscal year 1997, the threshold became the annual expenditure of $300,000 or more in federal funds. Entities meeting the threshold must submit an audited financial statement and a schedule of federal financial assistance. Of the $757 million appropriated for TPA in fiscal year 1998, about $507 million was for base funding, and about $250 million was for non-base funding. BIA distributes TPA base funds primarily on the basis of historical distribution levels. That is, the amount available to a particular tribe is generally the same as last year’s amount, without considering tribal needs or the tribes’ own revenues. Depending on annual appropriation levels, increases or decreases to a tribe’s base fund amount are made on a pro rata basis (e.g., a tribe that receives 0.2 percent of the total TPA base fund amount would receive 0.2 percent of any increase or decrease). In contrast, non-base TPA funds are distributed according to specific program criteria and, in some cases, the income levels of individual tribal members. BIA’s distribution of TPA base funds has been criticized over the last 20 years for, among other things, not being responsive to changes in the relative needs of the tribes. Nor has the distribution of TPA base funds been responsive to changes in the relative levels of the tribes’ own revenues. The majority of the fiscal year 1998 TPA base funds was distributed on the basis of historical funding levels, as has been the case for decades. The funding process used today has remained essentially the same since the early 1970s, when BIA began allowing the tribes more input into budget decisions and priorities. TPA was created to further Indian self-determination by giving the tribes the opportunity to establish their own priorities and to move funds among programs accordingly, in consultation with BIA. 
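The pro rata mechanism described above is simple to state in code: each tribe's share of any increase or decrease equals its share of the existing base, without regard to need or tribal revenues. The tribe names and dollar amounts below are hypothetical.

```python
# Minimal sketch of a pro rata TPA base adjustment; all figures hypothetical.
def pro_rata_adjust(base_amounts, change):
    """Distribute `change` across tribes in proportion to current base funding."""
    total = sum(base_amounts.values())
    return {tribe: base + change * (base / total)
            for tribe, base in base_amounts.items()}

base = {"Tribe A": 1_000_000, "Tribe B": 500_000, "Tribe C": 500_000}
# Tribe A holds 50 percent of the base, so it receives 50 percent of a
# $100,000 overall increase -- whatever its needs or revenues may be.
adjusted = pro_rata_adjust(base, 100_000)
print(adjusted)
```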
BIA believes that a stable funding base enhances the tribes’ ability to plan and budget their funds. Once a program is categorized as part of the TPA base funds, it loses any need-based identity it once had. Prior to their inclusion in TPA, some programs were funded according to specific program criteria. For example, funds provided under the Johnson O’Malley Act of 1934, as amended, are intended to provide supplementary financial assistance to meet the educational needs of Indian children attending public schools. These funds used to be distributed on the basis of education costs in each state and the number of students eligible for program services. Beginning in 1996, however, Johnson O’Malley funds were transferred into the tribes’ TPA base funds, using 1995 data on program costs and eligibility. Because both cost and eligibility may have changed since 1995, and because the tribes have the authority to move funds in and out of the program, the current distribution may bear little resemblance to the original, formula-driven one. In contrast to the base funds, the non-base funds—used for such programs as road maintenance, housing improvement, welfare assistance, and contract support—are generally distributed according to specific program criteria that consider, in some cases, individual income. Eligibility for housing improvement funds, for example, is based on whether and to what degree the housing being considered for improvement is substandard, as well as on the applicant’s income level. Similarly, eligibility for welfare assistance is based on a determination that an individual’s income is insufficient to meet his or her essential needs. In general, the tribes may not shift non-base funds among programs without special authorization. BIA’s process for allocating base funds, while enhancing the tribes’ budget flexibility, has been criticized for, among other things, not being responsive to changes in the relative needs of the tribes. 
Although the tribes set priorities for their individual programs, no mechanism exists within TPA for identifying the tribes’ comparative needs and funding them accordingly. As the relative needs of the tribes have changed over time, no corresponding change has occurred in the distribution of TPA base funds, making the apparent funding discrepancies among the tribes more pronounced. In 1978, we reported that BIA had been criticized by the Office of Management and Budget, the American Indian Policy Review Commission, and the tribes for its failure to develop a formula to ensure the equitable allocation of TPA funds among the tribes on the basis of need. We reiterate our belief, as stated in our 1978 report, that “accurate, current, and comparable comprehensive tribal needs analyses would provide BIA with a measurement to be considered in developing a formula on which to allocate Bureau [TPA] funds.” Similarly, the Joint Tribal/BIA/DOI Task Force reported in 1994 that “Developing a system to measure the relative needs of Tribes with widely varying locations, mix of programs, size, and circumstances will be a monumental undertaking. The development will rely heavily on information which can be gathered only through a Tribal/BIA partnership. Much of this information is not currently available in a consistent and reliable form.” Finally, in 1998, a task force—established pursuant to Interior’s 1998 appropriation bill and charged with deciding how to distribute a general increase in 1998 TPA funds—emphasized the importance of BIA’s developing a TPA funding allocation method that addressed funding inequities and unmet tribal needs. This task force recommended in its January 1998 report that funds be set aside to create a working group to develop a standard assessment methodology. Accordingly, BIA set aside $250,000 for this purpose. The working group, according to BIA’s fiscal year 1999 budget request, will develop a revised TPA allocation model that is based on tribal needs. 
BIA has requested an additional $250,000 for the working group in fiscal year 1999. As long as it continues to use a funding distribution method that is relatively static, based largely on the initial division of funds among the tribes that was developed in the early 1970s, BIA has no assurance that its current TPA distribution is most effectively meeting the needs of the tribes. According to Interior officials, there is no clear documentation on how TPA base amounts for each tribe were initially determined. And even if those initial divisions were clearly documented, they may not support a distribution made in the early 1970s as the appropriate distribution today. At least two significant changes have occurred in the last 25 years that affect Indian tribes: The Indian population has more than doubled in size, and the revenues of some tribes have greatly increased since the approval of Indian gaming in 1988. In 1995, net income from Indian gaming operations was about $1.9 billion. Because the tribes’ own revenues are not considered in distributing TPA base funds, rich and poor tribes alike receive them. In fact, the tribes in our analysis that reported the highest amounts of their own revenues received more in TPA base funds, in total, than did those tribes that reported the lowest amounts of their own revenues. Furthermore, all the tribes covered by the small tribes funding initiative were brought up to about the same level of available TPA base funding, even though some reported substantial revenues of their own. Of the 299 tribes included in our analysis, one reported having more than $300 million in revenues of its own in 1996. Yet that same year, this tribe received over $350,000 in TPA base funds. Similarly, the five other tribes that reported more than $100 million in revenues of their own in 1995 or 1996 received TPA base amounts ranging from about $500,000 to $40 million in the corresponding year. 
In contrast, three of the six tribes that reported a deficit in their own revenues each received less than $350,000 in TPA base funds. Similarly, for the small tribes identified by BIA as being part of the small tribes funding initiative, the tribes’ own revenues did not affect the TPA base funds they received. Specifically, 72 of these tribes included in our analysis reported revenues that ranged from minus $1.4 million to over $30 million, with a median of about $92,000. Of the 72 tribes, 62 reported having revenues of their own, and 10 reported having no revenues or having losses. The small tribes that reported no revenues of their own and the one that reported having over $30 million in revenues, along with all the other tribes in the small tribes funding initiative whose revenues fell in between, were brought up to the same minimum level—$160,000—of available TPA base funding in 1998, at congressional direction. BIA stated that “there are a number of statutes which specifically prohibit BIA from considering certain revenue sources when making funding allocations.” BIA also said that comparing one tribe’s own revenues and TPA funding with those of another tribe is an “unfair comparison” because it does not take into consideration many other factors that may affect the level of TPA funding, such as a tribe’s land base and population. We agree that BIA is prohibited from considering certain tribal revenues in making funding allocations. The comparisons we present among tribes—regarding their own revenues and their TPA funding—are intended to illustrate the point that tribal revenues are currently not considered in the distribution of TPA funds. We agree that many other factors should be considered in determining an appropriate TPA distribution method. The question of how to ensure equity in the distribution of TPA funds among tribes is not an easy one. 
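The small tribes funding initiative described above reduces to a simple top-up rule: raise every tribe below the recommended minimum to that level, irrespective of the tribe's own revenues. A minimal sketch, with hypothetical tribes and amounts:

```python
# Hypothetical sketch of the small-tribes top-up: each tribe's available TPA
# base is raised to at least the recommended minimum ($160,000 for the lower
# 48 states); tribal revenues play no role in the calculation.
MINIMUM = 160_000

def top_up(base_amounts, minimum=MINIMUM):
    """Raise each tribe's available TPA base to at least `minimum`."""
    return {tribe: max(base, minimum) for tribe, base in base_amounts.items()}

small_tribes = {"Tribe X": 40_000, "Tribe Y": 155_000, "Tribe Z": 200_000}
funded = top_up(small_tribes)
cost = sum(funded[t] - small_tribes[t] for t in small_tribes)

print(funded)  # tribes X and Y rise to 160,000; Z is already above
print(f"appropriation needed for the top-up: ${cost:,}")
```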
Among the key pieces of information that could prove useful in answering that question are (1) the economic status of each tribe, (2) its needs, and (3) the government’s responsibility to it. Much of this information, however, is not currently or readily available. Furthermore, the question of how to ensure equity in funding goes beyond TPA funds; it pertains as well to funding for many Indian programs provided by federal agencies other than BIA. Ultimately, however, the issues of how TPA and other federal funds are distributed and what information should be considered in that process are policy questions for the Congress and other federal decisionmakers to decide. To obtain an understanding of the economic status of each tribe, complete and reliable information on its finances would be needed. Like all nonfederal entities that expend at least $300,000 annually in federal funds, tribes must file financial reports under the Single Audit Act to account for the expenditure of those funds. Although the purpose of the Single Audit Act is to safeguard federal funds, not to require complete financial reporting by Indian tribes, the financial information submitted by the tribes under the act is the information most readily available from the majority of tribes. Our analysis of that information showed that tribes reported widely varying amounts of their own revenues. Specifically, six tribes each reported that they had revenues of over $100 million, for a total of over $1.1 billion; at the other extreme, six tribes reported losses that totaled over $9 million. Another 128 tribes reported revenues of between $1 million and $100 million, totaling almost $1.6 billion, and 159 tribes reported that they had from $0 up to $1 million of their own revenues, totaling over $36 million. However, the information reported by the tribes under the Single Audit Act does not always provide a complete picture of the tribes’ financial positions and, in some cases, is not reliable. 
Our examination of tribal financial statements filed under the act showed that the tribes frequently excluded financial information on business enterprises that do not involve the expenditure of federal funds. For example, in a previous review of Indian gaming revenues, we identified a tribe that reported over $100 million in net income from its gaming operations in 1995; yet the tribe’s financial statement for the same year (submitted under the Single Audit Act) reported only about $1.3 million in revenues of its own. Furthermore, the reliability of the data contained in the financial statements we reviewed was, in some cases, questionable. About half of these financial statements received auditors’ opinions indicating that the statement was deficient in some way and did not fairly represent the financial position of the reporting entity. Deficiencies ranged from the use of a cash rather than an accrual basis of accounting for revenues and expenditures to the complete unreliability of the data the tribe used for its accounting system. Deficiencies commonly cited included the exclusion of proprietary funds and of the general fixed asset account group. In addition, not all tribes are required to report under the Single Audit Act. Prior to fiscal year 1997, only those nonfederal entities (including Indian tribes) that received $100,000 or more in total federal assistance were subject to the reporting requirements in the Single Audit Act. Beginning in fiscal year 1997, the reporting requirement threshold under the Single Audit Act was increased to the expenditure of at least $300,000 in total federal assistance annually. In addition, tribes that rely on the federal government or some other entity to provide services for them, rather than expending the funds themselves, are not subject to the Single Audit Act. 
A second key piece of information that could prove useful in answering the question of how to ensure equity in the distribution of TPA funds is information on tribes’ needs. Over the last 20 years, we and others have commented that BIA should develop measurements of the relative needs of each tribe for each BIA program. This recommendation was repeated in January 1998 by a joint BIA/tribal task force. An assessment of a tribe’s needs is crucial to determining the extent to which a tribe’s own revenues foster its self-sufficiency. In and of themselves, a tribe’s financial resources do not serve as a measure of its needs. For example, one tribe in our analysis, which reported having more than $100 million of its own revenues in 1996, also reported receiving over $300 million in federal assistance that year. Although the receipt of such a large amount of federal assistance might indicate that the tribe’s needs far exceed its own revenues, many other factors could negate or support such a conclusion. Among the many factors that may come into play in assessing need are the size and composition of a tribe’s population and land base, the type and extent of its natural resources and governance experience, as well as various program-specific criteria. As reported in 1994 by the Joint Tribal/BIA/DOI Task Force, gathering the information necessary to assess tribes’ needs would be a “monumental undertaking.” There are over 550 federally recognized tribes located throughout the United States. Their reported populations range from 0 to over 225,000, and their reservations vary in size from a few acres for some rancherias in California to 17.5 million acres for the Navajo reservation in Arizona, New Mexico, and Utah. And finally, to obtain an understanding of the federal government’s responsibility to each tribe, the provisions of applicable treaties, laws, executive orders, and court decisions would need to be reviewed. 
There are over 360 ratified Indian treaties, dating back to 1778, as well as hundreds of relevant executive orders, court decisions, and laws. In developing Interior’s appropriations for fiscal year 1998, the Senate Committee on Appropriations inserted a provision that would have required BIA to develop a formula to allocate TPA funds on the basis of need, taking into account the tribes’ own revenues. The provision also would have required the tribes to report their complete financial information to BIA as a precondition of receiving TPA funding. Much of the debate over this provision focused on whether the federal government had an overriding legal responsibility to provide TPA funding regardless of the tribes’ own revenues. Although this provision was not retained in the final version of the bill approved by the Senate, or in the version enacted, we expect that questions regarding BIA’s distribution of TPA funds will continue. Whether comprehensive financial reporting should be required for the tribes and how that information should be used in determining the distribution of TPA funds are policy questions for the Congress and other federal decisionmakers to address. Furthermore, our review addressed only the distribution of TPA funding, which is just a small part of the overall federal funding for Indian programs. Although TPA was nearly half of BIA’s 1998 appropriation, it represented just 10 percent of the $7.5 billion in federal funding appropriated for Indian programs in 1998. 
Such funding comes from various federal agencies: the Department of Health and Human Services (HHS), which provides funding for Indian health programs, accounted for about 37 percent of the federal funding for Indian programs; DOI (including BIA), about 26 percent; the Department of Education, which provides funding for Indian education programs, about 18 percent; the Department of Housing and Urban Development, which provides funding for Indian housing programs, about 9 percent; and other agencies, the remaining 10 percent. Given the extent and the amount of federal funding, it is clear that the question of whether federal funds target the greatest needs among tribes extends beyond TPA and BIA; the question of equity might also be asked about funds provided by other federal agencies. In our analysis, for example, the tribe with the most revenues of its own received more than twice as much funding from HHS as it did from BIA. In several previous reviews, we have questioned whether HHS’ funding distributions are appropriate. We provided a copy of a draft of this report to BIA for its review and comment. Concerning the distribution of TPA funds, BIA commented that there is no statutory or regulatory basis for adjusting TPA distributions on the basis of tribal revenues. Furthermore, BIA said that in some cases it is specifically prohibited from taking into account certain types of tribal revenues in deciding how to distribute TPA funds. We agree with those comments. However, we disagree with BIA’s characterization of our statement as “meaningless”; we reported that, under the current method for distributing TPA funds, there is no assurance that the funds are effectively targeting the most pressing needs among tribes. Without assessing the relative needs of the tribes, BIA cannot know if the most pressing needs are effectively being met.
And while the relative needs of the tribes have changed over time, BIA’s distribution of TPA base funds—which is based on historical levels—has not changed to accommodate them. We continue to believe that tribal needs assessments would assist BIA in developing criteria by which to equitably allocate TPA funds. Our report recognizes that a tribe’s revenues do not, in and of themselves, serve as the only measure of its needs. The information on tribal revenues is presented to illustrate one of the changing factors that is currently not being addressed in the distribution of TPA funds and to provide information for the policy debate on this issue currently before the Congress. Setting aside the policy question of whether tribal revenues should be considered in the distribution of TPA funds, the underlying issue remains: whether TPA funds are appropriately distributed among the tribes under existing Indian policy. Even without considering tribal revenues, improvements in the distribution of TPA funds could be made under existing Indian policy. Much of the criticism of TPA distributions over the last 20 years has been that relying on historical distribution levels has resulted in inequitable funding among the tribes. BIA acknowledged that the TPA process has been faulted in many areas over the years, including the inequities in the historical distribution method. In response to these criticisms, BIA has made a number of key improvements in the TPA process but, after repeated attempts, has yet to implement a more equitable TPA distribution method. As noted in the report and in BIA’s comments, BIA has created another working group to address this issue. Regarding the additional information that could be helpful in determining a revised distribution method, BIA emphasized that the purpose of the Single Audit Act was to safeguard federal funds, not to require complete financial reporting on tribal businesses. 
We agree that, if the Congress wishes to consider the tribes’ own revenues in distributing TPA funds, more complete and reliable financial information on the tribes would be required than that currently available in the reports filed under the Single Audit Act. Furthermore, BIA commented that there would be costs associated with “compiling, reporting, analyzing, and making funding decisions” on the basis of the additional types of information discussed in this report. We agree that the costs associated with implementing and maintaining a revised TPA distribution method would be an important factor to consider in developing a new approach. However, BIA has not yet developed a revised approach, so it is not possible to estimate what those costs would be. BIA also provided several technical clarifications, which we incorporated into the report where appropriate. BIA’s comments and our specific responses appear in appendix III. We performed our review from November 1997 through May 1998 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is contained in appendix IV. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-3841. Major contributors to this report are listed in appendix V. The following table shows the fiscal year 1998 distribution of Tribal Priority Allocations (TPA) base funds among the Bureau of Indian Affairs (BIA) 12 area offices and our per capita analysis of that distribution. Our per capita analysis is for informational purposes only; BIA does not distribute TPA funds on a per capita basis, nor does BIA recommend that such a distribution method be used. 
Interior officials noted that they do not consider the service population figures, which BIA last reported in 1995, to be reliable. They also noted that TPA funds are distributed to tribes, rather than individuals, and that a lower per capita figure may reflect that tribes in one area have larger memberships but smaller land bases than tribes in another area. We did not independently verify the distribution amounts or population estimates. The table’s notes explain that the figures for non-base and other self-governance TPA funds include base funds only for those tribes without self-governance agreements. Funds not distributed include TPA funds for other BIA offices or nontribal entities (e.g., funds for BIA’s central office, funds for employees displaced because of tribal contracting, and education funds for nontribal entities), as well as funds that will be but have not yet been distributed to tribes or area/agency offices (e.g., funds for contract support and welfare assistance). Because not all of these funds have been distributed, a per capita analysis is not applicable.

The following tables show the fiscal year 1998 distribution of TPA base funds among tribes, by area and agency office, and our per capita analysis of that distribution. Our per capita analysis is for informational purposes only; BIA does not distribute TPA funds on a per capita basis, nor does BIA recommend that such a distribution method be used. According to Interior officials, there are reasons for differences in TPA distributions. For example, BIA is required to fund law enforcement and detention in states that do not have jurisdiction over crimes occurring on Indian lands, so tribes located in those states may receive more TPA funds for these purposes than tribes located in other states. Similarly, BIA has a trust responsibility for natural resources on reservations, so tribes that have large land bases may receive more TPA funds for this purpose than tribes with small land bases.

[The appendix tables, which list the tribes served by each BIA area and agency office together with the fiscal year 1998 TPA base fund distributions and per capita analysis, are not reproduced here. Recurring table notes state that certain area office funds are used to provide services to some or all tribes in an area and are included in the per capita analysis for the area total; that some tribes are not affiliated with an agency office; and that the figures for the Turtle Mountain Chippewa include TPA funds and service population for the Trenton, North Dakota, location.]

1. Our earlier report entitled Tax Policy: A Profile of the Indian Gaming Industry (GAO/GGD-97-91, May 5, 1997) stated that the total revenues reported from Indian gaming operations during 1995 were $4.5 billion, with a total net income of $1.9 billion. About 40 percent ($1.85 billion) of all gaming revenues was generated by eight tribes. In its comments, BIA incorrectly compared the revenue figure for these eight tribes ($1.8 billion) with the total net income figure for all the gaming tribes ($1.9 billion).
Nevertheless, we agree with BIA’s overall point that these eight tribes account for a substantial portion of the gaming revenues and that these eight tribes receive only a small fraction of the overall TPA funds. 2. The data on tribal revenues presented in this report are provided for informational purposes only, for the policy debate currently before the Congress. We chose to include the complete range of tribal revenue figures, from negative to positive, to provide a balanced picture of the financial position of as many tribes as possible. Some tribes reported substantial revenues, while others reported losses from their business enterprises. We understand BIA’s concern, and we caution that no inference should be drawn from the presentation of these data that tribes with failing business enterprises should receive increased federal assistance. We obtained information on BIA’s basis for distributing 1998 TPA funds and on the tribes’ own self-reported revenues under the Single Audit Act. We contacted officials of the Department of the Interior’s Bureau of Indian Affairs, Office of Audit and Evaluation, and Office of Self-Governance in Washington, D.C., and its Office of Audit and Evaluation in Lakewood, Colorado. We analyzed distribution data provided by BIA and the Office of Self-Governance to determine the amounts distributed to BIA area and agency offices and to tribes in fiscal year 1998. At Interior’s Office of Audit and Evaluation in Washington, D.C., and Lakewood, Colorado, we examined all 326 of the most recent financial statements that were submitted as of March 1998 under the Single Audit Act by tribes, tribal associations, and tribal enterprises. These statements generally covered 1995 or 1996. We did not examine statements submitted for some entities, such as tribal housing authorities and community colleges, because these entities are financially separate from the tribes. 
From each of the financial statements we examined, we obtained information about the independent auditor’s opinion, revenues for all fund types reported, and operating income for the tribes that included tribal business information in their statements. Our analyses of 299 tribes’ own revenues were derived from the information contained in the financial statements. Of the 326 financial statements we reviewed, 299 were for tribes (2 of which were not federally recognized). Another 13 were for tribal businesses or components of tribes, and we merged their financial information into that of the cognizant tribes for analysis. The remaining 14 financial statements were for consortia or associations representing multiple tribes, primarily those in Alaska; we excluded the financial information from these statements from our analysis of tribes’ own revenues. We performed our review from November 1997 through May 1998 in accordance with generally accepted government auditing standards. Major contributors to this report were Jennifer Duncan, Ann Fruik, Barry Hill, Diane Lund, Jeffery Malcolm, Sue Naiberk, and Pam Tumler.
Pursuant to a congressional request, GAO provided information on the: (1) Bureau of Indian Affairs' (BIA) method for distributing Tribal Priority Allocation (TPA) funds; and (2) other revenues available to the tribes. GAO noted that: (1) under the current method for distributing TPA funds, there is no assurance that the funds are effectively targeting the most pressing needs among the tribes; (2) currently, BIA distributes two-thirds of TPA funds, referred to as base funds, largely on the basis of historical funding levels; (3) in distributing these base funds, BIA does not take into consideration changing conditions, such as the tribes' levels of need or the tribes' own revenues from nongovernmental sources; (4) the remaining one-third of TPA funds, known as non-base funds, are used for such activities as road maintenance and housing improvement and are generally distributed on the basis of specific program criteria; (5) BIA's distribution of TPA base funds has been widely criticized over the last 20 years for, among other things, not being responsive to changes in the relative needs of the tribes; (6) furthermore, because the tribes' own revenues are not considered in the distribution of TPA base funds, the tribes with the highest revenues receive TPA base funds just as the tribes with the lowest revenues do; (7) GAO's analysis showed that each of the 6 tribes with the highest reported revenues received more TPA base funds than did each of the 16 tribes with no reported revenues or with losses; (8) in addition, 62 small tribes reported having revenues of their own yet received the same amount of TPA base funds as small tribes that reported no revenues of their own; (9) a decision about whether and in what way to redistribute TPA funds is as complex as it is controversial; (10) as long as BIA continues to distribute TPA base funds on a historical basis, it cannot be certain that the distribution accommodates the changing needs of the tribes; (11) to determine an 
equitable distribution among the tribes, several types of data may be considered; however, much of this information is not currently or readily available in a consistent and reliable form; (12) furthermore, questions of equity in federal financial assistance extend beyond BIA and TPA funds; (13) although TPA was nearly half of BIA's 1998 appropriation, it represented just 10 percent of the $7.5 billion in federal funding appropriated for Indian programs in 1998; and (14) ultimately, however, the issues of how TPA and other federal funds should be distributed and what information should be considered in that process are policy questions for Congress and other federal decisionmakers to address.
The United States has supported Colombia’s efforts to reduce drug-trafficking activities and stem the flow of illegal drugs entering the United States for more than 2 decades. Despite Colombian and U.S. efforts to disrupt drug-trafficking activities, the U.S. government has not reported any net reduction in the processing or export of refined cocaine to the United States. According to State, Colombia provides 90 percent of the cocaine and approximately 40 percent of the heroin entering the United States. To further complicate matters, the country’s two largest insurgent groups—the Revolutionary Armed Forces of Colombia and the National Liberation Army—and paramilitary groups have expanded their involvement in drug trafficking. According to a State official, the Revolutionary Armed Forces of Colombia and the paramilitary United Self-Defense Forces of Colombia are involved in every facet of narcotics trafficking, including cultivating, processing, and transporting. The insurgents exercise some degree of control over 40 percent of Colombia’s territory east and south of the Andes—which, as illustrated in figure 1, includes the primary coca-growing regions of Colombia. According to the Drug Enforcement Administration, several billion dollars flow into Colombia each year from the cocaine trade alone. This vast amount of drug money has made it possible for these organizations to gain unprecedented economic, political, and social power and influence. In an effort to address the influx of cocaine and heroin from Colombia, the United States has funded a counternarcotics strategy in Colombia that includes programs for interdiction, eradication, and alternative development, which must be carefully coordinated to achieve mutually reinforcing results.
Besides assistance for the Colombian Army counternarcotics brigade and the Colombian National Police aerial eradication program, the United States has supported Colombian efforts to interdict illicit-drug trafficking along rivers and in the air, as well as alternative development, judicial sector reform, and internally displaced persons programs. State and Defense have provided most of the counternarcotics funding, and State, through its Bureau for International Narcotics and Law Enforcement Affairs and Narcotics Affairs Section (NAS) in the U.S. Embassy Bogotá, oversees the program. In addition, the Military Group in the U.S. Embassy Bogotá manages much of the assistance to the Colombian military. Since the introduction of Plan Colombia in fiscal year 2000, the United States has provided more than $2.5 billion in assistance. (See table 1.) In response to increased violence in Colombia during early 2002 and the recognition that the insurgents and illicit drug activities are inextricably linked, the Congress provided “expanded authority” for the use of the U.S. assistance to Colombia. This authority enables the government of Colombia to use the U.S.-trained and -equipped counternarcotics brigade, the U.S.-provided helicopters, and other U.S.-provided counternarcotics assistance to fight groups designated as terrorist organizations as well as to fight drug trafficking. Similar authority was provided for fiscal year 2003 and is being sought for fiscal year 2004. For fiscal year 2004, the administration has requested about $700 million in funding for Colombia. During fiscal years 2000-03, the United States provided about $640 million in assistance to the Colombian Army for initial training and equipment for the counternarcotics brigade and for 72 helicopters and related operational, maintenance, and training support. These helicopters were intended to transport the counternarcotics brigade on counternarcotics missions.
Nearly all this assistance has been delivered and is being utilized by the counternarcotics brigade in conducting operations. However, both the United States and the Colombian Army experienced some unanticipated problems that delayed the operational use of the helicopters. In addition, U.S. support will be needed for the foreseeable future to sustain operations. The United States originally agreed to provide training and equipment for a Colombian Army counternarcotics brigade made up of three battalions and a headquarters staff with a total of about 2,285 professional and conscripted soldiers. The battalions became operational in December 1999, December 2000, and May 2001, respectively. The counternarcotics brigade was assigned to the Colombian military’s Joint Task Force-South, which was headquartered at Tres Esquinas in Caqueta—one of the principal coca-growing regions of Colombia. The task force comprised units from the Colombian Army, Air Force, and Marine Corps and was tasked with the overall military mission of regaining government control over southern Colombia, primarily in the Putumayo and Caqueta departments. The United States provided the counternarcotics brigade with about $45 million in training and equipment—from weapons and ammunition to rations, uniforms, and canteens. The brigade’s primary mission was to plan and conduct interdiction operations against drug trafficking activities, including destroying illicit drug-producing facilities, and, when called upon, to provide security in insurgent-controlled areas where aerial eradication operations were planned. Although the Colombian Army’s counternarcotics brigade has achieved some success, the Colombian military has not regained control over large parts of the country where coca and opium poppy are grown. According to U.S. and Colombian officials, the counternarcotics brigade was highly effective during 2001 but somewhat less effective during 2002.
For example, during 2001 the brigade destroyed 25 cocaine hydrochloride laboratories while in 2002 it destroyed only 4 laboratories. U.S. embassy officials stated that the brigade became less effective because the insurgents moved their drug-producing activities, such as the laboratories, beyond the reach of the brigade. In addition, according to these officials, the brigade became more involved in protecting infrastructure, such as bridges and power stations, and performing base security. Moreover, the aerial eradication program did not call on the brigade to provide ground security on very many occasions, essentially planning spray missions in the less threatening areas. In August 2002, U.S. embassy and Colombian military officials agreed to restructure the brigade to make it a rapid reaction force capable of making quick, tactical strikes on a few days’ notice. As part of this restructuring, the Colombian Army designated the brigade a national asset capable of operating anywhere in Colombia rather than just in its prior area of responsibility in southern Colombia. The newly restructured brigade consists of three combat battalions and a support battalion with a total of about 1,900 soldiers, all of whom are professional. Two of the combat battalions have been retrained. The third combat battalion should be retrained by mid-June 2003. This change, according to NAS, Military Group, and Colombian Army officials, will improve the brigade’s ability to conduct operations against high-value, drug-trafficking targets, such as laboratories containing cocaine and the leadership of insurgent groups involved in drug-trafficking activities. One of the retrained battalions has been operating in Narino department since early May. A key component of U.S. assistance for Plan Colombia was enhancing the air mobility of the counternarcotics brigade.
To accomplish this, the United States provided the Colombian Army with 33 UH-1N helicopters, 14 UH-60 Black Hawk helicopters, and 25 UH-II helicopters. The helicopters were provided to give the brigade the airlift needed to transport its personnel in the Joint Task Force-South’s area of responsibility in southern Colombia. Both the UH-1Ns and the UH-60 Black Hawks are operational; the UH-IIs are scheduled for full operations later this year. However, the Colombian Army continues to need U.S. assistance and contractor pilots and mechanics to fly the aircraft. In September 1999, State and Defense initiated a plan to provide the Colombian Army with 33 UH-1N helicopters that State had purchased from Canada to support the counternarcotics brigade. The helicopters were intended to serve as interim aircraft until the UH-60 and UH-II helicopters funded by the United States as part of Plan Colombia were delivered. The UH-1N helicopters were delivered in various stages between November 1999 and March 2001. According to the U.S. embassy, the helicopters flew their first mission in December 2000. Since then, the helicopters have flown 19,500 hours in combat and have supported more than 430 counternarcotics operations for the brigade. Although Colombian Army personnel are qualified as pilots and mechanics, many of the experienced pilots and mechanics who operate and maintain the helicopters are provided through a U.S.-funded contractor. For example, 20 contractor personnel serve as pilots-in-command when flying operations. With the $208 million provided as U.S. assistance under Plan Colombia for UH-60 Black Hawk helicopters, State and Defense procured 14 helicopters, a 2-year spare parts package, and a 1-year contractor support package. The helicopters were delivered between July 2001 and December 2001. 
However, the helicopters did not begin to support operations of the counternarcotics brigade until November 2002 because of the lack of Colombian Army pilots who met the minimum qualifications needed to operate the helicopters. Forty-two Colombian Army personnel have completed the minimum UH-60 pilot training; 13 have qualified as pilot-in-command. U.S.-funded contract pilots fill in as pilots-in-command. In addition, a U.S.-funded contractor continues to maintain the helicopters and provide maintenance training. With the $60 million provided as U.S. assistance under Plan Colombia for UH-II helicopters, State procured 25 aircraft. The original plan was to deliver the UH-II helicopters to the Colombian Army between November 2001 and June 2002. However, the 25 helicopters were delivered between March 2002 and November 2002. This 5-month delay occurred because the Colombian military considered using a different engine than the one usually installed, believing that it might be easier to maintain. After numerous discussions, Colombia decided to use the more commonly used engine. According to NAS officials, although some of the UH-II helicopters are being used for missions, all the helicopters will not be operational until June 2003. As of January 2003, 25 Colombian Army pilots had completed their initial training, and 21 of these pilots were completing the training needed to qualify for operational missions. However, contractor pilots continue to supplement Colombian Army pilots and a U.S.-funded contractor continues to provide maintenance support. Although all the U.S.-provided helicopters are in Colombia, a number of unanticipated problems were encountered in training Colombian Army pilots and mechanics to operate and maintain the helicopters. Some of these problems continue to limit the Colombian Army’s ability to operate and maintain the aircraft.
Primarily, the Colombian Army will have to continue to rely on contractor support because it will not have enough trained pilots-in-command and senior mechanics for the foreseeable future. When the United States agreed to provide the UH-60 and UH-II helicopters for the Colombian Army in July 2000, the assistance for Plan Colombia did not include any funds to train the Colombian pilots and mechanics needed to operate and maintain the helicopters. In October 2000—about 3 months after passage of U.S. assistance for Plan Colombia—State reported that, although the Colombian military had qualified pilots and support personnel, it did not have the numbers of personnel required to field and operate the new helicopters. State requested that Defense provide the training needed for the pilots and mechanics. Although Defense agreed to provide the training, it took an additional 3 months to decide that the U.S. Army would be responsible and to identify a funding source. In February 2001, Defense reported that it would transfer up to $20 million from other counternarcotics projects in Colombia for this training. A training plan was approved in mid-2001. Although the plan provided training for Colombian Army personnel to meet the minimum qualifications as pilots and mechanics, it did not include the additional training necessary to fly missions in a unit or to become a senior mechanic. Basic training for 117 helicopter pilots—known as initial entry rotary wing training—began in November 2001 and is projected to be completed by December 2004. This training is intended to provide a pool or pipeline of pilots for more advanced training to fly specific helicopters. In addition, according to NAS officials, a new pilot takes an average of 2 to 3 years to progress to pilot-in-command. Specific UH-60 pilot training for 42 personnel began in August 2001 and was completed in September 2002. 
Specific UH-II pilot training for 75 personnel began in May 2002 and is projected to be completed in December 2003. In addition, according to NAS and U.S. contractor officials, 105 out of 159 Colombian Army personnel have completed the basic UH-60 and UH-II maintenance training and are taking more advanced training to qualify as senior mechanics. These officials told us that the remaining 54 personnel will receive the contractor-provided basic training in the near future, but they did not know when it would begin. NAS and U.S. contractor officials also told us that it typically takes 3 to 5 years for mechanics to gain the experience necessary to become fully qualified on specific helicopter systems, in particular the UH-60 Black Hawks. The Colombian Army Aviation Battalion is responsible for providing helicopters and other aircraft and personnel for all Colombian Army missions with an aviation component, including counternarcotics and counterinsurgency operations throughout Colombia. Information provided by the Colombian Aviation Battalion shows that it is staffed at only 80 percent of its required levels and, over the past several years, it has received between 60 and 70 percent of its requested budget for logistics and maintenance. According to Colombian Army personnel, current plans indicate that the missions the battalion needs to support will be expanding, but they do not know if they will have sufficient resources to meet these demands. The decision by the Colombian military to continue using the UH-1N helicopters in addition to the UH-60 and UH-II helicopters will make it more difficult for the Aviation Battalion to provide the numbers of personnel needed to operate and maintain the helicopters. State originally intended that the UH-1N helicopters would only be used by the counternarcotics brigade until the UH-60 and UH-II helicopters were available to support operations. 
However, in 2002, the Colombian military requested and received approval from the United States to continue using these helicopters. NAS and Military Group officials stated that operating all the aircraft increases the total number of pilots and mechanics the Aviation Battalion requires. For example, the battalion will have to have a total of 84 additional Colombian Army personnel qualified to serve as pilots-in-command (42) and co-pilots (42). Even though the U.S.-funded contractor has trained Colombian Army personnel since the UH-1N’s initial delivery in 1999, only 61 Colombian Army personnel remain in the program. According to bilateral agreements between Colombia and the United States, the Colombian Army must ensure that pilots and mechanics who receive U.S. training be assigned to positions using their training for a minimum of 2 years. This has not always been the case. For example, according to U.S. embassy data, at least 105 Colombian Army personnel have completed the basic helicopter maintenance course. As of January 2003, 65 of these individuals were scheduled to receive additional training that would enable them to become fully qualified mechanics who can perform maintenance without U.S.-contractor oversight. Of these, 22 had not reported for training. Neither the Military Group nor the Aviation Battalion could provide us the location of these individuals. Similarly, according to U.S. contractor personnel, at least 10 pilots-in-command should be available to fly missions. Although 19 Colombian Army personnel were qualified to serve as pilots-in-command on UH-1N helicopters, as of January 2003, only one was assigned to serve in this position; the remaining nine pilots-in-command were provided by the U.S. contractor. Again, neither the Military Group nor the Aviation Battalion could provide us the location of these individuals. 
Of the funds appropriated for fiscal year 2002, $140 million was used to support Colombian Army counternarcotics efforts. Most of this went to support U.S.-provided helicopter operations, maintenance, logistical, and training support. However, not all the funding could be released until the Secretary of State certified, in two separate reports to appropriate congressional committees, that the Colombian military was making progress meeting certain human rights conditions. According to U.S. embassy political section personnel, they encountered difficulties developing the information required to make the human rights determination and certification. Because State was late in providing these reports, the U.S. embassy could not use this funding for operations and training on two occasions for a total of about 5 months during 2002. According to NAS, these delays resulted in fewer counternarcotics operations and limited the training and experience Colombian Army pilots could obtain to qualify as pilots-in-command. U.S. assistance to support the helicopters provided as part of Plan Colombia was originally planned to end in 2006 with the Colombian Army taking over these responsibilities. However, NAS, Military Group, and Colombian Army officials stated that a continued level of U.S. contractor presence will be needed beyond this date because the Aviation Battalion is not expected to have the personnel trained or the resources necessary. Although Military Group officials stated that they have not officially estimated what this assistance level will be, they tentatively projected that it would cost between $100 million and $150 million annually to sustain the U.S.-supported counternarcotics programs. Moreover, other recently initiated U.S. programs will likely require U.S. assistance and contractor support, but the long-term costs of sustaining such programs are not known. 
In 2002, the United States agreed to provide $104 million in training and equipment to Colombian Army units whose primary mission is to protect important infrastructure but whose initial mission is to minimize terrorist attacks along 110 miles of the Cano Limon pipeline in the Arauca department. The units will focus on patrolling, reconnaissance, and immediate reaction in the area of the pipeline and key facilities. Of the $104 million, $6 million is for ongoing U.S. Special Forces training and $98 million is for procuring 2 UH-60 and 4 UH-II helicopters and associated training and ground support. NAS and Military Group officials indicated that some level of contractor support will likely be needed for the foreseeable future because the Colombian Army Aviation Battalion does not have sufficient numbers of trained pilots and mechanics to operate and maintain the helicopters. In 2002, the Colombian military decided to form a Commando Battalion whose mission will be to conduct operations against high-value targets, including the capture of high-level leaders of insurgent and paramilitary units. The United States has agreed to provide the battalion with training and equipment. Although the costs of training are not readily available, Military Group officials estimated that the United States will provide about $5 million in equipment, including weapons and ammunition, communication equipment, night-vision devices, and other individual equipment. Also in early 2003, the United States began assigning U.S. military personnel to selected Colombian military units for up to 179 days. These personnel advise the commander and help plan attacks on drug trafficking and related insurgent targets. Military Group officials did not know when—or if—personnel or funds would be approved for all the planned teams because of other priorities, such as deployments to Afghanistan and Iraq. 
According to Military Group officials, these teams could cost about $8 million annually if all become operational. Since the early 1990s, State’s Bureau for International Narcotics and Law Enforcement Affairs (through the U.S. Embassy Bogotá NAS and the bureau’s Office of Aviation) has supported the Colombian National Police’s efforts to significantly reduce, if not eliminate, the cultivation of coca and opium poppy. However, for the most part, the net hectares of coca under cultivation in Colombia continued to rise until 2002, and the net hectares of opium poppy under cultivation remained relatively steady until 2001-02. In addition, the U.S. Embassy Bogotá has made little progress in having the Colombian National Police assume more responsibility for the aerial eradication program, which requires costly U.S. contractor assistance to carry out. As shown in figure 2, the number of hectares under coca cultivation rose more than threefold from 1995 to 2001—from 50,900 hectares to 169,800 hectares—despite substantially increased eradication efforts. But in 2002, the Office of Aviation estimated that the program eradicated 102,225 hectares of coca—a record high. In March 2003, the Office of National Drug Control Policy reported for the first time since before 1995 a net reduction in coca cultivation in Colombia—from 169,800 hectares to 144,450 hectares—a 15 percent decline. As shown in figure 3, the net hectares of opium poppy under cultivation varied between 6,100 and 6,600 for the period 1995-98 but rose to 7,500 hectares in 1999 and 2000. In 2001, the estimated net hectares of poppy under cultivation declined to 6,500 and, in 2002, further declined to 4,900—nearly a 35 percent reduction in net cultivation over the past 2 years. 
NAS and Office of Aviation officials attributed the recent unprecedented reductions in both coca and poppy cultivation primarily to the current Colombian government’s willingness to allow the aerial eradication program to operate in all areas of the country. They also noted that the number of spray aircraft had increased from 10 in July 2001 to 17; that recently acquired spray aircraft can carry up to twice the herbicide of the older aircraft; and that, as of January 2003, aircraft were flying spray missions from three forward operating locations—a first for the program, according to NAS officials. The ability to keep an increased number of spray aircraft operating out of three bases was made possible, at least in part, because NAS hired a contractor to work with the Colombian National Police to, among other things, help maintain their aircraft. As a result, the availability of the police aircraft needed for the spray program increased. Moreover, in August 2002, the Colombian government allowed the police to return to a higher strength herbicide mixture which, according to NAS officials, improved the spray’s effectiveness. NAS officials project that the aerial eradication program can reduce the amount of coca and poppy cultivation to 30,000 hectares and 5,000 hectares, respectively, by 2005 or 2006, assuming the police continue the current pace and can spray in all areas of Colombia. As we reported in 2000, beginning in 1998, U.S. embassy officials became concerned with the rising U.S. presence in Colombia and the associated costs of the aerial eradication program. At the time, the embassy began developing a 3-year plan to have the Colombian National Police assume increased operational control over the program. But for various reasons, the police never agreed to the plan. Since then, contractor involvement and the associated costs have continued to rise, and the Colombian National Police are not yet able to assume more control of the aerial eradication program. 
As shown in table 2, in fiscal year 1998, the Office of Aviation reported that the direct cost for the U.S. contractor providing aircraft maintenance and logistical support and many of the pilots was $37.8 million. In addition, NAS provided $10.7 million for fuel, herbicide, and related support, for a total of $48.5 million. For fiscal year 2003, the comparable estimates for contractor and NAS-provided support were $41.5 million and $44.8 million, respectively, for a total of $86.3 million. Most of this increase occurred between fiscal years 2002 and 2003 to support the additional spray aircraft, multiple operating locations, and the anticipated continuation of spray operations throughout Colombia. According to NAS and Office of Aviation officials, these costs are expected to remain relatively constant for the next several years. The Colombian National Police do not provide funding per se for the aerial eradication program and, therefore, the value of their contributions is more difficult to quantify. In recent years, the police have provided helicopters and fixed-wing aircraft for spray mission support and the use of many of their facilities throughout Colombia. In addition, the police have about 3,600 personnel assigned to counternarcotics missions and estimate that 84 are directly supporting the aerial eradication program. To help the Colombian National Police increase their capacity to assume more responsibility for the aerial eradication program, NAS has initiated several efforts. In addition to hiring a contractor to help with the Aviation Service’s operations, NAS has initiated a program to train T-65 spray plane pilots and plans to begin training search and rescue personnel so they can accompany the aerial eradication missions. 
NAS officials stated that the contractor presence should decline and the police should be able to take over more of the eradication program by 2006, when NAS estimates that coca and poppy cultivation will be reduced to “maintenance levels”—30,000 hectares and 5,000 hectares, respectively. In February and March 2002, the Office of Aviation conducted an Aviation Resource Management Survey of the Colombian National Police Aviation Service. According to Office of Aviation officials, these surveys are intended to provide a stringent on-site assessment of flight operations, from management and safety to logistics and maintenance. The study noted that the Aviation Service has some unique circumstances that have made its operations difficult to manage. In particular, it grew from 579 personnel in 1995 to 1,232 in 2002 and operates 8 different types of rotary-wing and 9 different types of fixed-wing aircraft. Nevertheless, the team made a number of critical observations. For example, the Aviation Service’s organizational structure, lines of authority, and levels of responsibility were not clear. In most cases, only the commanding general was allowed to commit resources and make operational decisions. This reliance on an overly centralized command structure resulted in unnecessary delays and, NAS officials told us, the cancellation of some planned aerial eradication missions because the commanding general could not be reached. In addition, the Aviation Service did not have a formal flying hour program. A flying hour program is used to forecast budgetary requirements. It takes into account the operational use and training requirements for each aircraft and the various missions it performs and equates each flight hour to a cost average for fuel and spare parts, which constitute the majority of an aviation organization's annual expenses. The lack of a flying hour program has prevented the police from more accurately forecasting budgetary requirements. 
Moreover, according to NAS, maintenance scheduling is enhanced when the number of flight hours can be projected, which contributes to higher aircraft availability rates. About 35 percent of the maintenance staff were inexperienced. According to the survey team, this could result in improper maintenance procedures being performed, which could adversely affect flight safety and endanger lives. In addition, all locations the team visited had deficiencies in standard maintenance procedures and practices. For example, the survey team found that a UH-60 Black Hawk with gunshot damage to a fuel cell was used in several local area flights. While fuel cells are self-sealing to enable an aircraft to return to base for repairs after sustaining damage, aircraft are not supposed to be routinely flown in this condition. Management of items needing repair and control of spare parts were also deficient. The survey team found 236 items awaiting repair—some dating from August 1998. The team also found more than $4 million in UH-1H helicopter blades and parts stored outside and unprotected. Finally, the Aviation Service’s safety program did not have formal risk management practices to ensure that all risk factors—such as weather, crew experience, and mission complexity—are taken into account. In addition, the team observed a majority of helicopter gunners failing to take basic safety precautions, such as ensuring that their machine guns and mini-guns were rendered harmless when personnel were around the aircraft, especially during refueling and rearming operations. To help correct these and other deficiencies, the survey team made numerous recommendations for specific improvements. Overall, the team rated the Aviation Service’s operational and maintenance procedures as poor but concluded that it had an excellent chance for improvement over the next 2 to 3 years due to the dedication of its young officers. 
As a result of the survey, in July 2002, a NAS contractor (under a $38.8 million, 1-year contract with options for 4 additional years) began providing on-the-job maintenance and logistical training to the Aviation Service and helping the police address many of the issues raised by the Aviation Resource Management Survey team. NAS officials noted that a more formal flying hour program has already improved the availability rates of many of the aircraft in the Aviation Service’s inventory. For example, the availability rate of the Aviation Service’s UH-II helicopters—often used to support aerial eradication missions—increased from 67 percent in January 2002 to 87 percent in December 2002. Similar improvements also occurred for other Aviation Service aircraft, such as UH-60 Black Hawk and Bell 212 helicopters. According to NAS, the improved availability rates made it easier to schedule and conduct spray missions. According to NAS officials, the police managed the T-65 pilot program prior to July 2002, but the police repeatedly violated Office of Aviation standard operating procedures by requiring pilots to fly without adequate rest and in poor weather. As a result, NAS took tighter control of the program in April 2003. As currently planned, the program will train 21 Colombian pilots, 4 of whom will eventually be hired to fly the T-65s. The training will enable pilots to fly T-65 spray missions in both flat and mountainous areas. NAS is also planning to initiate a program in mid-2003 to standardize and modernize the police’s search and rescue capabilities. Currently, the Office of Aviation contractor provides all search and rescue coverage for the aerial eradication program. The training will make it possible for the police to provide search and rescue coverage for some spray missions by standardizing its operating procedures to make them compatible with the Office of Aviation’s. 
The program will also allow the police to replace much of its current equipment, which is antiquated or not standard. According to NAS officials, the program should be fully operational in about a year and self-sufficient in about 3 to 5 years. The U.S.-supported counternarcotics program in Colombia has recently begun to achieve some of the results envisioned in 1999-2000. However, Colombia and the United States must continue to deal with financial and management challenges. In addition, Colombia faces continuing challenges associated with its long-standing insurgency. Moreover, for U.S. assistance to continue, Colombia needs to ensure that the army and police comply with human rights standards, that the aerial eradication program meets certain environmental conditions, and that alternative development is provided in areas subject to aerial eradication. In 2000, we noted that the Colombian government had not finalized plans for funding, sequencing, and managing activities included in Plan Colombia and that State and Defense had not completed their implementation plans to support Plan Colombia. We concluded that if Colombia or the United States did not follow through on its portion of Plan Colombia, including identifying sources of funding, Plan Colombia could not succeed as envisioned. Nearly 3 years later, Colombia and the United States still have not defined performance measures or identified specific time frames for completing ongoing counternarcotics programs. After the new Colombian administration was inaugurated in August 2002, it drafted a National Security Strategy to define Colombia’s vital interests, principal threats, and short- and long-term objectives. 
According to State officials, as of April 2003, the National Security Strategy had not been finalized and was being held up while the Colombian military and police complete their strategy for dealing with the insurgents, including reclaiming the insurgent-controlled areas of Colombia and stemming illicit drug activities. As for the United States, we were told that in 2002, the President tasked State to prepare a comprehensive, fully integrated political-military implementation plan to reflect appropriate U.S. support for Colombia’s National Security Strategy. The plan is supposed to include a statement of the overall mission, goals, objectives, performance standards, timelines, measures of effectiveness, and desired end state and outcomes. However, according to State officials, development of this plan has not begun because Colombia has not released its National Security Strategy and the related military and police strategy. Under the original concept of Plan Colombia, the Colombian government pledged $4 billion and called on the international community to provide $3.5 billion. Until recently, Colombia had not provided any significant new funding for Plan Colombia and, according to U.S. embassy and Colombian government officials, anticipated international assistance for Plan Colombia—apart from that provided by the United States—did not materialize as envisioned. But because of overall poor economic conditions, the government of Colombia’s ability to contribute more is limited. Since 1999, a combination of domestic and foreign events has limited Colombia’s economic growth. Domestically, insurgent and paramilitary organizations remained active and derailed the peace process. According to the International Monetary Fund, the insurgency’s threats and attacks displaced thousands of people, hindered investment, affected oil production, and forced the government to increase military expenditures. 
Externally, the price of coffee—a traditionally major Colombian export—reached historically low levels, trade with some neighboring countries fell as their economies underperformed, and foreign private financing to Colombia was limited by the continuing insurgency and political developments in the region during 2002. By mid-2002, Colombian finance officials estimated that Colombia’s economic growth was below 2 percent and its combined public sector deficit would likely exceed 5 percent of gross domestic product. In August 2002, the new Colombian administration announced a series of decrees and proposals to increase defense expenditures and strengthen the overall economy. Initially, the administration issued a decree establishing a one-time tax on wealth that was supposed to raise about $860 million. According to State, about $320 million of this amount would likely be spent on the military. To help maintain this increased revenue, the administration also submitted to the Colombian Congress a package of economic and administrative reforms. Most were approved in December 2002, but some reforms also require approval through a public referendum planned for later in 2003. The overall reform program calls for tax measures to raise revenues and a freeze on most current expenditures for 2 years. In addition, structural reforms, particularly changes in the government pension system and organizational streamlining, are planned to reduce expenditures. However, passage of the reforms subject to referendum is far from certain and, according to U.S. Embassy Bogotá and Colombian government officials, Colombia’s ability to provide additional funding to sustain the counternarcotics programs without a greatly improved economy is virtually nonexistent. The Colombian government has stated that ending the civil conflict is central to solving Colombia’s problems—from improving economic conditions to stemming illicit drug activities. 
A peaceful resolution to the long-standing insurgency would help stabilize the nation, speed economic recovery, help ensure the protection of human rights, and restore the authority and control of the Colombian government in the coca-growing regions. The continuing violence limits the government’s ability to institute economic, social, and political improvements. The Colombian government has stated that it is committed to protecting the human rights of its citizens. State and Defense officials reiterated that they will not assist those who violate the basic tenets of human rights, and State officials said they will apply the strictest human rights standards before approving the provision of assistance to Colombian military and police units. Nevertheless, human rights organizations continue to allege that individuals in the Colombian armed forces have been involved with or condoned human rights violations and that they do so with impunity. If this is the case, Colombia’s failure to adhere to U.S. human rights policies could delay or derail planned counternarcotics activities. The appropriations act for fiscal year 2003 makes $700 million available for Colombia and other Andean ridge countries, but it imposed some restrictions on the availability of 25 percent of the funds provided for the Colombian armed forces until the Secretary of State makes certain certifications. The Secretary of State must certify that Colombia’s armed forces are making progress in meeting human rights standards and, among other things, executing orders to capture paramilitary leaders to lift the restriction on 12.5 percent of the funds. To obligate the remaining 12.5 percent, the Secretary must certify after July 31, 2003, that Colombia continues to make progress in meeting the conditions in the initial certification. 
The appropriations act for fiscal year 2003 also requires that the aerial eradication program meet certain environmental conditions in its use of herbicide and that alternative development programs be available in the areas affected by the spray program. Otherwise, funds provided in the act that are used to purchase herbicide for the aerial eradication program may not be spent. State officials are still trying to determine the ramifications of the restrictions, but State and NAS officials are concerned that these requirements could delay funding needed to purchase herbicide and result in a temporary suspension of the program, making it more difficult for the program to achieve its ambitious goals. Such a suspension would also likely undermine the progress made in 2002 by allowing the coca and poppy farmers to reestablish their fields. The 2003 appropriations act’s environmental conditions require the Secretary of State, after consultation with the Administrator of the Environmental Protection Agency (EPA), to certify that (1) the herbicide mixture is being used in accordance with EPA requirements, the Colombian Environmental Management Plan, and any additional controls that EPA may recommend; (2) the mixture does not pose unreasonable risks or adverse effects to humans or the environment; and (3) complaints of harm to health or licit crops are evaluated and fair compensation is paid for meritorious claims. According to NAS and Office of Aviation officials, similar conditions in the fiscal year 2002 appropriations act almost resulted in a suspension of the aerial eradication program in October 2002 because of delays in finalizing the required reports. The program was able to continue operations by using prior-year funds but, at one point, had only a 10-day supply of herbicide available. 
The 2003 appropriations act’s alternative development conditions require that, in areas where security permits, USAID, the Colombian government, or other organizations implement alternative development programs for small growers whose coca and poppy plants are targeted for spraying. According to State, NAS, and USAID officials, alternative development programs are not being implemented in all the specific areas sprayed because of concerns about physical security and the economic feasibility of implementing such programs in some locations. As of March 31, 2003, USAID reported accrued expenditures of about $51.6 million for alternative development projects and projected that expenditures for April through June 2003 would exceed $13.5 million. USAID officials also said that the agency had 247 alternative development projects benefiting more than 22,800 families in 9 departments where coca or opium poppy are grown. Colombia is a long-time ally and significant trading partner of the United States; therefore, its economic and political stability is important to the United States as well as to the Andean region. Colombia’s long-standing insurgency and the insurgents’ links to the illicit drug trade complicate its efforts to tap its natural resources and make systemic economic reforms. Solving these problems is important to Colombia’s future stability. Colombia and the United States continue to face financial and management challenges in implementing and sustaining counternarcotics and counterinsurgency programs in Colombia. Neither the Colombian Army nor the Colombian National Police has the capacity to manage ongoing counternarcotics programs without continued U.S. funding and contractor support. Colombia’s financial resources are limited and its economy is weak; thus, it will need U.S. assistance for the foreseeable future. According to U.S. 
embassy officials, these programs alone may cost up to $230 million per year, and future costs for some recently initiated army and police programs have not been determined. In addition, we note that this estimate does not include future funding needed for other U.S. programs in Colombia, including other aerial and ground interdiction efforts; the police Aviation Service’s U.S.-funded contractor; and alternative development, judicial sector reform, and internally displaced persons programs. In recent years, world events—from the global war on terrorism to the wars in Afghanistan and Iraq—have diverted scarce U.S. resources and made it paramount that the United States fully consider the resources committed to its overseas assistance programs. As we noted in 2000, the total costs of the counternarcotics programs in Colombia were unknown. Nearly 3 years later, the Departments of State and Defense have still not developed estimates of future program costs, defined their future roles in Colombia, identified a proposed end state, or determined how they plan to achieve it. Because Colombia continues to face serious obstacles in substantially curtailing illicit narcotics activities and resolving its long-standing insurgency, we recommend that the Secretary of State, in consultation with the Secretary of Defense, examine the U.S. assistance programs to the Colombian Army and the Colombian National Police to (1) establish clear objectives for the programs reflecting these obstacles and (2) estimate future annual funding requirements for U.S. support. This analysis should designate specific performance measures for assessing progress, define the roles of U.S. personnel and contractors, and develop a timeline for achieving the stated objectives. The Secretary should provide this information to the Congress for consideration in the fiscal year 2005 appropriations cycle. State and Defense provided written comments on a draft of this report. See appendixes I and II, respectively. 
Both concurred with our recommendation. State said it very much agreed with the overall findings and, in particular, the recognition that continued U.S. programs will be needed for the foreseeable future to sustain operations in Colombia and achieve U.S. foreign policy goals. It further said that the time is appropriate for a comprehensive review of U.S. programs with the Colombian Army and the Colombian National Police and intends to address our recommendation for providing key program information to the Congress beginning in the fiscal year 2005 appropriations cycle. Defense stated that it would work with State to establish clear objectives and would coordinate with State and other agencies involved to develop performance measures. Defense added that, once performance measures are established, it would augment staff at the U.S. Embassy Bogotá Military Group to collect information for measuring progress. To determine the status of U.S. counternarcotics assistance provided to the Colombian Army in fiscal years 2000-03, and how this assistance has been used, we reviewed pertinent planning, implementation, and related documentation and met with cognizant U.S. officials at the Departments of State and Defense, Washington, D.C.; the U.S. Southern Command headquarters, Miami, Florida; and the U.S. Embassy in Bogotá, Colombia. We also met with U.S.-funded contractor representatives at various Colombian Army bases; the Colombian Army Aviation Battalion commander and his staff at Tolemaida; and the counternarcotics brigade commander and his staff at Larandia and Tres Esquinas. In addition, we observed a Colombian Army counternarcotics brigade airlift operation. 
To determine what the U.S.-supported Colombian National Police aerial eradication program has accomplished in recent years, we reviewed pertinent documentation and met with cognizant officials at the Department of State, Bureau for International Narcotics and Law Enforcement Affairs in Washington, D.C., and the Office of Aviation headquarters office at Patrick Air Force Base, Florida. In Colombia, we met with Office of Aviation officials and contractor representatives at the Office of Aviation headquarters office at the El Dorado International Airport in Bogotá; the Colombian National Police base at Guaymaral; and operational sites at Larandia, San Jose del Guaviare, Santa Ana, and Villa Garzon in the primary coca-growing regions of Colombia. We also met with the Colombian National Police deputy commander and other police officials. In addition, we observed several aerial eradication operations—from loading the herbicide and refueling the spray planes to the actual spray missions. To determine what challenges Colombia and the United States face in sustaining these programs, we met with numerous U.S. and Colombian officials to obtain their views on the issues discussed in this report. In Colombia, we interviewed U.S. embassy officials, including the Ambassador; Deputy Chief of Mission; and others from the Narcotics Affairs Section, the Military Group, the U.S. Agency for International Development, and the Drug Enforcement Administration. We also interviewed Colombian Army, police, and other government officials, including officials from the Colombian Ministries of Defense and Finance and Colombia’s National Planning Department. We conducted our work between July 2002 and May 2003 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. 
At that time, we will send copies of this report to the interested congressional committees and the Secretaries of State and Defense. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-4268 or contact me at FordJ@gao.gov. An additional contact and staff acknowledgments are listed in appendix III. In addition to the individual named above, Jocelyn Cortese, Allen Fleener, Ronald Hughes, Jose Pena, George Taylor, Kaya Taylor, and Janey Cohen made key contributions to this report. Rick Barrett and Ernie Jackson provided technical assistance. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. 
To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
The United States has been providing assistance to Colombia since the early 1970s to help reduce illegal drug activities. In fiscal years 2000-03 alone, the United States provided over $2.5 billion. Despite this assistance, Colombia remains the world's leading producer and distributor of cocaine and a major source of the heroin used in the United States. The report discusses the status of U.S. counternarcotics assistance to the Colombian Army and for a U.S.-supported Colombian police aerial eradication program. It also addresses challenges Colombia and the United States face in sustaining these programs. In fiscal years 2000-03, the United States provided about $640 million in assistance to train and equip a Colombian Army counternarcotics brigade and supply the army with 72 helicopters and related support. Nearly all this assistance has been delivered and is being used for counternarcotics operations. However, the Colombian Army cannot operate and maintain the U.S.-provided helicopters at current levels without U.S. support because it does not yet have sufficient numbers of qualified pilots and mechanics. U.S. officials estimate that up to $150 million a year is needed to sustain the ongoing programs. In recent years, the Colombian National Police aerial eradication program has had mixed results. Since 1995, coca cultivation rose in every year until 2002 and opium poppy cultivation remained relatively steady until 2001. But, for 2002, the U.S. Office of National Drug Control Policy reported that net coca cultivation in Colombia decreased 15 percent, and net opium poppy cultivation decreased 25 percent--the second yearly decline in a row. U.S. officials attributed this success primarily to the Colombian government's willingness to spray coca and poppy plants without restriction. These officials estimate that about $80 million a year is needed to continue the program at its current pace. 
Although the U.S.-backed counternarcotics program in Colombia has begun to achieve some of the results originally envisioned, Colombia and the United States must deal with financial and management challenges. As GAO noted in 2000, the total costs of the counternarcotics programs in Colombia were unknown. Nearly 3 years later, the Departments of State and Defense have still not developed estimates of future program costs, defined their future roles in Colombia, identified a proposed end state, or determined how they plan to achieve it. Colombia's ability to contribute more is limited, and it continues to face challenges associated with its long-standing insurgency and the need to ensure it complies with human rights standards and other requirements in order for U.S. assistance to continue.
Congress created FDIC in 1933 to restore and maintain public confidence in the nation’s banking system. In 1989 the Financial Institutions Reform, Recovery, and Enforcement Act was enacted to reform, recapitalize, and consolidate the federal deposit insurance system. It created the Bank Insurance Fund and the Savings Association Insurance Fund, which are responsible for protecting insured bank and thrift depositors, respectively, from loss due to institution failures. The act also created the FSLIC Resolution Fund to finalize the affairs of the former FSLIC and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. It also designated FDIC as the administrator of these funds. As part of this function FDIC has an examination and supervision program to monitor the safety of deposits held in member institutions. FDIC insures deposits in excess of $3.2 trillion for about 10,000 institutions. Together the three funds have about $49 billion in assets. FDIC had a budget of about $1.2 billion for calendar year 2001 to support its activities in managing the three funds. For that year, it processed more than 2.7 million financial transactions. FDIC relies extensively on computerized systems to support its financial operations and store the sensitive information it collects. These systems are interconnected by FDIC’s local and wide area networks. To support its financial management functions, it relies on several financial systems to process and track financial transactions that include premiums paid by its member institutions and disbursements made to support operations. In addition, FDIC supports other systems that maintain personnel information on its employees, examination data on selected financial institutions, and legal information on closed institutions. At the time of our review, there were about 5,400 authorized users on FDIC’s systems. 
Our objective was to evaluate the effectiveness of information systems general controls over the financial systems maintained and operated by FDIC during our 2001 financial statement audits. These information systems controls also affect the security and reliability of other sensitive data, including personnel, legal, and bank examination information maintained on the same computer systems as the corporation’s financial information. Specifically, we evaluated information systems controls intended to protect data and application programs from unauthorized access; prevent the introduction of unauthorized changes to application and system software; provide segregation of duties involving application programming, system programming, computer operations, information security, and quality assurance; ensure recovery of computer processing operations in case of disaster or other unexpected interruption; and ensure an adequate information security management program. To evaluate these controls, we identified and reviewed FDIC’s policies and procedures, conducted tests and observations of controls in operation, and held discussions with FDIC staff to determine whether information systems controls were in place, adequately designed, and operating effectively. In addition, we reviewed corrective actions taken by FDIC to address vulnerabilities identified in our calendar year 2000 audit. Our evaluation was based on (1) our Federal Information System Controls Audit Manual, which contains guidance for reviewing information systems controls that affect the integrity, confidentiality, and availability of computerized data; and (2) our May 1998 report on security management best practices at leading organizations, which identifies key elements of an effective information security program. We performed our work at FDIC from October 2001 through April 2002. Our work was performed in accordance with generally accepted government auditing standards. 
In our audit of FDIC’s calendar year 2001 financial statements, we found that FDIC made progress in correcting previously identified weaknesses. For instance, in our 2000 financial statement audits, we determined that FDIC had not adequately limited access of authorized users, restricted physical access to computer facilities, performed comprehensive tests of the disaster recovery plan, implemented a computer security incident response process, established a security awareness program, developed security plans, and performed independent security reviews. These weaknesses placed critical corporation operations, such as financial management, personnel, and other operations, at greater risk of misuse and disruption. Except for actions still needed to fully implement a computer security management program, which are discussed later in this report, FDIC made progress in addressing our previously reported computer security weaknesses. For example, in our 2001 audits, we found that FDIC has limited the access of its system programmers and security staff to certain sensitive resources; developed corporate access authorization procedures; restricted modem connections and the use of generic log-on IDs; improved physical security at its computer center by limiting access through the adjoining FDIC hotel; developed and performed tests of its computer center disaster recovery plans, including its network and designated remote facilities, to provide backup support for the corporation’s network and other operations; established a computer security awareness program for its employees; developed security plans for its general support systems; and implemented a requirement and process for independent security reviews to be performed at least every 3 years. In addition to correcting previously identified weaknesses, FDIC initiated other steps to improve computer security. 
These efforts included (1) reviews of system software, (2) improvements in physical security, including the use of a guard service to provide security surveillance of its computer rooms, (3) completion of management authorizations for major financial applications and general support systems, and (4) assessments of the sensitivity of corporate data to determine the level of security needed to protect it. However, we found additional control weaknesses in FDIC’s information systems in connection with our calendar year 2001 financial statement audits. Specifically, FDIC has not adequately limited access to data and programs by controlling mainframe access authority, providing sufficient network security, or establishing a comprehensive program to monitor access activities. Other information system control weaknesses were also identified that could likewise hinder FDIC’s ability to provide adequate physical security for its computer facility, appropriately segregate computer functions, effectively control system software changes, or ensure continuity of operations. Consequently, financial and personnel programs and data maintained by FDIC are at risk of inadvertent or deliberate misuse, fraudulent use, and unauthorized alteration or destruction, any of which may occur without detection. The following sections summarize the results of our review. A separate report designated for “Limited Official Use Only” details specific weaknesses in information systems controls that we identified, provides our recommendations for correcting each weakness, and indicates FDIC’s planned actions or those already taken for each weakness. An evaluation of the adequacy of this action plan will be part of our planned work at FDIC. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modifications, disclosure, or deletion. 
Organizations can protect this critical information by granting employees the authority to read or modify only those programs and data that they need to perform their duties and by periodically reviewing access granted to ensure that it is appropriate. In addition, effective network security controls should be established to authenticate local and remote users and include a program to monitor the access activities of the network and mainframe systems. Although progress was made in limiting access, FDIC’s information systems controls were not adequately protecting financial and sensitive information. Specifically, FDIC had not appropriately limited mainframe access authority, sufficiently secured its network, or established a comprehensive program to monitor access activities. These weaknesses place the corporation’s information systems at risk of unauthorized access, which could lead to the improper disclosure, modification, or deletion of sensitive information and the disruption of critical operations. Effective mainframe access controls should be designed to prevent, limit, and detect access to computer programs and data. These controls include access rights and permissions, system software controls, and software library management. While FDIC restricted access to many users who previously had broad access to critical programs, software, and data, we identified instances in which the corporation had not sufficiently restricted access to legitimate users. A key weakness in FDIC’s controls was that its data center did not sufficiently restrict user access, as described below. Hundreds of users had access privileges that allowed them to modify financial software and read, modify, or copy financial data. This risk was further heightened because the corporation was not actively monitoring the access activities of these users. Many users had unnecessary access to powerful commands. 
About 55 users had access to a specific transaction command that could be used to circumvent the security of sensitive FDIC information, including its bank examination files. These users included 26 help-desk employees and 14 database staff, users who do not need this access to perform their daily job functions. About 15 users outside of the system programming function had access privileges to a sensitive system software library that can perform system functions capable of circumventing all security controls. Such access increases the risk that users can bypass security controls to alter or delete any computer data or programs on the system. Typically, such access privileges are limited to system programmers. About 30 users had access to powerful operator commands that could be used to circumvent system security or compromise the operational integrity of the system. Prior to the completion of our work, the acting CIO told us that this access privilege had been removed for these users. One reason for FDIC’s user access vulnerabilities was that not all access authority granted based on job responsibility was being collectively reviewed. Instead, individual access privileges were reviewed by data owners, but only to determine the appropriateness of each user’s access to a data owner’s resource. As a result, there was no comprehensive review to determine the appropriateness of all access granted to any one user. Such reviews would have allowed FDIC to identify and correct inappropriate access. FDIC said that it was reviewing staff access and would limit this access to that required to carry out job responsibilities. Further, the corporation plans to develop and implement procedures to comprehensively review all access granted and ensure that access remains appropriate. Network security controls are key to ensuring that only authorized individuals gain access to sensitive and critical agency data. 
These controls include a variety of tools such as user passwords, intended to authenticate authorized users who access the network from local and remote locations. In addition, network controls provide safeguards to ensure that the system software is adequately configured to prevent users from bypassing network access controls or causing network failures. The risks introduced by the weaknesses we identified in access controls were compounded by network security weaknesses. While FDIC had taken major steps to secure its network through the installation of a firewall and other security measures, weaknesses in the way the corporation configured its network servers, managed user IDs and passwords, provided network services, and secured its network connectivity were nonetheless still present. As a result, financial information processed on the network is at increased risk that unauthorized modification or disclosure could occur without detection. Because of FDIC’s interconnected environment, these network control weaknesses also increase the risk of unauthorized access to financial and sensitive information (such as bank examination, personnel, and financial management information) maintained on the FDIC mainframe computer. For example: One system had default accounts that were not removed during installation of remote access software. Information on default settings and passwords is available in vendor-supplied manuals, which are available to hackers. Other systems had dormant accounts that could be used by hackers with a lower risk of detection. The network had system software configuration weaknesses that could allow users to bypass access controls and gain unauthorized access to FDIC’s networks or cause network system failures. For instance, certain network system configuration settings allowed unauthorized users to connect to the network without entering a valid user ID and password combination. 
This could allow unauthorized individuals to obtain access to system information describing the network environment, including user IDs and password information. Potentially dangerous services were available on several network systems. Because of the availability of these services, a greater risk exists that an unauthorized user could exploit them to gain high-level access to the system and applications, obtain information about the system, or deny system services. Further, FDIC did not have a process in place to actively review the network connections maintained by its contractors to ensure that only authorized network access paths were being used. Such network security weaknesses increase the risk that those with malicious intent could misuse, improperly disclose, or destroy financial and other sensitive information. In response to our findings, FDIC’s acting CIO said that the corporation had developed and implemented policies and procedures to periodically review (1) user accounts on all servers to ensure that they are required and appropriately used, (2) system configuration settings for vulnerabilities, and (3) services used on the network to ensure that only those that are needed are maintained. She further said that FDIC had taken steps to tighten network security for its contractor connections and was in the process of reviewing all new contractor connections to the network to ensure appropriate access. The risks created by these access control problems were heightened because FDIC did not fully establish a comprehensive program to monitor user access. A monitoring program is essential to ensuring that unauthorized attempts to access critical program and data are detected and investigated. Such a program would include routinely reviewing user access activity and investigating failed attempts to access sensitive data and resources, as well as unusual and suspicious patterns of successful access to sensitive data and resources. 
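A monitoring program of the kind described above can be illustrated with a minimal sketch. The audit-trail record layout, resource names, and failure threshold below are assumptions chosen for illustration; they are not FDIC's actual systems or data:

```python
from collections import Counter

# Hypothetical names for sensitive resources; a real deployment would
# key these to the installation's own sensitive data sets.
SENSITIVE = {"bank_exam_files", "payroll_master", "router_config"}

def flag_suspicious(records, fail_threshold=3):
    """Flag repeated failed attempts on sensitive resources, and all
    successful access to them, for follow-up investigation."""
    failures = Counter()
    findings = []
    for user, resource, action, outcome in records:
        if resource not in SENSITIVE:
            continue  # selectively target sensitive resources only
        if outcome == "DENIED":
            failures[user] += 1
            if failures[user] == fail_threshold:
                findings.append((user, "repeated failed access"))
        elif outcome == "GRANTED":
            findings.append((user, "successful access to " + resource))
    return findings

# Illustrative audit-trail records: (user, resource, action, outcome).
log = [
    ("jdoe", "bank_exam_files", "read", "DENIED"),
    ("jdoe", "bank_exam_files", "read", "DENIED"),
    ("jdoe", "bank_exam_files", "read", "DENIED"),
    ("asmith", "payroll_master", "write", "GRANTED"),
]
for user, reason in flag_suspicious(log):
    print(user, "-", reason)
```

The sketch reflects the report's point that security logs are usually too voluminous to review in full, so effective monitoring filters for a small set of targeted conditions rather than reviewing every event.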
Such a program is critical to ensuring that improper access to sensitive information is detected. To effectively monitor user access, logs of user activity must be maintained for all critical system processing activities. This includes collecting and monitoring access activities on all critical systems, including mainframes, network servers, and routers. Because the volume of security information is likely to be too large to review routinely, the most effective monitoring techniques selectively target specific actions. These efforts should include provisions to identify unusual activities, such as changes to sensitive system files that were not made by system programmers, or updates to security files that were not made by security staff. A comprehensive monitoring program should, further, include an intrusion-detection system to automatically log unusual activity, provide necessary alerts, and terminate sessions when necessary. While FDIC logged access activity for many of its systems and developed programs to target unusual or suspicious activities, it did not take sufficient steps to ensure that it was recording or monitoring the access activities of all key systems, including the following: Special system services on the FDIC mainframe were not being logged because the audit trail that records the access activity was not enabled. As a consequence, adverse access events that could potentially disrupt system operations or make information systems unavailable to the corporation may go undetected. Logging was not enabled to monitor successful or unsuccessful attempts to access sensitive router and switch configuration files on the network. Unauthorized access to these resources could enable an intruder or unauthorized user to read or modify configuration files containing security settings such as router passwords, user names, or access control listings. 
With the ability to read or write to these files, a malicious user could seriously disable or disrupt network operations by taking control of the routers and switches. While FDIC has installed and implemented a network-based intrusion-detection system to monitor for unusual or suspicious access activities, it has not yet configured the host-based system parameters so that notifications (such as e-mail and/or pager) are sent to the computer security incident response team. FDIC is in the process of testing the host-based system to determine the most appropriate parameter configuration. Without full implementation of such a system and more effective logging and monitoring of system access activities, FDIC reduces its ability to identify and investigate unusual or suspicious access to its financial and sensitive information. According to the acting CIO, the corporation has implemented security reporting for its test environment. In addition, it established procedures to provide for system logging and review of these logs for unusual or suspicious activities. Further, FDIC plans to have its intrusion-detection system fully implemented by July 31 of this year. In addition to the information system access controls discussed, other important controls should be in place to ensure the integrity and reliability of an organization’s data. These controls include policies, procedures, and control techniques to physically protect computer resources and restrict access to sensitive information, provide appropriate segregation of duties of computer personnel, prevent unauthorized changes to system software, and ensure the continuation of computer processing operations in case of disaster. FDIC had weaknesses in each of these areas. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. 
These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which they are housed and periodically reviewing access granted to ensure that it continues to be appropriate based on criteria established for granting such access. At FDIC, physical access control measures (such as guards, badges, and alarms, used alone or in combination) are vital to safeguarding critical financial and sensitive personnel and banking information and computer operations from internal and external threats. Although FDIC took measures to improve its physical perimeter security and access to its computer rooms, its process for granting and reviewing physical access to the computer center is not adequately controlled. For example, there were instances in which records of access granted to staff were not available. Further, staff who no longer required access to the computer center still retained such access. This included personnel who (1) had transferred out of computer operations, (2) no longer worked for FDIC, or (3) never or rarely visited the computer room. FDIC has neither established criteria for granting physical access to its computer center, nor developed procedures to periodically review staff access to determine continued need. Without adequate criteria and periodic review, FDIC increases the risk of unauthorized access to the corporation’s systems and disruption of services. At our request, FDIC reviewed its list of staff with access to the computer center, reducing the number of authorized staff from 270 to 227. Specifically, it determined that it had no record of access granted to 18 staff, and that access was no longer needed by 25 individuals. 
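The periodic reconciliation described above, comparing the access roster against current personnel data, can be sketched in a few lines. The names, record fields, and organizational units below are hypothetical illustrations, not FDIC's actual data:

```python
# Reconcile a computer-center access list against personnel records,
# flagging badge holders whose access should be re-examined.
def review_physical_access(access_list, hr_records):
    """Return people with no record of an access grant, who have
    separated, or who have transferred out of computer operations."""
    to_review = []
    for person in access_list:
        record = hr_records.get(person)
        if record is None:
            to_review.append((person, "no record of access grant"))
        elif not record["active"]:
            to_review.append((person, "no longer employed"))
        elif record["unit"] != "computer operations":
            to_review.append((person, "transferred out of computer operations"))
    return to_review

# Hypothetical personnel records keyed by name.
hr = {
    "alice": {"active": True, "unit": "computer operations"},
    "bob": {"active": False, "unit": "computer operations"},
    "carol": {"active": True, "unit": "examinations"},
}
for person, reason in review_physical_access(["alice", "bob", "carol", "dave"], hr):
    print(person, "-", reason)
```

The design point is the one the report makes: access reviews only work when there are explicit criteria for who should hold access and a routine process that compares the roster against current job assignments, rather than reviewing grants one owner at a time.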
According to the acting CIO, the corporation has revised its computer center access procedures to include criteria for granting and retaining access to the center, and established other procedures to provide access to information on employee reassignments and other actions that could affect the need for access to the computer center. Further, she said, the corporation has developed reports on employee access activities to further assist it in monitoring physical access to the computer center. Another fundamental technique for safeguarding programs and data is to segregate the duties and responsibilities of computer personnel to reduce the risk that errors or fraud will occur and go undetected. Incompatible duties that should be separated include application and system programming, production control, database administration, computer operations, and data security. Once policies and job descriptions supporting the principles of segregation of duties have been developed, it is important to ensure that adequate supervision is provided or mitigating controls established to provide the necessary monitoring and oversight to ensure that employees perform only those tasks that have been authorized for their job functions. Although computer duties are generally properly segregated at FDIC, we identified instances in which duties were not adequately segregated. For example, 24 application developers were authorized to make modifications to financial programs and data that were in production. Typically, developer access is limited to program code in the development environment. While it may be appropriate at times to grant developers access to both production programs and data, it should only be done when mitigating controls have been established. However, the corporation had not established mitigating controls, such as logging and monitoring system access activities of the developers to ensure that they were performing only authorized actions. 
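One form of the mitigating control described above is to log every change developers make in production and check each change against an approved ticket. This is a minimal sketch under assumed data; the ticket identifiers, user names, and file names are hypothetical, not FDIC's change-control system:

```python
# Approved (developer, change ticket) pairs; in practice this would
# come from the change-management system of record.
APPROVED_CHANGES = {("dev1", "GL-2041")}

def unauthorized_production_changes(change_log):
    """Return production modifications that lack an approved ticket."""
    return [
        entry for entry in change_log
        if (entry["user"], entry["ticket"]) not in APPROVED_CHANGES
    ]

# Illustrative production change log.
change_log = [
    {"user": "dev1", "ticket": "GL-2041", "object": "payables.pgm"},
    {"user": "dev2", "ticket": None, "object": "ledger.dat"},
]
for entry in unauthorized_production_changes(change_log):
    print(entry["user"], "modified", entry["object"], "without approval")
```

A check of this kind does not remove the developers' production access; it compensates for it by making every unapproved modification visible to an independent reviewer.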
Similarly, FDIC assigned two staff members to monitor and review the access activities on its production platforms; they were also authorized to make changes to programs and data that they were responsible for reviewing. Yet, FDIC did not provide supervisory oversight or establish other mitigating controls to ensure that these staff members performed only authorized functions. Because adequate mitigating controls had not been established in either instance, the risk is increased that FDIC financial or other sensitive information could be inadvertently or intentionally modified, or unauthorized transactions processed. FDIC plans to enhance its system monitoring of developers by targeting logging and monitoring activities to sensitive production data and programs by December 31 of this year. Further, FDIC will augment its monitoring and review of access to its production environment by designating a security person to independently review these activities. A standard information systems control practice is to ensure that only authorized and fully tested system software or related modifications are placed in operation. To ensure that newly developed system software or changes are needed, work as intended, and do not result in the loss of data and program integrity, the system software or changes should be documented, authorized, tested, and independently reviewed. Strong security practices provide that a structured approach be used to control the development, review, and approval of system software exits. This process includes requirements for documenting the purpose of the exit, performing a technical review of the software, and approving the implementation of this software. System software exits are used to provide installations with additional processing capabilities. These exits increase the risk of integrity exposures, since the code is usually implemented with authorized privileges that allow it to bypass security and gain access to financial programs or data. 
However, we identified weaknesses in the system software development and change control process at FDIC. System software exits developed by FDIC were not adequately controlled. None of the nine locally developed system software exits maintained by FDIC were documented to reflect their purpose. Further, there was no documented evidence of review by technical management or formal approval for these exits. FDIC did not develop procedures for documenting, reviewing, or approving locally developed system software exits. Without a formally documented review and approval process, an increased risk exists that the exit will not work as intended, and could result in the loss of data or program integrity. In addition, although FDIC established a process for system software change control and used an automated system to document changes, it did not establish procedures for performing and approving tests of system software changes or develop minimum documentation requirements for tests performed. In a sample of 20 system software changes reviewed, none had documentation of the tests performed or evidence that tests performed had been approved. As a result, the risk increases that unauthorized or inadequately tested system software could be placed into operation. FDIC’s acting CIO said that the corporation would develop a process for documenting, reviewing, and approving locally developed system software exits. Further, the corporation plans to revise its requirements for documenting system software changes, provide specific requirements for testing these changes, and establish a process, by August 31 of this year, to ensure compliance. An organization must take steps to ensure that it is adequately prepared to cope with the loss of operational capability due to earthquake, fire, accident, sabotage, or any other disruption. 
An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested service continuity plan covering all key computer operations, and including plans for business continuity. Such a plan is critical for helping to ensure that information system operations and data, such as financial processing and related records, can be promptly restored in the event of a disaster. To ensure that it is complete and fully understood by all key staff, the service continuity plan should be tested, including surprise tests, and the test plans and results documented to provide a basis for improvement. In addition, backup sites should be reviewed and selected on the basis of their ability to provide assurance that an organization will be able to maintain continuity of operations. While FDIC has updated and conducted tests of its service continuity plan, improvements are still needed in some areas. Service continuity weaknesses include the following:

- The lack of unannounced tests or walk-throughs of its service continuity plan. Instead, all tests have been planned, with participants fully aware of the disaster recovery scenario. In an actual disaster, of course, there is usually little or no warning.
- The lack of a business continuity plan for all its facilities. While FDIC has implemented a plan for its Washington, D.C., facility, it has yet to implement similar plans for its suburban computer center and eight regional offices.
- The potential unavailability of one of FDIC’s designated computer backup facilities. This facility is in an area that could have limited accessibility in an event like September 11, 2001.

FDIC plans to develop and implement procedures for performing unannounced walk-throughs of its disaster recovery plan by September 30, 2002, and conduct and complete tests of its business recovery plans by December 31, 2002. Further, FDIC has moved all disaster recovery hardware and software from Washington, D.C., to a regional office. 
A key reason for FDIC’s continuing weaknesses in information systems controls is that it has not yet fully developed and implemented a comprehensive security management program to ensure that effective controls are established and maintained, and that computer security receives adequate attention. Our May 1998 study of security management best practices determined that a comprehensive computer security management program is essential to ensuring that information system controls work effectively on a continuing basis. Specifically, an effective computer security management program includes establishing a central security management structure with clearly delineated security roles and responsibilities; performing periodic risk assessments; establishing appropriate policies, procedures, and technical standards; raising security awareness; and establishing an ongoing program of tests and evaluations of the effectiveness of policies and controls. FDIC has taken action related to each of the key elements described above, including the implementation of a comprehensive security awareness program for all its employees. However, aside from security awareness, the steps taken to address the other key elements of a comprehensive computer security management program were not sufficient to ensure continuing success. The first key element of effective computer security management is the establishment of a central security group with clearly defined roles and responsibilities. This group provides overall security policy and guidance, along with the oversight to ensure compliance with established policies and procedures; further, it reviews the effectiveness of the security environment. The central security group often is supplemented by individual security staff designated to assist in the implementation and management of the organization’s security program. 
To ensure the effectiveness of the security program, clearly defined roles and responsibilities for all security staff should be established, and coordination responsibilities between individual security staff and central security should be developed. While FDIC has established a central security function and is in the process of designating information security managers for each of its divisions, it has not clearly defined these managers’ roles and responsibilities. Further, FDIC has not established guidance to ensure that these managers coordinate and collaborate with the central security function in addressing security-related issues. Without a formally defined and coordinated program, FDIC’s computer security program risks fragmentation and the lack of a corporate focus, which is needed to adequately secure its highly interconnected computer environment. The second key aspect of computer security management is periodic risk assessment. Regular risk assessments assist management in making decisions on necessary controls by helping to ensure that security resources are effectively distributed to minimize potential loss. And, by increasing awareness of risks, these assessments generate support for the adopted policies and controls, which help ensure that the policies and controls operate as intended. Further, Office of Management and Budget Circular A-130, appendix III, prescribes that risk be assessed when significant changes are made to the system or at least every 3 years. FDIC has not yet fully implemented a risk assessment process. While it requires a risk-based approach to security management, to date it has focused on conducting independent security reviews of its key applications and general support systems. 
However, these reviews do not address certain key elements for managing risk, such as identifying, analyzing, and understanding the threats to the computer environment; determining business impact when risks are exploited; and mitigating risks in a cost-effective manner. Also, FDIC has not developed a complete framework for assessing risk when significant changes are made to a facility or its computer systems. During the past year, FDIC replaced its mainframe hardware and upgraded its mainframe operating system. Either of these changes could have introduced new vulnerabilities into FDIC’s computer system, thus warranting a risk assessment. A third key element of effective security management is having established policies, procedures, and technical standards governing a complete computer security program. Such policies and procedures should integrate all security aspects of an organization’s interconnected environment, including local area network, wide area network, and mainframe security. In addition, technical security standards are needed to provide a consistent control framework for each computer environment. The integration of network and mainframe security is particularly important as computer systems become more interconnected. FDIC has completed security plans for its general support systems and major financial applications. It has also developed and implemented overall security policies and procedures for its computer environment. While it has established technical security standards for several of its network platforms and its mainframe security software, it has not developed technical security standards for implementing network routers and maintaining operating system integrity on its mainframe system. Such standards would not only help ensure that appropriate computer controls are established consistently for these systems, but would also facilitate periodic reviews of the controls. 
A fourth key area of security management is promoting security awareness. Computer attacks and security breakdowns often occur because computer users fail to take appropriate security measures. For this reason, it is vital that employees who use computer systems in their day-to-day operations be aware of the importance and sensitivity of the information they handle, as well as the business and legal reasons for maintaining confidentiality and integrity. In accepting responsibility for security, employees should, for example, devise effective passwords, change them frequently, and protect them from disclosure. In addition, employees should help maintain physical security over their assigned areas. FDIC has established a comprehensive security awareness program for all employees. Specifically, it developed a computer-based security awareness program that all employees were required to complete annually. FDIC has also established procedures to monitor compliance with this requirement. The final key area of an overall computer security management program is an ongoing program of tests and evaluations of the effectiveness of policies and controls. Such a program includes processes for (1) monitoring compliance with established information system control policies and procedures, (2) testing the effectiveness of information system controls, and (3) improving information system controls based on the results of these activities. While FDIC established an independent security program to review compliance with application and general support system security plans on a 3-year cycle, it has not established a program to routinely monitor and test the effectiveness of information systems controls. Such a program would allow FDIC to ensure that policies remain appropriate and that controls accomplish their intended purpose. Ongoing monitoring is particularly important. 
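One form of such monitoring is to compare the access each user actually holds against the access the user's job requires. The sketch below does this with hypothetical job profiles and user records; it is an illustration of the technique, not FDIC's access model:

```python
# Sketch of a least-privilege compliance check: compare the access each user
# holds against the access profile defined for the user's job, and report
# anything beyond the profile. Job profiles and user records are hypothetical.

JOB_PROFILES = {
    "application_developer": {"dev_source_read", "dev_source_write"},
    "computer_operator": {"console_access", "job_scheduling"},
}

def excess_access(users):
    """Return {user: excess_roles} for users holding access beyond their job profile."""
    excess = {}
    for user, info in users.items():
        allowed = JOB_PROFILES.get(info["job"], set())
        extra = info["access"] - allowed
        if extra:
            excess[user] = extra
    return excess

users = {
    "dev01": {"job": "application_developer",
              "access": {"dev_source_read", "dev_source_write", "production_write"}},
    "ops01": {"job": "computer_operator",
              "access": {"console_access"}},
}

print(excess_access(users))
```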
Weaknesses discussed in this report could have been identified and corrected if the corporation had been monitoring compliance with established procedures. For example, if FDIC had a process to review all access authority granted to each user to ensure that the access was limited to that needed to complete job responsibilities, it would have been able to discover and limit the inappropriate access authority granted to hundreds of users, as discussed in this report. A program to regularly test information systems controls would also have allowed FDIC to detect additional network security weaknesses. For example, using network analysis software designed to detect network vulnerabilities, we identified user accounts and services that could provide hackers with information to exploit the network and launch an attack on FDIC systems. Corporation staff could have identified this exposure using similar network analysis software already available to them. In response, FDIC’s acting CIO said that the corporation would develop policies and procedures to define the roles and responsibilities of its information security managers. These procedures would include requirements for coordinating security activities with the central security function. In addition, the corporation is updating its risk management directive to address the need to perform periodic risk assessments and to conduct these assessments when significant changes occur. FDIC also intends to develop and implement technical security standards for its mainframe operating system and network routers. In addition, it expects to develop and implement an ongoing security oversight program to include provisions for monitoring compliance with established procedures and testing the effectiveness of the corporation’s controls. All of these initiatives are expected to be completed no later than December 31 of this year. 
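The kind of network analysis software referred to above typically works by probing which services answer on a host. A minimal approximation is a TCP connect scan; the target address and ports below are placeholders, and a scan like this should be run only against systems one is authorized to test:

```python
# Minimal sketch of the network analysis described: a TCP connect scan that
# reports which services answer on a host. The target and ports are
# placeholders; run only against systems you are authorized to test.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports on which a TCP connection succeeds."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    # Common service ports an auditor might probe on an internal host.
    print(open_ports("127.0.0.1", [21, 22, 23, 80, 443]))
```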
While FDIC has made progress in correcting previously identified computer security weaknesses, additional ones have been identified in its information systems control environment. Specifically, FDIC had not appropriately limited user access authority, sufficiently secured its network, or established a program to monitor access activity. Also, FDIC was not adequately providing physical security, segregating computer duties, controlling system software, or ensuring that all aspects of its service continuity needs were addressed. Such weaknesses place sensitive FDIC information at risk of disclosure, financial operations at risk of disruption, and assets at risk of loss. A primary reason for FDIC’s information systems control problems is that it has not yet fully implemented a comprehensive program to manage computer security. While FDIC has clearly taken steps in many of these areas, more remains to be done. A comprehensive program for computer security management is essential for achieving an effective information system general control environment. Effective implementation of such a program provides for (1) periodically assessing risks; (2) implementing effective controls for restricting access based on job requirements and proactively reviewing access activities; (3) communicating the established policies and controls to those who are responsible for their implementation; and, perhaps most important, (4) evaluating the effectiveness of policies and controls to ensure that they remain appropriate and accomplish their intended purpose. To establish an effective information systems control environment, we recommend that you instruct the acting CIO, as the corporation’s key official responsible for computer security, to ensure that the following actions are completed. 
Correct the information systems control weaknesses related to access authority, network security, access monitoring, physical access, segregation of duties, system software, service continuity, and security management. These specific weaknesses are described in a separate report designated for “Limited Official Use Only,” also issued today. Fully develop and implement a computer security management program. Specifically, this would include (1) establishing clearly defined roles and responsibilities for FDIC’s information security managers and guidance for coordinating and collaborating with central security, (2) developing a program for performing periodic risk assessments to determine computer security needs, (3) developing and implementing technical security standards for all computer platforms, and (4) establishing an ongoing program of tests and evaluations to ensure that policies and controls are appropriate and effective. In addition, we recommend that you instruct the acting CIO to report periodically to you, or your designee, on progress in implementing FDIC’s corrective action plans. In providing written comments on a draft of this report, the Acting Chief Financial Officer of FDIC agreed with our recommendations. His comments are reprinted in appendix I of this report. He reported that significant progress has already been made in addressing the weaknesses identified. Specifically, FDIC plans to correct the information systems control weaknesses related to access authority, network security, access monitoring, physical access, segregation of duties, system software, service continuity, and security management by December 31, 2002. 
We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member of the House Committee on Financial Services; the members of the FDIC Audit Committee; officials in FDIC’s divisions of information resources management, administration, and finance; and the FDIC inspector general. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-3317 or David W. Irvin, assistant director, at (214) 777-5716. We can also be reached by e-mail at daceyr@gao.gov and irvind@gao.gov, respectively. Key contributors to this report are listed in appendix II. In addition to the person named above, Edward Alexander, Gerald Barnes, Nicole Carpenter, Lon Chin, West Coile, Debra Conner, Kristi Dorsey, Denise Fitzpatrick, Edward Glagola, Brian Howe, Jeffrey Knott, Harold Lewis, Suzanne Lightman, Duc Ngo, Tracy Pierson, Rosanna Villa, and Charles Vrabel made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. 
GAO reviewed information systems general controls in the calendar year 2001 financial statement audits of the Federal Deposit Insurance Corporation's (FDIC) Bank Insurance Fund, Savings Association Insurance Fund, and Federal Savings and Loan Insurance Corporation Resolution Fund. FDIC made progress in correcting information security weaknesses previously identified and has taken steps to improve security. Nevertheless, GAO identified new weaknesses in its information systems controls that affect the corporation's ability to safeguard electronic access to critical financial and other sensitive information. FDIC did not adequately limit access to data and programs by controlling mainframe access authority, providing sufficient network security, or establishing a comprehensive program to monitor access activities. Further, other information systems control weaknesses were identified that could hinder FDIC's ability to provide physical security for its computer facility, appropriate segregation of computer functions, effective control of system software changes, or continuity of operations.
HUD is the principal federal agency responsible for programs dealing with housing, community development, and fair housing opportunities. Its mission includes making housing affordable through the Federal Housing Administration’s (FHA) mortgage insurance for multifamily housing, providing rental assistance for about 4.5 million lower-income residents, helping to revitalize over 4,000 localities through community development programs, and encouraging home ownership by providing mortgage insurance. HUD is one of the nation’s largest financial institutions, responsible for managing more than a reported $454 billion in mortgage insurance and, as of September 30, 1997, a reported $531 billion in guarantees of mortgage-backed securities. For fiscal year 1998, the agency’s budget authority was about $24 billion, and its information technology budget was $222 million. HUD’s major program areas are managed by the Office of Housing, which includes FHA’s insurance and project-based rental assistance programs; the Office of Community Planning and Development, which includes programs for Community Development Block Grants, empowerment zones/enterprise communities, and assistance for the homeless; the Office of Public and Indian Housing, which provides funds to help operate and modernize public and Indian housing and administers tenant-based rental assistance programs; and the Office of Fair Housing and Equal Opportunity, which is responsible for investigating complaints and ensuring compliance with fair housing laws. In 1984, we reported that HUD lacked adequate information and financial management systems necessary to ensure accountability for, and control over, departmental programs. In 1989, HUD was involved in highly publicized scandals that included instances in which private real estate agents were able to steal millions of dollars by retaining the proceeds from the sale of FHA-owned properties, rather than transferring the funds to the Treasury. 
In 1992, we reported that these scandals were attributed, in large part, to fundamental deficiencies in the department’s information and financial management systems. In particular, HUD’s systems were inadequate, lacked credibility and internal controls, and failed to meet program managers’ needs or provide adequate support for oversight of housing and community development programs. To address fundamental deficiencies in the department’s information and financial systems and meet the requirements of the Chief Financial Officers Act of 1990, which called for financial management reform across the federal government, the Secretary of Housing and Urban Development initiated a number of actions. These actions included the appointment of a Chief Financial Officer (CFO) to oversee the department’s financial operations and initiation of a major Financial Systems Integration (FSI) effort to strengthen its financial management systems. Although HUD proceeded with this high priority effort, it continued to be affected by poorly integrated, ineffective, and generally unreliable information systems that did not satisfy management needs or provide adequate support to control housing and community development programs. In 1994, we designated the department a high-risk area, in part because of its inadequate information and financial management systems and slow progress in correcting fundamental management weaknesses that had allowed the 1989 scandals to occur. By 1997, we reported that HUD had formulated approaches and initiated actions to address departmentwide deficiencies, including information and financial management systems problems, but many of these actions were far from being completed. In the meantime, HUD continues to rely on unintegrated and inadequate program and financial management systems, some of which are not yet Year 2000 compliant. 
Recognizing the need to better manage information technology, recent legislative reforms—the Clinger-Cohen Act of 1996, the Paperwork Reduction Act of 1995, and the Federal Acquisition Streamlining Act of 1994—provide guidance to federal agencies on how to plan, manage, and acquire information technology as part of their overall information resources management (IRM) responsibilities. These legislative reforms highlight the need to ensure that IRM programs and decisions are integrated with organizational planning, budgeting, and financial management. While HUD revised its FSI plan in 1993 and again in 1997, its primary objective—implementing an integrated financial management system to meet the department’s program and financial management needs—remained unchanged. At the same time, HUD’s implementation strategy and the cost and schedule estimates to develop and deploy FSI continue to change. HUD has not yet finalized the cost and schedule estimates for its 1997 FSI strategy and has not performed the detailed analyses needed to determine whether the strategy is cost beneficial. HUD adopted its first Financial Management Systems Strategic Integration Plan in November 1991 to address and resolve material weaknesses in its financial systems. In this plan, HUD acknowledged that inadequate and unintegrated financial management systems rendered it unable to properly manage its programs and financial resources. The plan’s primary objective was, therefore, to implement an integrated financial management system that would meet the department’s program and financial management needs. 
The 1991 plan contained specific objectives to establish sound financial management controls, correct material weaknesses, improve financial management, provide timely and accurate information to managers to enable them to meet their organizational objectives, meet the goals of section 4 of the Federal Managers’ Financial Integrity Act (FMFIA) of 1982, and comply with the Office of Management and Budget’s (OMB) Circular A-127. HUD’s strategy for achieving the FSI objectives was to replace about 100 separate financial and mixed systems with nine new fully integrated systems. This strategy was based on an analysis which concluded that HUD did not have the basic financial management systems to serve as the foundation for an integrated systems environment. The design of the 1991 FSI plan required that eight financial systems be integrated with the new core accounting system, and this design was consistent with the Joint Financial Management Improvement Program’s (JFMIP) framework for financial management systems. Once the nine new standard systems were deployed, program offices were to use them to support their business operations. In addition, the plan noted that it would be necessary to make interim improvements to existing systems, since the integration effort would be a long-term project. The interim improvements would be needed to manage programs, comply with legal mandates, and correct material weaknesses until the nine new systems became available. The department estimated that it would cost about $103 million to develop and deploy the nine systems called for in the 1991 FSI plan by September 1998. Table 1 shows the objectives, estimated development and deployment costs, and scheduled deployment dates for the nine planned integrated systems. According to HUD’s CFO, in fiscal years 1992 and 1993, HUD spent about $58 million on FSI. 
Specifically, $48 million was spent on interim improvements to legacy systems, and $10 million was spent on the Core Accounting System and Mortgage Insurance System projects. During this period, HUD (1) procured Federal Financial System (FFS), a commercial off-the-shelf software package to serve as the department’s Core Accounting System, (2) halted its efforts to develop the Mortgage Insurance System as a result of poor planning, and (3) terminated the Grants, Subsidies, and Loans project because the department could not streamline its grants process. Work on the remaining six projects was not scheduled to begin until 1994. In September 1993, HUD fundamentally changed its FSI strategy. The revised strategy was in response to (1) slow progress in implementing systems integration, (2) the need to comply with revisions to OMB Circular A-127, and (3) senior management’s serious doubt about the viability of creating nine new fully integrated systems and having program offices adapt their business operations to meet the requirements of these systems by 1998. The 1993 FSI plan included the same primary objective of implementing an integrated financial management system to meet the department’s program and financial management needs (which was delineated in the 1991 FSI plan), as well as objectives for improving HUD’s financial systems and bringing HUD into compliance with the provisions of FMFIA and the revised OMB Circular A-127. In addition, the 1993 plan included new objectives, such as eliminating the department’s financial systems from OMB’s list of high-risk areas for management improvement, being consistent with HUD’s reform plan, and meeting the requirements of the Government Performance and Results Act of 1993. The implementation strategy for the 1993 FSI plan was markedly different from that of the 1991 plan. 
Under the 1993 FSI plan, the CFO’s office was required to complete the core financial system project initiated under the 1991 strategy, and program offices were required to (1) develop new systems that would support program management priorities, financial and management information needs, and business needs and (2) integrate these systems with the core financial system. Plans for the remaining eight standard systems called for in the 1991 strategy were cancelled. As in 1991, the conceptual design of the 1993 FSI strategy was consistent with the Joint Financial Management Improvement Program’s requirement that the core financial system receive data from other financial and mixed systems. By 1996, HUD had initiated 10 systems integration projects. Despite differences in the strategies, HUD did not perform a cost-benefit analysis on the 1993 strategy. Therefore, HUD had no assurance that it had selected the most cost-beneficial solution for FSI. In 1995, the department estimated that the development and deployment cost of the 1993 strategy would be about $209 million. Also, the department extended the deployment date to December 1998, 3 months after the initial scheduled completion date of September 1998. Table 2 describes the 10 major projects under the 1993 FSI plan, estimated development and deployment costs, and initial scheduled deployment dates. From fiscal years 1994 through 1997, the department spent about $181 million to develop, deploy, and maintain various functions of the 10 major FSI projects, according to the CFO. These expenditures are in addition to the $58 million reportedly spent on FSI between fiscal years 1992 and 1993 and do not include additional FSI costs that may have been incurred by program offices. 
According to the Office of the CFO and FSI project managers, the status of the 10 systems integration projects under the 1993 FSI plan is as follows:

- The Office of Public and Indian Housing deployed the HUD Central Accounting and Program System (HUDCAPS) in fiscal year 1995 to support its tenant-based Section 8 program.
- The Office of Community Planning and Development’s Integrated Disbursement and Information System (IDIS) was deployed and was being used to monitor an estimated 950 community development grantees as of September 1998.
- The Office of Housing deployed three of the four Tenant Rental Assistance Certification System (TRACS) modules since fiscal year 1997. These modules are being used for contract processing, tenant voucher processing, and budget development and analysis. The Office of Housing also developed and deployed a computer income-matching module to verify the income of tenants receiving rental subsidy from the department.
- The departmental Grants Management System (GMS) project was terminated in 1997.
- In April 1997, the Office of Administration deployed the HUD Procurement System (HPS), which is being used to track and manage the department’s procurement activities.
- The Office of Public and Indian Housing (PIH) deployed five of six modules of the Integrated Business System (IBS) as of February 1998. This system is being used by PIH and the Office of Native Americans to monitor their programs, including information related to all housing authorities in the country.
- The Office of Fair Housing and Equal Opportunity deployed one of the three modules for its Grants Evaluation Management System (GEMS) in fiscal year 1996.
- The Office of Budget deployed the first module of its Budget Formulation System in May 1997. This system is used by the CFO to formulate, prepare, and monitor the annual budget.
- The CFO’s Office deployed a portion of HUDCAPS to support the department’s administrative accounting functions in fiscal year 1994.
- The Office of Housing’s Federal Housing Administration Mortgage Insurance System (FHAMIS) project deployed a data warehouse for multifamily data in March 1997, developed information strategy plans, and performed business process reviews to lay the foundation for future FHAMIS systems development efforts.

In 1997, HUD again revised its FSI strategy after concluding that (1) it could not fully deploy HUDCAPS—the core financial system—by September 1998 and (2) the systems integration effort had to conform to the HUD 2020 Management Reform Plan. According to the CFO, HUDCAPS could not be completed as scheduled. Specifically, the system had been deployed on schedule to support the department’s administrative and Section 8 accounting functions, but it had not been deployed to replace the three remaining general ledger systems as planned. Also, program offices had not yet developed the interfaces between the mixed systems and the core financial system. The CFO also stated that the revised FSI strategy had to conform to the HUD 2020 Management Reform Plan, which established an October 1999 deadline for fully deploying an integrated core financial system and called for repairing or replacing the department’s existing mixed systems and developing new systems to support the reforms. The primary objective of the 1997 FSI plan was consistent with the 1991 and 1993 FSI plans—to implement an integrated financial management system, consisting of both financial and mixed systems, that would provide the information necessary to carry out financial and programmatic missions of the department. However, as in 1993, HUD did not perform any cost-benefit analyses despite the additional systems and schedule changes required by the plan. As a result, HUD has no assurance that the 1997 strategy is the most cost-beneficial alternative. 
As of May 1998, the CFO had identified nine projects for the 1997 FSI strategy, including five that were started under the 1993 FSI plan (i.e., HUDCAPS, FHAMIS, GEMS, IDIS, and IBS), and four new projects required to support HUD’s 2020 Management Reform Plan. The objective of the HUDCAPS project, however, was expanded to centralize the development of interfaces between mixed systems and the core financial management system. New projects in the 1997 plan include developing interfaces between HUD’s geographic information system and data warehouses, and deploying an executive information system, Real Estate Management System, and FHA’s Financial Data Warehouse. The cost and schedule estimates to complete the 1997 FSI strategy have not been finalized, and the FSI cost estimate through September 1999 has fluctuated considerably. For example, HUD’s FSI estimate has varied from $540 million in June 1998, to $255 million on November 12, 1998, to $239 million a week later. However, we found that the $255 million and the $239 million estimates do not include at least $132 million associated with maintaining FSI systems. Until HUD finalizes its plans and cost and schedule estimates to complete the 1997 strategy, the expected FSI cost will remain uncertain. In May 1998, the Office of the CFO said that the cost and schedule estimates to complete the 1997 FSI strategy would be finalized by September 30, 1998. However, as of October 19, 1998, these estimates had not been approved. Table 3 displays the 9 projects included in the 1997 FSI strategy as of May 1998, as well as their corresponding initial development and deployment cost and schedule estimates. HUD has been working to implement the 1997 FSI strategy. The CFO stated that a large number of FSI systems or system modules have been deployed and are being used to manage and monitor the department’s programs. 
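The gap among the fluctuating estimates cited above can be shown with simple arithmetic. The following sketch is illustrative only; it uses the rounded figures reported in this section and treats the $132 million maintenance figure as the stated minimum:

```python
# Illustrative arithmetic only, using the rounded FSI estimates cited in
# this report; the $132 million maintenance figure is a stated minimum.
estimates_millions = {
    "June 1998": 540,
    "November 12, 1998": 255,
    "one week later": 239,
}
excluded_maintenance = 132  # at least this much was omitted from the later estimates

# Adding the omitted maintenance floor back to the two later estimates
# shows how much they understate the likely cost through September 1999.
adjusted = {
    date: amount + excluded_maintenance
    for date, amount in estimates_millions.items()
    if date != "June 1998"
}
print(adjusted)  # {'November 12, 1998': 387, 'one week later': 371}
```

Even with the maintenance floor restored, the two November figures remain well below the June estimate, which underscores why the expected FSI cost remains uncertain until HUD finalizes its plans.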
According to the CFO, the status of the nine 1997 systems integration projects as of October 1998 is as follows: In addition to the work completed under the 1993 FSI plan, the Office of Community Planning and Development has also deployed the Integrated Disbursement and Information System (IDIS) in nine states and the District of Columbia. In addition to the work completed under the 1993 FSI plan, the Office of Public and Indian Housing has developed and deployed the sixth module to support the 1993 Integrated Business System (IBS) requirements and implemented a new module to support the business requirements of the Office of Native American programs. The FHA Financial Data Warehouse is still being developed by the Office of Housing. Therefore, the July 1998 estimated deployment date was not met. In addition to the work completed under the 1993 FSI plan, the Office of Fair Housing and Equal Opportunity (FHEO) deployed an enhanced version of the first module for the Grants Evaluation Management System (GEMS) that supports the pre-award process and completed the development of the second module to support grantee tracking in fiscal year 1998. This system is being used by FHEO to monitor its two major grant programs. In addition to the work completed under the 1993 FSI plan and to carry out Federal Housing Administration Mortgage Insurance System (FHAMIS) goals in the information strategy plans and resulting from business process reviews, the Office of Housing deployed and is using the Single Family Premium Collection Subsystem to collect and account for premiums. The Office of Housing also deployed the Single Family Data Warehouse and a multifamily data quality system to support its FHAMIS project. The Office of Housing deployed the first phase of the Real Estate Management System (REMS) in March 1998. The Office of Housing is using REMS to collect and monitor data related to all multifamily structures in the department.
In addition to the work completed under the 1993 FSI plan, the Office of the CFO has developed and deployed a consolidated HUD-wide general ledger for fiscal year 1999 that will include summary transactions for the department, including FHA and the Government National Mortgage Association; and developed an interface to the Office of Public and Indian Housing’s Section 8 HUDCAPS. The Office of the CFO developed and deployed the first phase of the department’s Executive Information System (EIS). This system was prototyped using selected data from HUD’s program and financial systems. The Office of the CFO deployed HUD’s Community 2020 geographic information system (GIS) to provide program and management information in a geographically referenced format to users of HUD’s programs. During a November 9, 1998, meeting, the Deputy Secretary and CFO told us that HUD is in the process of assessing the conceptual design for a departmental grants management system. While the study has not yet been completed, the officials stated that if a departmental grants management system is deployed, some of the functions performed by IDIS may be replaced by the new system. Project management plans help managers monitor projects and ensure that activities are completed within specified costs and schedules. HUD’s system development methodology specifically requires that project plans be developed to document project activities and cost and schedule estimates before a project is initiated. The methodology also requires that project plans be updated if project objectives change or significant budget or schedule variances occur. Although HUD extended the HUDCAPS implementation date by 13 months from September 1998 to October 1999, the department does not yet have a final project plan that shows whether it can successfully deploy the core financial system and integrate it with mixed systems by the new target date.
The 1998 HUDCAPS project plan included tasks, costs, and schedules for fiscal year 1998 activities, but the 1999 plan does not include a schedule that shows key milestones, tasks, task dependencies, and a critical path demonstrating how and when fiscal year 1999 activities necessary to integrate HUDCAPS with the mixed systems will be completed by October 1999. For example, the HUDCAPS project plan does not show how or when the FHA Financial Data Warehouse will be interfaced to the core financial system. In addition, HUD has not yet finalized project plans that are necessary to establish new milestones for FSI projects, such as (1) GEMS, which missed its initial scheduled completion date, and (2) FHAMIS, which will not meet its initial scheduled completion date. These plans should include tasks, task dependencies, and a critical path, as well as development and deployment cost and schedule estimates for individual FSI projects. According to the Director of IRM Planning and Management, the department required that detailed project plans be developed for each FSI project by September 30, 1998. However, as of October 19, 1998, these plans had not yet been finalized. In addition, the Director of IRM Planning and Management expressed concern over the quality of the project plans that had been submitted. This is an important matter since the department has spent hundreds of millions of dollars on FSI and expects to deploy an integrated core financial management system that will rely extensively on data from the mixed systems by October 1999. Ineffective project management and oversight have contributed to numerous problems resulting in FSI cost increases and schedule delays. In 1994, we reported that HUD did not adequately oversee the planning and development of individual FSI projects. As a result, the first two FSI projects suffered delays and rising project costs.
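As noted above, a credible project plan needs tasks, task dependencies, and a critical path. The following minimal sketch shows how the critical path, the longest chain of dependent tasks, determines a completion date; the task names and durations are hypothetical, not drawn from HUD’s plans:

```python
from functools import lru_cache

# Hypothetical task network: durations in months and prerequisite tasks.
# Names and numbers are illustrative, not taken from HUD's project plans.
tasks = {
    "define interfaces":       (2, []),
    "renovate mixed system":   (4, ["define interfaces"]),
    "build HUDCAPS interface": (3, ["define interfaces"]),
    "integration test":        (2, ["renovate mixed system",
                                    "build HUDCAPS interface"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish: the task's duration plus the latest prerequisite finish."""
    duration, prereqs = tasks[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

# The project cannot finish before its longest dependency chain, the
# critical path; any slip on that chain slips the completion date.
print(max(earliest_finish(t) for t in tasks))  # 8
```

A plan that omits the dependencies, as the 1999 HUDCAPS plan did, cannot show which task slips would push the work past a target date such as October 1999.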
To resolve these problems, we recommended that HUD strengthen the management and oversight of individual FSI projects to ensure that significant problems would be brought to the attention of senior managers and corrected in a timely manner. We stated that these measures must continue throughout the integration effort. Between 1993 and 1997, HUD formed various committees to strengthen project management and the oversight of projects initiated under the 1993 FSI plan. Nevertheless, we found that ineffective project management and oversight continued to contribute to cost increases and schedule delays on individual projects. For example, the Single Family Acquired Asset Management System (SAMS) replacement system, which was developed and deployed as part of the FHAMIS 1993 FSI project, was delivered late and over budget and did not meet critical user needs because it was poorly managed. HUD estimated that the SAMS replacement system would be developed and deployed for about $3.2 million in 6 months. However, HUD awarded the contract to develop SAMS before adequately defining the system’s requirements. As a result, the cost of SAMS grew tenfold to over $32 million, the system was deployed 10 months late, and the system did not meet some critical user needs. To meet these needs, HUD was forced to spend an additional $8 million to enhance the system. In April 1997, the HUD OIG also cited inadequate project management and oversight as factors that contributed to the cost increases and schedule delays for several projects initiated under the 1993 FSI plan. The OIG recommended that the Deputy Secretary take over the direction of FSI to provide the needed management oversight and ensure that project managers receive adequate project management training. In February 1998, HUD responded to these recommendations by establishing new management teams to strengthen the oversight of FSI projects and increasing its project management training program.
However, as discussed below, HUD lacks (1) the essential disciplined processes required to effectively manage and oversee FSI projects and other information technology investments and (2) objective data to identify and resolve problems as they arise. The department is not using recognized best practices for selecting, controlling, and evaluating its investments as required by the Clinger-Cohen Act of 1996 and the Paperwork Reduction Act of 1995. The problems HUD has experienced in developing and deploying an integrated financial management system are a direct result of not managing information technology projects properly as investments. HUD’s investment selection process is not complete and has not provided decisionmakers with key information necessary to make investment decisions and monitor investments. For example, decisionmakers have not had reliable, up-to-date information on project costs, benefits, and risks to make well-informed decisions. Further, HUD lacks an adequate process for monitoring and controlling its FSI investments and does not have a process for evaluating FSI information technology investments once they have been completed. Therefore, the department cannot fully (1) determine whether its investments have achieved expected benefits, (2) identify whether major differences have occurred between actual and expected results in terms of cost, schedule, and risks, or (3) revise its investment management processes on the basis of lessons learned. As a result, the department does not know whether it is making the right investments, how to control these investments effectively, or whether these investments have provided expected mission-related benefits within estimated costs. In reviewing HUD’s investment management process, we also found that the preparation of software cost estimates—key data required to make good investment decisions—is not consistent with best practices. 
Also, HUD does not follow best practices requiring that cost-benefit analyses be updated to reflect the current status of investments. The Clinger-Cohen Act of 1996 and the Paperwork Reduction Act of 1995 require agency heads to implement an approach that maximizes the value and assesses and manages the risks of information technology investments. The acts stipulate that this approach be integrated with the agency’s budget, financial, and program management processes. An information technology investment process is an integrated approach that provides for data-driven selection, control, and evaluation of information technology investments. The investment process comprises three phases. The first phase involves selecting investments using quantitative and qualitative criteria for comparing and setting priorities for information technology projects. The second phase includes monitoring and controlling selected projects through progress reviews at key milestones to compare the expected costs, risks, and benefits of earlier phases with the actual costs incurred, risks encountered, and performance benefits realized to date. These progress reviews are essential for senior managers to decide whether to continue, accelerate, modify, or terminate a selected project. The third phase involves a post-implementation review or evaluation of fully implemented projects to compare actuals against estimates, assess performance, and identify areas where future decision-making can be improved. Overall, information from one phase is used to support activities in other phases. Reliable cost estimates are also needed to allow effective investment decision-making. OMB’s Circular A-130 requires agencies to prepare cost-benefit analyses for system development projects and update them as necessary throughout the life of the systems.
As stated in our investment guide, proposed investments should be screened to ensure that they meet minimum acceptance criteria, such as return-on-investment thresholds, linkage to an organization’s strategic objectives, and compliance with the organization’s information technology architecture. Projects that pass the screening process undergo an in-depth analysis. To help make good decisions on information technology investments, best practices require that the in-depth analysis be based on accurate, reliable, up-to-date project information. This information includes cost-benefit analyses, risk assessments, and implementation plans for both new and ongoing projects. Once the information is analyzed, projects are ranked based on their relative benefits, costs, and risks. This ranking should determine which projects should be funded and is the essence of information technology portfolio analysis. After investment decisions have been made, schedules should be established at key milestones to regularly monitor and track the cost, schedule, benefits, and risks of selected projects. In 1997, HUD implemented a new process to improve how the department screens, ranks, and selects information technology investments. First, proposed investments were screened to determine if they met explicit criteria described in the department’s strategic ranking mechanism document. Although these criteria called for information on the duration of the project, cost for fiscal year 1997 through full deployment, technical risks, and impact on HUD’s mission and customer needs, the screening criteria did not include return-on-investment thresholds or full life-cycle cost estimates as required by OMB’s guidance on evaluating information technology investments. Investment proposals were then analyzed, scored, and ranked using the same criteria and data that were used in the screening process.
However, the screening criteria did not require accurate and complete data on life-cycle costs, benefits, risks, project schedules, or the corresponding analyses that were conducted to develop these estimates. For example, after reviewing investment proposal data for six FSI projects, we found that only two included life-cycle cost estimates and none included cost-benefit analyses or risk mitigation plans. Therefore, HUD made investment decisions without the information needed for a thorough understanding of the projects to make the necessary trade-offs among them. Finally, HUD’s selection process was also insufficient since it did not establish project review schedules for selected projects as required by best practices. In fiscal year 1998, HUD did not use its selection process because the Secretary required that investment decisions be based on whether the proposed projects supported the department’s management and organizational changes called for in the HUD 2020 Management Reform Plan. As in fiscal year 1997, decisions made in 1998 were not based on reliable estimates of life-cycle costs, benefits, and return on investment. HUD is using its fiscal year 1997 selection process to make fiscal year 1999 investment decisions. In addition, HUD has deployed the Information Technology Investment Portfolio System (I-TIPS)—a generic system developed by the Department of Energy—to automate and support the management of information technology capital planning and integrate this planning with the department’s budget process. HUD is using I-TIPS to support its processes to screen, score, and select information technology proposals for fiscal year 1999 and plans to use I-TIPS when making investment decisions for fiscal year 2000.
However, because HUD has not corrected its selection process and still does not require complete, accurate, and current data to select information technology investments, there is no assurance that HUD’s 1999 investment decisions will be better than they have been in the past. Once information technology projects are selected, they should be consistently monitored and controlled through progress reviews at key milestone dates. Progress reviews should assess several aspects of the project, including deliverables, methodology, technical issues, schedule, costs, benefits, and risks. Further, once a project has been fully implemented, it should be evaluated through post-implementation reviews. The post-implementation reviews should provide (1) a project assessment, including an evaluation of customer/user satisfaction and how well the project met its estimated cost and schedule and provided mission-related benefits, and (2) lessons learned so that the investment decision-making processes can be improved. HUD does not have an adequate process to control investments or a process to evaluate investments. In 1997, the Technology Investment Board working group was established to monitor approved projects and advise the Technology Investment Board executive committee whether to continue, modify, or terminate them. However, the Director for IRM Planning and Management stated that the working group mostly monitors annual project expenditures and the rate of expenditure for any given fiscal year. This degree of oversight is not adequate because it is not based on the project-specific measures required to effectively monitor and control information technology projects. These measures include (1) an accumulation of actual cost data and comparisons to estimated cost levels, (2) a comparison of the estimated and actual schedule, (3) a comparison of expected and actual benefits realized, and (4) an assessment of risks.
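The project-specific measures listed above amount to simple variance checks of estimates against actuals. A minimal sketch of such a progress review follows; the function, its field names, and the 10 percent threshold are illustrative assumptions, not HUD’s actual review criteria:

```python
# Sketch of a milestone progress review comparing estimates with actuals.
# The 10 percent variance threshold is an illustrative assumption.
def progress_review(estimated_cost, actual_cost,
                    estimated_months, actual_months, threshold=0.10):
    """Flag cost and schedule variances that exceed the review threshold."""
    findings = {}
    cost_variance = (actual_cost - estimated_cost) / estimated_cost
    schedule_variance = (actual_months - estimated_months) / estimated_months
    if cost_variance > threshold:
        findings["cost"] = f"{cost_variance:.0%} over estimate"
    if schedule_variance > threshold:
        findings["schedule"] = f"{schedule_variance:.0%} behind schedule"
    return findings or {"status": "within thresholds"}

# Figures from the SAMS example in this report: a $3.2 million, 6-month
# estimate that became roughly $32 million, delivered 10 months late.
print(progress_review(3.2, 32.0, 6, 16))
```

Run against the SAMS figures, the review flags a 900 percent cost overrun and a 167 percent schedule slip, exactly the kind of signal that milestone reviews are meant to surface early enough for decisionmakers to act.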
The information should be regularly collected, updated, and provided to decisionmakers to support effective project monitoring. The department also lacks a method for evaluating investments and thus does not perform post-implementation reviews or use lessons learned to improve the investment process. HUD’s Director for IRM Planning and Management acknowledged these weaknesses in both the control and evaluation phases of the investment process and added that HUD plans to define these processes by the spring of 1999, before it deploys future releases of I-TIPS. Without processes to control and evaluate investments, HUD cannot (1) determine if projects should be modified, continued, accelerated, or terminated, (2) determine whether a project has met its objectives, (3) compare projected costs and schedules to actual costs incurred and implementation dates, and (4) identify ways to modify or improve its investment management process. Reliable cost estimates are essential for making effective information technology investment decisions. The reliability of cost estimates is dependent on the thoroughness and discipline of an organization’s estimating processes. Consistently producing reliable estimates requires defined institutional processes for deriving cost estimates, archiving them, and measuring actual performance against them. Based on its research of leading government and private-sector estimating practices, Carnegie Mellon University’s Software Engineering Institute (SEI) identified six requisites for developing cost estimates. According to SEI, an organization must have all six requisite processes to consistently produce reliable cost estimates.
These requisites are the following: a corporate memory (or historical database), which includes cost estimates, revisions, reasons for revisions, actuals, and relevant contextual information; structured processes for estimating software size and the amount and complexity of existing software that can be reused; cost models calibrated and tuned to reflect demonstrated accomplishments on similar past projects; audit trails that record and explain the values used as cost model inputs; processes for dealing with externally imposed cost or schedule constraints in order to ensure the integrity of the estimating process; and data collection and feedback processes that foster capturing and correctly interpreting data from work performed. The Director of HUD’s Systems Engineering Group stated that the department’s processes do not satisfy SEI’s software cost estimating criteria. As shown in table 4, HUD’s cost estimating processes for FSI projects partially meet one, but do not meet the remaining five institutional process requisites that experts say are embedded in leading information technology development and acquisition organizations. According to the Director of the Systems Engineering Group, HUD uses its experience in working with the program offices on software development efforts, rather than cost models, to develop cost estimates. The director acknowledged that HUD does not have an automated historical database to use when developing estimates for new FSI projects; instead, separate project files are kept with historical data on individual projects. The director was unsure of the usefulness of these files because they are not updated to identify and correct inconsistencies. Finally, HUD does not update or regularly review its initial cost estimates. As a result, HUD does not have adequate assurance that FSI cost estimates are consistently reliable. 
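The first SEI requisite, a corporate memory of estimates and outcomes, can be sketched as a simple record kept per project. The field names below are hypothetical, intended only to show the kind of information such a historical database captures:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical sketch of one "corporate memory" record per SEI's first
# requisite: the estimate, its revisions with reasons, and the actual cost.
@dataclass
class EstimateRecord:
    project: str
    initial_estimate: float                           # millions of dollars
    revisions: List[Tuple[float, str]] = field(default_factory=list)
    actual_cost: Optional[float] = None

    def revise(self, new_estimate: float, reason: str) -> None:
        self.revisions.append((new_estimate, reason))

    def growth(self) -> Optional[float]:
        """Ratio of actual cost to the initial estimate, once actuals exist."""
        if self.actual_cost is None:
            return None
        return self.actual_cost / self.initial_estimate

# The SAMS figures from this report: a $3.2 million estimate that grew
# roughly tenfold to over $32 million.
record = EstimateRecord("SAMS replacement", 3.2)
record.revise(32.0, "requirements defined after contract award")
record.actual_cost = 32.0
print(record.growth())
```

Archived across projects, records like these are what calibrate cost models to past performance and create the audit trail that the remaining requisites call for.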
This increases the risk of poor FSI investment decisions throughout the project’s life cycle and the likelihood of additional cost overruns. OMB Circular A-130 requires agencies to prepare cost-benefit analyses for systems development projects and update them as necessary throughout the life of the systems. HUD’s systems development methodology has required the preparation and updating of cost-benefit analyses since at least September 1992. In reviewing three cost-benefit analyses for ongoing FSI projects, we found that none had been updated as required in OMB Circular A-130. According to several FSI project managers, cost-benefit analyses are performed only once—to initiate a new information technology project. The project managers stated that the analyses are not updated, although they do prepare yearly project funding requests as part of the budget process. These requests, however, do not reflect any changes to the costs or benefits of a project. Therefore, HUD cannot compare current cost estimates and actual expenditures to determine whether unfavorable cost or benefit variances exist. As a result, HUD may continue to invest in a system without knowing whether costs or benefits have changed enough to warrant discontinuing further investment. The Office of the Inspector General found similar problems in 1996 and recommended that HUD’s Office of Information Technology establish guidance and define management responsibilities for updating the cost-benefit analysis at appropriate intervals. HUD responded to this recommendation by stating that its September 1995 Benefit/Cost Analysis Methodology, Volume I and Benefit/Cost Analysis Workbook, Volume II define and guide the development of the required components of a cost-benefit analysis and management’s responsibility for periodically updating an analysis. 
The requirement to use and document cost-benefit analyses in accordance with the methodology and workbook was included in the March 1997 revision of HUD’s system development methodology. According to HUD’s IRM Director for Planning and Management, although the system development methodology requires the use of both the cost-benefit analysis methodology and the workbook, the department has not officially mandated the use of either one. In addition, several FSI project managers told us that these standards are generally not followed. The director added that the quality, depth, and documentation supporting cost-benefit analyses for FSI projects have been inconsistent. For example, we found the IDIS cost-benefit analysis was well documented and included a discussion of the assumptions and constraints used in performing the analysis, information on recurring and nonrecurring costs, and the estimated life-cycle cost of the system. In contrast, we found that the cost-benefit analysis for the Office of Fair Housing and Equal Opportunity Grants Evaluation Management System was inadequate because it did not quantify benefits. FSI cost and schedule estimates may be impacted by HUD’s Year 2000 program, a priority effort that must be completed on time. In March 1998, we reviewed the status of HUD’s Year 2000 effort and reported that 42 of 63 mission-critical systems were not yet Year 2000 compliant. HUD has attempted to mitigate its Year 2000 risks, but three mission-critical FHAMIS systems undergoing renovations, testing, and certification are behind schedule. To better ensure that these mission-critical systems are corrected on time, HUD suspended systems integration work on these systems so that the department could focus its resources on completing Year 2000 software renovations. According to the project manager, this will cause a major impact to the schedule for completing the FHAMIS systems integration work. 
In commenting on this report, HUD stated that it successfully completed all of its Year 2000 renovations for both mission-critical and nonmission-critical systems. HUD expects to complete the Year 2000 certification and validation process by January 31, 1999. HUD has spent hundreds of millions of dollars on its efforts to develop and deploy an integrated financial management system over the past 7 years. While this effort has not yet been completed, the department has developed and deployed various modules and systems for 12 of the 14 different projects initiated under the 1993 and 1997 FSI strategies. The department, however, does not have the rigorous processes needed to accurately determine how much more it will cost or how much longer it will take to achieve the FSI objective, whether its efforts to date have achieved expected results, or whether its latest strategy is cost beneficial. HUD has not yet finalized project plans or cost and schedule estimates for completing all of the components of the latest FSI plan. Without such plans, the department is likely to continue to spend millions of dollars more, miss milestones, and still not fully meet its objective of developing and fully deploying an integrated financial management system. Cost increases and schedule delays have been caused by (1) changes to the FSI strategy that were not supported by thorough analyses and (2) inadequate project management and oversight. In addition, the Year 2000 computing crisis has impacted the schedule for the FHAMIS effort. Further, HUD’s latest actions to establish new FSI management teams and increase its project management training program do not address and cannot correct the root cause of the problems—the lack of a data-driven management process to properly oversee and control information technology investments such as FSI. 
HUD has not yet implemented a disciplined investment management process to select, control, and evaluate FSI projects in accordance with industry best practices and as required by the Clinger-Cohen Act and the Paperwork Reduction Act. In the absence of such a process, HUD decisionmakers (1) continue to make FSI investment decisions without reliable, complete, and up-to-date data on expected and actual costs, benefits, and risks, (2) cannot adequately monitor and control investments and detect and correct problems early, and (3) cannot evaluate completed projects to determine whether they have achieved expected benefits and improve the investment management process based on lessons learned. Also, HUD does not have well-defined, structured cost estimating processes that are in accordance with industry best practices for developing reliable software cost estimates. Finally, HUD does not follow best practices since it does not require that cost estimates or cost-benefit analyses be updated periodically for decision-making purposes. In order to strengthen FSI management and oversight and HUD’s information technology investment management decisions, we recommend that the Secretary of Housing and Urban Development ensure that the department takes the following actions: Prepare complete and reliable estimates of the life-cycle costs and benefits of the overall 1997 FSI strategy and individual FSI projects. In addition, HUD should (1) finalize the detailed project plan for the core financial management system (HUDCAPS) to establish the milestones, tasks, task dependencies, a critical path, and staffing requirements and to demonstrate that it is cost-effective to meet the October 1999 scheduled implementation date called for in HUD’s 2020 Management Reform Plan and (2) finalize detailed project plans for individual FSI projects (mixed systems) that establish the milestones, tasks, task dependencies, critical paths, and staffing requirements to complete the 1997 FSI strategy.
Fully implement and institutionalize a disciplined and documented process consistent with provisions of the Clinger-Cohen Act and the Paperwork Reduction Act, as well as our and OMB’s guidance for selecting, controlling, and evaluating information technology investments. This process should, at a minimum, include steps to select information technology investments based on complete, accurate, reliable, and up-to-date project-level information, including estimated life-cycle costs, expected benefits, projected schedule, and risks; conduct formal in-process reviews at key milestones in a project’s life cycle—including comparing actual and estimated project costs, benefits, schedule, and risks—and provide these results to decisionmakers, who will determine whether to continue, accelerate, modify, or terminate FSI projects; and initiate post-implementation reviews within 12 months of deployment to compare completed project cost, schedule, and benefits with original estimates and provide the results of these reviews to decisionmakers so that improvements can be made to HUD’s information technology investment and management processes. Develop and use defined processes for estimating FSI costs. At a minimum, these processes should include the following SEI requisites: a corporate memory (or historical database), which includes cost and schedule estimates, revisions, reasons for revisions, actuals, and relevant contextual information; structured processes for estimating software size and the amount and complexity of existing software that can be reused; cost models calibrated to reflect demonstrated accomplishments on similar past projects; audit trails that record and explain the values used as cost model inputs; processes for dealing with externally imposed cost or schedule constraints in order to ensure the integrity of the estimating process; and data collection and feedback processes that foster capturing and correctly interpreting data from work performed.
In commenting on a draft of this report, HUD agreed that the management and oversight of FSI could be improved by fully implementing and institutionalizing the provisions of the Clinger-Cohen Act and the Paperwork Reduction Act. In this regard, HUD agreed with our recommendations to implement defined processes for selecting, controlling, and evaluating its information technology investments and for estimating costs. The department also said it agreed that it needs to prepare complete life-cycle cost and benefit estimates for its systems strategy, but it did not specifically address our recommendation to finalize the detailed project plans for HUDCAPS and other individual FSI projects included in the 1997 strategy. HUD expressed concern that the $540 million FSI estimate through fiscal year 1999 mentioned in our draft report included non-FSI costs and that a more accurate FSI estimate would be approximately $255 million. As noted in our report, HUD has not yet finalized the plans, cost, and schedule to complete its current FSI strategy and, therefore, FSI costs continue to be uncertain. Accordingly, HUD’s estimates through September 1999 have fluctuated considerably, as reflected in various documents received from the CFO and his staff. For example, cost estimates have changed from $540 million reported by HUD in June 1998, to $255 million cited in the department’s November 12, 1998, comments to our draft report, to $239 million that HUD reported a week later. However, we found that the $255 million and the $239 million estimates do not include at least $132 million associated with maintaining FSI systems. HUD’s continuing uncertainty as to the FSI cost estimate through September 1999 further demonstrates the department’s need to develop and use well-defined cost estimating processes to prepare reliable cost estimates. HUD said our report does not properly compare like systems when making year-to-year comparisons.
The question we were asked to address was to identify the initial objectives, development, deployment and maintenance costs, and completion dates for HUD’s FSI effort and how they have changed. In order to respond to that question, we describe the systems and the estimated systems costs that were included as part of the three plans and strategies for achieving integrated financial management systems and carefully explain that HUD’s underlying strategy to implement an integrated financial management system has changed three times. In addition, to avoid any misunderstandings, we added language to clarify what the estimates for the FSI strategies and the expected FSI costs include through fiscal year 1999. Finally, HUD described its FSI accomplishments and stated that our conclusions do not summarize or emphasize the importance of actions taken to improve its mission-critical financial management systems. To address this issue, we noted the actions taken by HUD to date and added information to our discussion of various FSI systems throughout the report. We are sending copies of this report to the Vice Chair and the Ranking Minority Member of the Subcommittee on Housing and Community Opportunity, House Committee on Banking and Financial Services, and the Chairman and Ranking Minority Member of the Subcommittee on Human Resources, House Committee on Government Reform and Oversight. We are also providing copies to the Secretary of Housing and Urban Development and the Director of the Office of Management and Budget. We will make copies available to others upon request. Please contact me at (202) 512-6253 or by e-mail at willemssenj.aimd@gao.gov if you have any questions concerning this report. Major contributors to this report are listed in appendix III. 
Our objectives were to identify (1) the initial objectives, development, deployment and maintenance costs, and completion dates for HUD’s FSI effort and how they have changed, (2) the factors that have contributed to FSI cost increases and schedule delays, and (3) whether HUD is following industry best practices and has implemented provisions of the Clinger-Cohen Act of 1996 and the Paperwork Reduction Act of 1995 required to manage FSI projects as investments. We were also asked to identify whether HUD’s Year 2000 program would impact its FSI activities. To identify the objectives, development and deployment costs, and completion dates for HUD’s initial FSI effort and how they have changed, we reviewed the 1991, 1993, and 1997 FSI plans. To identify initial and revised cost and schedule estimates for major FSI projects, we reviewed initial cost-benefit analyses, project plans, budget documents provided to the OMB and FSI cost estimates for fiscal years 1998 and 1999 provided by HUD’s CFO. To identify cost and schedule estimates to complete HUD’s 1997 FSI strategy, we met with program managers for each of the major systems integration projects and representatives from the Office of the CFO. We also reviewed the fiscal years 1998 and 1999 project management plans to deploy an integrated core financial management system (HUDCAPS). To determine whether FSI costs had increased, we reviewed the (1) initial FSI development and deployment cost estimates reported in both the 1991 and 1993 FSI plans, (2) OMB budget submissions, and (3) CFO’s reports on actual systems integration expenditures between fiscal years 1992 and 1997 and development, deployment and maintenance cost estimates for fiscal years 1998 and 1999. 
To determine whether schedule delays had occurred, we identified the initial scheduled deployment dates for major FSI projects—including HUDCAPS, FHAMIS, GEMS, IBS, IDIS, and TRACS—and met with their respective project managers to determine whether those dates had been or would be met. We met with project managers for FHAMIS, GEMS, HUDCAPS, IBS, IDIS, and TRACS and officials from the Office of the CFO and reviewed audit reports to determine what factors had contributed to FSI cost increases and schedule delays. We reviewed HUD’s responses to audit recommendations to determine whether HUD had taken any actions to address management problems. Further, we discussed these actions with FSI project managers and with OIG officials to determine whether or not they had been effectively implemented. To determine whether addressing Year 2000 requirements would impact FSI cost and schedule estimates, we met with project managers for individual FSI projects. We also reviewed reports on the status of HUD’s Year 2000 effort to determine whether this effort would affect the development and deployment schedule of any FSI project. To determine whether HUD was following best practices in managing FSI projects as investments, we compared HUD’s information technology investment procedures and information resources management policies with criteria in our guidance Assessing Risks and Returns: A Guide for Evaluating Federal Agencies’ Information Technology Investment Decision-making (GAO/AIMD-10.1.13, February 1997), OMB’s guidance Evaluating Information Technology Investments: A Practical Guide (November 1995), and OMB’s Capital Programming Guide (July 1997), as well as provisions of the Clinger-Cohen Act of 1996 and the Paperwork Reduction Act of 1995. 
We determined whether HUD was following best practices for selecting investments by reviewing (1) criteria used to make information technology investment decisions during fiscal years 1997 through 1999, (2) documents for individual FSI projects that were used to make investment decisions for fiscal years 1997 and 1998, (3) HUD’s information technology investment portfolio for fiscal years 1997 and 1998, and (4) minutes from the Technology Investment Board working group, which document meetings on FSI investments. Further, we met with key officials of HUD’s Office of the CFO and Office of Information Technology to obtain additional details on the investment management process and the department’s plans to implement the Information Technology Investment Portfolio System. In addition, we compared the processes and practices HUD used to develop FSI project cost estimates with the key components of cost estimating practices published by Carnegie Mellon University’s SEI. We also reviewed cost-benefit analyses for several major FSI projects and met with FSI project managers to determine whether these analyses had been updated, as required by OMB Circular A-130. We did not independently verify the accuracy of FSI cost or schedule data provided by HUD. Also, the scope of our review was not intended to, and does not, provide a basis for concluding whether or not HUD’s FSI efforts will achieve their intended results.

The following are GAO’s comments on the Department of Housing and Urban Development’s letter dated November 12, 1998.

1. As discussed in the “Agency Comments and Our Evaluation” section, HUD agreed with most of our recommendations. HUD also stated that the department has had a structured process in place since 1990 for selecting information technology investments and monitoring major system development through the Technology Investment Board Executive Committee, which is chaired by the Secretary.
We reviewed HUD’s recent selection and control processes beginning with fiscal year 1997 and found that both processes are incomplete and inadequate for making sound investment decisions and properly managing selected investments. The major deficiencies we found with HUD’s processes were that (1) investment decisions were made without reliable, complete, up-to-date project-level information and (2) project oversight was not based on the project-specific measures required to effectively monitor and control information technology projects.

2. HUD provided us with a copy of the HUDCAPS project plan for fiscal year 1998 activities and a plan for fiscal year 1999 activities, but the second-year project plan was not presented to us as a draft. Furthermore, as we discuss in the report, the fiscal year 1999 HUDCAPS plan was not complete because it did not include a schedule showing key milestones, tasks, task dependencies, and a critical path demonstrating how HUDCAPS would be completed and interfaced with the mixed systems by October 1999.

3. We added a sentence to the conclusions that summarizes the status of HUD’s FSI effort to date, and we expanded the report’s discussion of individual FSI projects to reflect the new information provided by HUD.

4. We revised the report to indicate that HUD reported that it completed Year 2000 renovation work for all of its mission-critical and nonmission-critical systems.

5. We incorporated additional language in our report to avoid any misunderstanding between what is included in (1) estimates for the FSI plans and (2) expected FSI costs through fiscal year 1999.

6. Discussed in the “Agency Comments and Our Evaluation” section. As noted in our report, HUD has not yet finalized the plans, cost, and schedule to complete its current FSI strategy and, therefore, FSI costs continue to be uncertain. In addition, HUD’s FSI cost estimate through September 1999 has varied considerably, as reflected in various letters received from the CFO.
For example, FSI cost estimates have changed from $540 million reported by HUD in June 1998, to $255 million reported on November 12, 1998, to $239 million reported a week later. However, the $255 million and $239 million estimates do not include at least $132 million in maintenance costs. HUD’s continuing uncertainty regarding the FSI cost estimate through September 1999 further demonstrates the department’s need to develop and use well-defined cost estimating processes for preparing reliable FSI cost estimates. Finally, as we note in appendix I, we did not independently verify the accuracy of FSI cost data provided by HUD. HUD’s statement that the $103 million for the 1991 FSI strategy includes development costs only is inconsistent with its 1991 FSI plan, which states that the $103 million included both development and deployment costs.

David G. Gill, Assistant Director
Yvette R. Banks, ADP Telecommunications Analyst-in-Charge
Madhav S. Panwar, Senior Technical Advisor
Teresa L. Jones, Issue Area Assistant
|
Pursuant to a congressional request, GAO identified: (1) the initial objectives, development, deployment and maintenance costs, and completion dates for the Department of Housing and Urban Development's (HUD) Financial Systems Integration (FSI) effort and how they have changed; (2) the factors that have contributed to FSI cost increases and schedule delays; (3) whether HUD is following industry best practices and has implemented provisions of the Clinger-Cohen Act of 1996 and the Paperwork Reduction Act of 1995 required to manage FSI projects as investments; and (4) whether HUD's Year 2000 program will impact its FSI activities. GAO noted that: (1) while HUD's primary FSI objective of implementing an integrated financial management system has remained the same, the underlying strategy for achieving this objective and completion dates have changed significantly; (2) in 1991, HUD approved a plan to replace about 100 financial and mixed systems with nine standard integrated systems, estimating that it would cost about $103 million to develop and deploy the systems by September 1998; (3) in 1993, HUD abandoned its plan to develop nine new systems and significantly revised its FSI strategy; (4) HUD estimated that it would cost about $209 million to develop and deploy the new system by December 1998; (5) in 1997, HUD revised its FSI strategy again, extending the date for fully deploying the core financial management system to October 1999 and incorporating the development and deployment of additional new systems required to meet the department's latest management reforms and organizational changes; (6) the department did not adequately assess the costs or benefits of the 1997 FSI strategy; (7) as a result, HUD has no assurance that it has selected the most cost-beneficial solution to accomplish its FSI objectives; (8) until HUD finalizes its plans and cost and schedule estimates to complete the 1997 strategy, the expected FSI cost will remain uncertain; (9) nine systems 
included in the 1997 FSI strategy are in various stages of development and deployment; (10) revisions to the systems integration strategy and management and oversight problems associated with individual projects are factors that have contributed to FSI cost increases and schedule delays to date; (11) HUD has not yet fully implemented a complete, disciplined information technology investment management process, which includes selecting, controlling, and evaluating FSI projects and conforms with best practices and related requirements in the Clinger-Cohen Act and the Paperwork Reduction Act; (12) in addition, HUD has not implemented: (a) an adequate process to control information technology products once they have been selected for implementation; or (b) a process to evaluate information technology projects and determine whether they have achieved expected benefits; and (13) HUD's Year 2000 program, a top priority effort that must be completed on time, may further impact the FSI effort.
|
AMEC provides a forum for Norway, Russia, the United States, and the United Kingdom to collaborate in addressing military-related environmental concerns in the Arctic region. The AMEC Declaration and “Terms of Reference” established the framework and organization for sharing information and technology and implementing projects. The Declaration focuses AMEC activities on radioactive and chemical contamination issues resulting from past military activities in the Arctic region and stresses cooperation between the military organizations. AMEC’s “Terms of Reference” establishes the organizational structure and possible ways of financing the AMEC program. It identifies representatives (principals) from each member country’s respective department or ministry of defense. These representatives approve their countries’ participation in AMEC activities and are responsible for obtaining resources from their respective governments to ensure that AMEC objectives are achieved. An AMEC steering group recommends specific projects to the representatives from each country, prioritizes approved work, provides project management, and determines which member country will take the lead on each project. DOD’s Deputy Undersecretary of Defense for Installations and Environment provides policy oversight for U.S. participation in AMEC. Within the United States, the Department of the Navy, which was named as the executive agent in 1998, manages the AMEC national program office. All contracting functions are managed by the Naval Facilities Engineering Command. Although DOD is the lead U.S. agency for AMEC, the Departments of Energy and State and EPA provide technical and policy support. In a 1999 program plan to the Congress, DOD stated that AMEC projects would support the goals of the CTR program. However, our analysis of these projects shows that only one of the eight projects established to support CTR objectives of dismantling Russia’s ballistic missile nuclear submarines did so. 
The remaining seven projects were either completed too late, terminated or suspended, or implemented at shipyards or sites not directly associated with CTR’s dismantlement program. Despite their limited impact on the CTR program, most of these projects can be used to support dismantlement of Russia’s general purpose nuclear submarines, according to DOD officials. Furthermore, U.S. and foreign representatives asserted that AMEC has achieved other important benefits and that continued U.S. participation in the program is critical because the United States provides significant technical support. Only one of eight AMEC projects established to support and complement CTR’s program for the dismantlement of Russia’s ballistic missile nuclear submarines has directly benefited the program. According to a program plan that DOD submitted to the Congress in 1999, AMEC was being conducted in close cooperation with the CTR program so that the two programs would benefit each other. The program plan stated that AMEC projects supported CTR submarine dismantlement activities. Some of the projects were expected to provide design and engineering support, while other projects were designed to fill gaps in the CTR program. According to CTR officials, however, only one AMEC project, the development of a prototype 40-metric ton container used to store and transport spent nuclear fuel from dismantled Russian ballistic missile nuclear submarines, was able to meet CTR program objectives. U.S. expenditures for this project totaled about $2.9 million, and the Navy chose EPA’s Office of International Programs to manage the project. The containers helped solve an immediate problem—finding adequate storage capacity for the spent nuclear fuel removed from the submarines. CTR and EPA officials told us that the storage containers solved a “bottleneck,” enabling CTR to remove more spent fuel and facilitate dismantlement efforts. 
According to DOD and EPA, when serially produced, the AMEC container costs 80 percent less than a Russian-manufactured storage container. CTR has purchased 25 containers and plans to purchase an additional 35 to transport and store the spent fuel from dismantled ballistic missile nuclear submarines in Russia. Russia is also using the containers to store and transport spent nuclear fuel from general purpose nuclear submarines. Figure 3 shows an AMEC-designed storage container. Regarding the other seven AMEC projects that were established to support or complement the CTR program, we found the following: A project, also managed by EPA, to develop a storage pad to hold the storage containers was completed too late to support CTR’s dismantlement efforts associated with a Russian shipyard that had been used as a CTR dismantlement site. According to AMEC and EPA officials, the storage pad’s completion was delayed due to problems identifying and obtaining all required Russian clearances and licenses to operate the storage pad; in the intervening time, Russia decided it would no longer dismantle ballistic missile submarines at the shipyard. As a result, the storage pad is not used to support the CTR program but will be used for temporary storage of spent nuclear fuel from Russia’s general purpose nuclear submarines. U.S. expenditures for this project totaled $2.9 million. One project, involving development of technology to prevent corrosion inside the spent nuclear fuel storage containers, was terminated before completion because the CTR program withdrew its support and did not provide liability protection. In April 2002, CTR directed AMEC to develop and manufacture a spent nuclear fuel storage container dehydration system. The dehydration system was needed to extract water from the storage containers to inhibit corrosion and increase the containers’ service life. However, in December 2003, the CTR program terminated AMEC’s participation in the project and selected a U.S.
contractor, instead of working through AMEC, to design a larger dehydration system. U.S. expenditures for this project totaled $396,000. Two projects involving solid radioactive waste treatment and solid radioactive waste storage were implemented at a site where CTR is not dismantling ballistic missile nuclear submarines. These projects were designed to assist the Russian navy in managing the large volume of waste generated by the dismantlement of nuclear submarines. The waste treatment project identified, among other things, technologies that could reduce the volume of solid waste from decommissioned nuclear submarines and make it easier and more economical to store the material. The second project supported the development and production of 400 steel containers for the Russian navy to transport and store solid radioactive waste. Prior to the project, no Russian-designed and manufactured container had ever been certified to transport solid radioactive waste. According to the AMEC project manager, the projects introduced Russian representatives to Western business practices, including improved contract management techniques. U.S. expenditures for these projects, which have been completed and consolidated at a mobile solid waste treatment facility built at a Russian shipyard, totaled about $12 million, including the cost of the facility. AMEC’s project to develop a demonstration radiation detection system to protect the health and safety of workers who dismantle submarines does not directly benefit the CTR program. The demonstration system is installed at the interim storage pad site, which is not being used to support the CTR program. U.S. AMEC and CTR officials were uncertain whether the radiation detection system would be deployed at any of the CTR dismantlement sites in Russia. CTR officials said that while they support projects that protect workers’ health and safety, they would not have funded this project and are uncertain how it promotes CTR dismantlement goals. U.S.
expenditures for this project totaled $1.7 million. A related project that supplied about 125 DOE surplus dosimeters (radiation detection devices) to the Russian navy was described as a failure by the AMEC project manager. He told us that the navy would not use these dosimeters due to, among other things, technical concerns and had put the equipment in storage for a couple of years. We brought this matter to the attention of a U.S. AMEC official who subsequently contacted the Russian AMEC representative and was informed that the dosimeters would be distributed. In July 2004, Russia’s representative to AMEC notified DOD that the dosimeters were now being used. Finally, an AMEC project to develop a mobile liquid waste processing facility that could be used in remote locations in Russia was suspended because CTR did not support it. A CTR official told us that CTR never endorsed the project because adequate capacity for liquid radioactive waste treatment already existed at the facilities where submarines were being dismantled. As a result, CTR would not extend liability protection for the project. EPA, which was chosen by the Department of the Navy to manage the project, still has about $700,000 in unspent project funds that were transferred from the Navy beginning in 1999. EPA officials told us that the funds must be reprogrammed by December 31, 2004, unless the Navy provides an extension, or they will be returned to the U.S. Treasury. U.S. AMEC officials told us that ultimately several of the projects that were established to meet CTR objectives did not do so because of changing requirements and plans. However, they asserted that the projects were planned with the full cooperation and approval of the CTR program and the appropriate Russian government agencies. CTR officials told us they have no further need for AMEC assistance in carrying out their plans to continue dismantling Russian ballistic missile nuclear submarines until 2013. 
These officials asserted, however, that AMEC plays a useful role in helping address environmental issues and technology development and that this role should be continued. Although only one AMEC project that was established to support CTR did so, these officials believed that most of these projects can be used to support dismantlement of Russia’s general purpose submarines. The storage pad, for example, can hold spent nuclear fuel from all types of Russian nuclear submarines and will facilitate the shipment of the fuel to the centralized storage facility at Mayak. Similarly, the steel containers for solid waste are already being used to store radioactive waste from dismantled general purpose submarines, according to U.S. and Russian officials. A DOE official told us that Russia also plans to use the steel containers to store waste from older ballistic missile submarines that are not scheduled to be dismantled with CTR assistance. Figure 4 shows the storage pad, and figure 5 depicts the solid waste steel containers funded by AMEC. Despite AMEC’s limited impact on the CTR program, U.S. and foreign officials told us that AMEC has achieved other benefits as well and that continued U.S. participation in the program is critical. DOD and Department of State officials said that one of AMEC’s most important benefits is promoting U.S. foreign policy objectives, particularly with Norway, a long-standing NATO ally, and with other nations in the Arctic region. The U.S. Ambassador to Norway told us that while AMEC is a very modest program in terms of expenditures, Norway views it as (1) a critically important part of the U.S.-Norwegian bilateral relationship, and (2) an effective multilateral effort to address one of its primary policy concerns—environmental protection in the Barents Sea region. The participation of the United States and the United Kingdom gives Norway political clout and technical expertise that Norway would not have working on a bilateral basis with Russia.
Norwegian officials from the ministry of defense and ministry of foreign affairs reinforced these views. The U.S. Ambassador to Russia also gave us his views about AMEC. In a May 24, 2004, letter to GAO, he noted that AMEC’s accomplishments include the construction of the solid waste treatment and storage facility where there are a large number of Russian nuclear submarines awaiting dismantlement. Furthermore, he recommended that the United States continue to participate in AMEC and consider expanding the program to Russia’s Pacific fleet. U.S. and foreign officials also asserted that another important aspect of AMEC is that it facilitates military-to-military cooperation with Russia. Officials noted that AMEC has enabled military personnel from the United States, Norway, and the United Kingdom to visit Russian naval facilities that they had previously been unable to visit. According to these officials, access to the facilities enables AMEC to better understand the environmental conditions and technologies required to assist with dismantlement efforts. Russia’s AMEC representative told us that AMEC is a useful way to improve communications among the member countries’ military organizations. He also noted, however, that Russia would find other ways to promote cooperation on environmental security issues if AMEC did not exist. DOE officials told us that AMEC has produced tangible benefits in its efforts to plan an emergency exercise in the Murmansk region in late 2004. The exercise, which will be conducted as an AMEC project, entails staging an accident involving spent nuclear fuel from a Russian nuclear submarine. Participants in the exercise will include representatives from the Russian navy and emergency responders from various Russian organizations, including the Federal Agency for Atomic Energy, Ministry of Defense, and the Institute for Nuclear Safety.
In addition, nuclear emergency management personnel from neighboring countries as well as the International Atomic Energy Agency are expected to participate. According to DOE officials, this exercise will be the first time that DOE can simulate an accident involving spent nuclear fuel from a Russian submarine. From 1996 to April 2004, AMEC member countries contributed about $56 million to the program. The United States has been the largest contributor, providing about $31 million or about 56 percent of the total, with Russia, Norway, and the United Kingdom contributing the remainder. Within the U.S. government, although DOD has provided over 90 percent of all funds, DOE and EPA have also contributed. U.S. contributions have declined from 1999 to 2004 as U.S.- funded projects have been completed and as other member countries increased their contributions. According to DOD officials, U.S. contributions to AMEC are planned to be about $3 million per year from fiscal year 2006 to fiscal year 2011. From 1996 until April 2004, AMEC member countries contributed about $56 million to the program. Figure 6 provides a breakout of AMEC members’ contributions. As figure 6 shows, the United States has contributed the greatest amount of any AMEC member country—about 56 percent of the total. According to available data, Russia contributed about $13 million; Norway contributed about $12 million; and the United Kingdom provided about $100,000 because it only recently joined AMEC. Norway’s contributions were initially limited because it did not have an agreement with Russia that provided liability protection for the Norwegian government or its contractors who would be providing assistance through AMEC. In May 1998, Norway signed an agreement with Russia that included liability protection, and since then Norway has contributed funds to several projects, including the development of a radiation detection system and steel storage containers for solid radioactive waste. 
Norway plans to contribute an additional $8 million to AMEC over the next few years, and Norwegian officials told us that they are committed to an equitable sharing of costs with the other AMEC member countries. Russia’s contributions to AMEC were used to support, among other things, development of the storage container for spent nuclear fuel, the interim storage pad, and the solid waste treatment and storage technologies. A U.S. AMEC official told us that he reviewed Russia’s itemized list of project costs and was satisfied that the costs were a fair representation of Russia’s financial contributions. However, Russia’s future contributions are uncertain. A Russian representative to AMEC told us that Russia will continue to contribute financially to projects but noted that there are limited resources available. Other member countries told us that Russia would probably make mostly “in kind” contributions to the program, including labor and materials for specific projects. The United Kingdom, which joined AMEC in June 2003, has contributed about $100,000 for preliminary planning related to projects focusing on buoyancy and the safe towing of nuclear submarines. The United Kingdom has pledged an initial contribution of $9 million to AMEC in order to fund a preliminary group of projects. DOD has provided the majority of U.S. funding to AMEC—about $28 million, or 91 percent of the total U.S. contribution. DOE and EPA have provided the remaining funds, about $2.6 million and $200,000, respectively. Figure 7 depicts the breakdown of U.S. funds for AMEC by each agency. U.S. funds have been used to support a variety of AMEC activities. About $24 million of the U.S. contributions to AMEC were used to fund projects, such as the storage container for spent nuclear fuel from ballistic missile submarines and the storage pad. The remainder funded program management (about $5.4 million), studies (about $1.0 million), and meetings (about $0.5 million). 
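The rounded dollar figures above can be cross-checked with a short calculation. This is a sketch using the report's rounded numbers, so totals and shares match the cited figures only approximately (for example, the report's "about 56 percent" U.S. share comes from unrounded data).

```python
# Cross-check of the rounded AMEC contribution figures cited above,
# in millions of dollars. Because each figure is rounded, totals and
# shares agree only approximately with the numbers cited in the report.

member_contributions = {"United States": 31.0, "Russia": 13.0,
                        "Norway": 12.0, "United Kingdom": 0.1}
total = sum(member_contributions.values())                 # ~ $56 million
us_share = member_contributions["United States"] / total   # report cites ~56 percent

# U.S. funds by agency: DOD provided the large majority.
us_by_agency = {"DOD": 28.0, "DOE": 2.6, "EPA": 0.2}
dod_share = us_by_agency["DOD"] / sum(us_by_agency.values())  # ~91 percent

# U.S. funds by use; the pieces sum to roughly the $31 million U.S. total.
us_by_use = {"projects": 24.0, "program management": 5.4,
             "studies": 1.0, "meetings": 0.5}

print(f"Total contributions: ~${total:.0f}M")
print(f"U.S. share of total: {us_share:.0%}; "
      f"DOD share of U.S. funds: {dod_share:.0%}")
print(f"U.S. funds by use sum to: ~${sum(us_by_use.values()):.1f}M")
```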
Figure 8 provides a breakdown of these amounts. The overall U.S. contribution to AMEC decreased from fiscal year 1999 to fiscal year 2004, as U.S.-funded projects have been completed and as other AMEC member countries have increased their assistance. During the period when U.S. contributions started to decline, Norway and Russia increased their contributions. As figure 9 shows, U.S. funding peaked at almost $6 million in fiscal year 1998, when large-scale projects such as the spent nuclear fuel storage container and storage pad were moving into implementation. Since fiscal year 2001, U.S. contributions have steadily declined, and in fiscal year 2004, DOD allocated $2.5 million to AMEC. AMEC program officials stated that in the future, member countries expect to share equally in AMEC project costs. U.S. AMEC officials stated that U.S. assistance to AMEC will be $3 million annually from fiscal year 2006 to fiscal year 2011, the latest date for which projections have been made. This projection was based on prior years’ contributions as well as matching other members’ planned contributions. AMEC’s draft strategic plan, which is currently being reviewed by AMEC partners, envisions helping to secure Russian submarines, submarine bases, shipyards, and spent nuclear fuel and represents a significant expansion and redirection of AMEC’s objectives. AMEC’s proposal to improve submarine base security may be contrary to U.S. policy. In addition, according to DOE officials, spent fuel from Russian submarines is a low priority as a nuclear proliferation threat compared to other radioactive sources, such as abandoned electrical generators containing large amounts of strontium-90. Regardless of AMEC’s plans, U.S. participation in AMEC faces an uncertain future because the United States lacks liability protection to participate in AMEC projects in Russia.
In May 2004, AMEC developed a draft strategic plan to guide its future efforts through 2015 that represents a significant expansion and redirection of its program. According to the draft plan, recent world events demonstrate the need to focus on emerging issues related to safety and security, with an emphasis on nuclear nonproliferation, nuclear threat reduction, and environmental sustainability. The draft plan states that spent nuclear fuel and other radioactive wastes generated during dismantlement of Russia’s nuclear submarines are unprotected, presenting a significant proliferation risk. As a result, AMEC proposes giving priority to projects that will help secure spent nuclear fuel and other material that presents a radiological hazard and proposes addressing security problems at Russian shipyards, naval bases, support vessels, and other facilities associated with the dismantlement process. AMEC’s draft plan calls for focusing on the following program areas: (1) nuclear security issues in support of the Group of Eight (G-8) Global Partnership, (2) management of hazardous waste generated as a result of military activities, and (3) environmental sustainability, safety, and security of military activities and installations. According to AMEC officials, AMEC’s future direction will be closely aligned with the priorities established by the G-8 Global Partnership plan to combat the spread of weapons and materials of mass destruction. In 2002, the G-8 announced this new initiative. The United States and the other G-8 members—Canada, France, Germany, Italy, Japan, Russia, and the United Kingdom plus the European Union—pledged $20 billion over the next 10 years to fund nonproliferation activities in the former Soviet Union. One of the key areas identified by the G-8 is nuclear submarine dismantlement. All of the G-8 countries, according to the Department of State, are contributing to the dismantlement of Russia’s decommissioned general purpose nuclear submarines.
Other non-G-8 Global Partnership countries are also participating in this effort. AMEC program partners—the United Kingdom and Norway—have declared that they intend to use the AMEC program as one means of fulfilling their G-8 Global Partnership obligations. According to AMEC officials, future project development should include ways to reduce the security risks posed by all types of Russian nuclear submarines. With the G-8 priorities in mind, AMEC’s nuclear security working group, which helped develop the draft strategic plan, proposed several areas of possible engagement, including (1) evaluating state-of-the-art technology to enhance security at Russian naval bases and shipyards, (2) improving the security of ships known as “service vessels” that are used to store spent nuclear fuel from dismantled nuclear submarines, (3) consolidating radiological sources to improve their security, and (4) coordinating and increasing the security of fueled submarines in transit. Regarding the security of Russian naval bases, the working group proposed evaluating, among other things, whether radar systems designed to detect low-profile targets, sonar systems designed to detect subsurface threats, and systems designed to detect small quantities of nuclear materials would improve security. AMEC technical staff would then develop recommendations and present them to AMEC’s representatives for consideration as follow-on projects. To improve the security of service vessels, the working group proposed incorporating protective measures, including radiation detectors, motion detectors, and closed circuit televisions. The working group also suggested reviewing a Russian study that focuses on consolidating radiological sources at several facilities. Based on this review, AMEC may suggest additional technical areas to be included in the study to improve its usefulness as a way to improve security.
Finally, the working group proposed training personnel and developing procedures to produce a vulnerability assessment for, among other things, bases, shipyards, and radioactive waste storage facilities. To date, AMEC’s draft plan to address security issues associated with Russia’s nuclear submarines and support facilities has not been coordinated with DOD’s CTR policy office, DOD’s Office of Nonproliferation Policy, or DOE’s National Nuclear Security Administration—the organization primarily responsible for securing nuclear materials in Russia. U.S. AMEC officials told us that coordinating AMEC’s draft plan with other U.S. government agencies at an earlier stage would have been useful because of the program’s planned expansion to include nuclear security. The draft plan was developed by an AMEC technical guidance group and is now being reviewed by AMEC representatives from the United Kingdom, Norway, and Russia. According to DOD, the next step will be to meet with AMEC partners in September 2004 to finalize their comments and to review project proposals. U.S. AMEC plans to submit the final draft of the strategic plan to the U.S. interagency coordination process later in 2004. Once the interagency coordination is completed, the plan will go to the representatives of the AMEC partners for final approval. A DOD Nonproliferation Policy official told us that he had not seen AMEC’s draft strategic plan. According to a CTR policy official, many of the proposed areas of engagement identified by the nuclear security working group were unnecessary because they would apply to protecting fuel within nuclear submarines, which is less vulnerable to theft or diversion. In addition, he noted that one proposed engagement—the review of security measures for Russian naval bases and shipyards—could be contrary to U.S.
interagency guidelines established in 2003 that preclude the delivery of security-related assistance to most operational military sites in Russia that have nuclear weapons, including certain navy facilities. For example, the U.S. policy precludes assistance for improving security at operational sites where submarines loaded with nuclear weapons are docked. DOE officials from the National Nuclear Security Administration, who are primarily responsible for securing nuclear material in Russia, expressed concerns about AMEC’s proposed expansion to include nuclear security. These officials, who included the Director of the Office of Global Radiological Threat Reduction, told us that securing spent nuclear fuel from dismantled Russian nuclear submarines is a low priority, based on available information. DOE takes a risk-based approach to threat reduction by considering the quantity, form, and transportability of high-risk radiological materials, as well as the security threats surrounding them. Based on these criteria, DOE has concluded that spent fuel from Russian submarines does not present a sufficiently high risk to warrant the commitment of resources. Rather, DOE places a higher priority on the highest-risk radiological materials, such as sealed radiological sources found in radioisotope thermoelectric generators, which contain strontium-90; blood irradiators; sterilization facilities; and large radiological storage locations. As a result, DOE officials stated that DOE does not wish to participate in securing spent nuclear fuel. DOE is funding a study that will prepare site-specific analyses of spent nuclear fuel inventories and terrorism vulnerability assessments for Russian nuclear submarine dismantlement sites. This study is expected to be complete in September 2004. The Director of the Office of Global Radiological Threat Reduction told us that DOE would use the information from the study to further evaluate the risks posed by spent nuclear fuel.
He asserted, however, that securing spent nuclear fuel from nuclear submarines is primarily an environmental issue—not a proliferation concern. Furthermore, he stated that AMEC’s proposed nuclear security plan, if implemented, could have significant policy implications for all U.S. nonproliferation programs. For example, countries, including Russia, could request DOE assistance for securing spent nuclear fuel from dismantled nuclear submarines. If DOE agreed to provide this assistance, its resource requirements could dramatically increase because of the amount of spent nuclear fuel in the submarines and at coastal storage facilities. Regardless of AMEC’s future plans, U.S. participation in AMEC faces an uncertain future because the United States does not have liability protection for AMEC projects in Russia other than those that were undertaken in support of CTR. From 1996 to 2002, U.S. AMEC officials worked with the other AMEC member countries to obtain liability protection through a separate agreement. According to DOD officials, this effort was suspended because the State Department is negotiating liability protection for a broad range of U.S. programs with Russia. These negotiations have not been concluded, and therefore U.S. AMEC, which does not have liability protection, has limited participation in new projects. In the interim, U.S. AMEC officials have explored other options to acquire liability protection. For example, U.S. AMEC has continued to request approval from CTR to extend liability protection for the mobile liquid waste treatment facility project. However, CTR has rejected the request because the project does not support CTR objectives. In addition, according to CTR officials, the program does not require any additional AMEC assistance and it will not extend liability protection for future AMEC projects. Meanwhile, U.S.
AMEC officials were able to acquire limited liability protection to participate in two new projects led by the United Kingdom: (1) the safe towing of decommissioned nuclear submarines and (2) improving the buoyancy of decommissioned nuclear submarines. U.S. AMEC officials have received State Department approval to provide limited assistance to these projects using the United Kingdom’s bilateral agreement with Russia as the basis for liability protection. U.S. AMEC plans to transfer funds to a United Kingdom contractor to perform a feasibility study associated with the two projects. According to U.S. AMEC officials, the United Kingdom has offered to sign all future contracts with Russia that will “hold the United States harmless of any liability.” An agreement to implement this proposed solution to the liability problem had not been completed at the time of our review. In response to Russia’s request for assistance to address environmental problems resulting from military activities in the Pacific, DOD plans to expand its technology demonstration efforts to that region by developing a program similar to AMEC. However, DOD has adequately analyzed neither the condition of Russia’s radioactive waste problems resulting from, among other things, decommissioned and dismantled nuclear submarines in the Pacific nor their impact on the environment. Furthermore, DOD has not identified specific projects that would be needed beyond those already being done for the Arctic region. Finally, Japan, which is assisting Russia in dismantling submarines in the Pacific, has no current plans to join DOD in a technology development program. In November 1998, Russia requested DOD’s assistance to establish an organization similar to AMEC in Russia’s Pacific region to address environmental problems.
Russia proposed 17 technical cooperation projects to develop and manufacture such things as a mobile ecological laboratory, a marine unit for ocean oil spill cleanup, and a transportable unit for radioactive waste water treatment. DOD began exploring ways to establish a cooperative program with Russia that had the potential to expand into regional cooperation with Japan and possibly other countries in the region. According to DOD officials, Congress needed to authorize expansion of the program into the Pacific region before projects could be implemented. Within DOD’s fiscal year 2003 defense authorization bill, DOD sought to obtain congressional approval to amend AMEC’s enabling legislation to expand the program to the Pacific region. However, no congressional action was taken on the proposal. DOD proposed new legislation within the fiscal year 2004 defense authorization bill to develop a separate cooperative program in the Pacific region, but no congressional action was taken on that initiative either. Although DOD has asserted that the expansion of cooperative efforts is necessary because of serious environmental contamination in the Pacific region, its proposal is not based on an adequate analysis of the region’s environmental conditions. Furthermore, DOD has not developed a comprehensive plan that identifies priorities, resource requirements, or timeframes for accomplishing the proposed expansion. Some U.S. environmental experts have noted that a master plan is needed in the Far East to prioritize tasks. Such a master plan is currently being developed to assist G-8 submarine dismantlement efforts in the Arctic region. This master plan, which is funded by the European Bank for Reconstruction and Development, is expected to help donor countries improve coordination and reduce the likelihood of duplication of assistance efforts. 
DOD and State Department officials told us that while the problems in the Pacific are generally known, they have not been thoroughly documented and analyzed compared to conditions in the Arctic, which has been the focus of international assistance. However, they said that available information indicates that conditions in the Pacific pose environmental risks. For example, there are environmental problems associated with Russia’s decommissioned and dismantled nuclear submarines, and there are inadequate and unprotected storage facilities for spent nuclear fuel and radioactive waste. A 1994 report prepared by Greenpeace documented the radioactive waste situation in the Russian Pacific Fleet, including waste disposal problems, submarine decommissioning and safety, and the security of naval fuel. There have also been more recent attempts to document environmental risks posed by Russia’s nuclear submarines in the Pacific region. For example, in 2003, a study by the International Institute for Applied Systems Analysis, which was funded by AMEC, found that a release of radioactivity from an accident aboard a Russian nuclear submarine in the Russian Pacific region could, under certain conditions, reach the United States in 3 to 5 days. DOD has taken steps to develop more comprehensive data on environmental conditions in the Pacific region. It awarded a contract to a Russian organization to study the status, characteristics, radiation potential, and risks of submarine dismantlement in the Pacific. The study will include a discussion of sources of radioactive contamination and nonradioactive contamination, problems associated with monitoring and environmental remediation, and sources of hazard and risk. In addition, it will focus on (1) developing a methodology for prioritizing tasks based on safety needs, threats, and risks; (2) developing a risk-based high-priority list of urgent tasks; and (3) proposing a structure and design for a strategic plan for future actions. 
Once the study is completed, DOD plans to develop a plan for the proposed Pacific initiative. In the interim, DOD has created a list of projects that were developed under AMEC for the Arctic region that may be applicable to the Pacific. These projects include (1) ensuring the buoyancy of decommissioned nuclear submarines, (2) providing handling for spent nuclear fuel, and (3) developing processing technologies for solid radioactive waste. According to DOD, additional projects would have to be developed in consultation with Russia and would need to take into account different climatic conditions in the Pacific. For example, the Pacific region encompasses areas with humid summers that could affect the type of equipment used. In addition, projects would also have to make allowances for the poorly developed infrastructure found in Russia’s Far East. These factors could increase the complexity and costs associated with the projects. According to DOD officials, DOD envisions partnering with Japan to develop a master plan that will specify projects based on assessments of the environmental conditions in the Pacific region. In addition, DOD has invited Japan to participate in the ongoing DOD-funded assessment of the environmental risks posed by decommissioned nuclear submarines in the Pacific. Officials from Japan’s Embassy to the United States and Japan’s Ministry of Foreign Affairs told us that Russia’s decommissioned nuclear submarines in the Pacific pose environmental and security concerns. These officials were particularly concerned that radioactive contamination from nuclear submarines could damage Japan’s fishing industry. However, according to an official from Japan’s Ministry of Foreign Affairs, Japan has no current plans to join DOD in a technology development program in the Pacific region. 
The official told us that although Japan is interested in AMEC- sponsored technologies—and how they might be applied to submarine dismantlement in the Pacific—Japan prefers to work under the auspices of the G-8 Global Partnership. Japan has committed more than $200 million to the Global Partnership. Within the committed amount, Japan plans to allocate about $100 million for projects related to dismantlement of Russia’s nuclear submarines and other environmental projects in Russia. In December 2003, Japan began assisting the Russian dismantlement of a general purpose nuclear submarine, and the project is expected to be completed later this year. The project is expected to cost about $7.4 million, including upgrades to the military facility where dismantlement is taking place. Japan may fund the dismantlement of 26 additional Russian nuclear submarines over the next several years. AMEC representatives from the United Kingdom and Norway told us that their countries are not interested in funding a technology development program in the Pacific region. However, they asserted that a regional approach, similar to AMEC, might be useful to assist with submarine dismantlement efforts in that region. With the completion of projects related to the CTR program, U.S. participation in AMEC is at a crossroads. AMEC is heading in a new direction that represents a significant expansion from its original environmental charter. AMEC officials have not adequately justified the expansion of the program to secure spent nuclear fuel and other material and to address security problems at Russian shipyards, naval bases, support vessels, and other facilities associated with the dismantlement process. Regardless of AMEC’s plans, however, the U.S. role will be limited until the liability issue with Russia is resolved. 
The proposed expansion of AMEC’s goals to include improving security around naval bases where Russia is decommissioning and dismantling nuclear submarines is a low-priority objective and may be inconsistent with U.S. security policy. DOE, which is responsible for securing nuclear materials in Russia, does not believe that spent nuclear fuel and other associated radioactive materials from Russia’s nuclear submarines pose a high-priority threat and therefore has told us it would not fund any initiatives in this area. Furthermore, improving security around Russian submarine bases may be inconsistent with U.S. policy, which generally precludes providing security upgrades around operational Russian naval facilities. In addition, DOD’s interest in expanding its technology development activities to Russia’s Pacific fleet of nuclear submarines is not based on an analysis that demonstrates the need to do so, although efforts are underway to study the environmental risks. Previously developed technologies for Russia’s Arctic fleet could potentially be applied to dismantling Russia’s nuclear submarines in the Pacific, and there is no assessment concluding that additional projects are needed. Furthermore, Japan, which is most concerned about contamination from aging or damaged nuclear submarines in the Pacific, has begun dismantling Russian submarines in the Pacific under the auspices of the G-8 program and has not requested DOD’s assistance in technology development. If further analysis in the Pacific shows that environmental conditions warrant assistance, DOD officials stated that congressional approval for this initiative will be required. Finally, we believe that better oversight is needed to ensure that project funds are spent on a timely basis.
The approximately $700,000 in unspent funds transferred from the Department of the Navy to EPA almost 5 years ago for the mobile liquid waste project raises concerns about the adequacy of financial and management controls being exercised over the program. To help ensure that the United States’ continued participation in AMEC supports—and is consistent with—overall U.S. assistance efforts in Russia, we recommend that the Secretary of Defense, in consultation with the Secretaries of Energy and State, take the following actions: (1) determine whether AMEC’s role should be expanded to include activities such as improving security around Russian nuclear submarine bases and (2) ensure that AMEC’s efforts are well defined, closely coordinated, and complementary with other U.S. nuclear nonproliferation programs managed by the Departments of Defense and Energy. Regarding DOD’s proposed Pacific initiative, we recommend that the Secretary of Defense (1) assess whether technology development activities should be expanded to include submarine dismantlement in that region and, if determined necessary, request congressional approval for this expansion to the Pacific region; and (2) determine what form U.S. participation in such a technology development program would take, such as a bilateral effort or a multilateral organization similar to AMEC. Furthermore, we recommend that the Administrator, Environmental Protection Agency, determine, in consultation with the Secretary of the Navy, if the funds that were designated for AMEC-related activities are still needed for the purpose intended. If not, we recommend that the Administrator and the Secretary determine whether to reprogram the funds for other AMEC-related activities or to propose rescinding the funds. We provided the Departments of Defense and Energy and EPA with draft copies of this report for their review and comment. DOE had no comments and EPA provided technical comments, which we incorporated as appropriate.
DOD provided written comments, which are presented as appendix III. DOD concurred with all of our report’s recommendations. However, in commenting on our draft report, DOD raised several concerns and observations, including: (1) AMEC’s primary role is not to support the Cooperative Threat Reduction program (CTR) but to minimize the ecological security risks associated with military activities in the Arctic; (2) DOD’s program plan submitted to the Congress in 1999 did not state that AMEC projects would support the goals of the CTR program; (3) our report did not adequately capture AMEC’s impact on and relationship to other U.S./multinational programs such as the G-8 Global Partnership initiative; (4) AMEC’s draft plan is a work in progress and is currently undergoing coordination with partner countries; and (5) our report does not capture the trend that shows increased partner country funding. Our response to DOD’s comments on the report is as follows. In our view, our draft report properly characterized AMEC’s role and gave the program credit for achieving technology benefits and promoting U.S. foreign policy objectives. As we stated in the draft report, AMEC was established to help reduce the environmental impacts of Russia’s military activities in the Arctic region. However, we also noted that U.S. participation in AMEC was hindered by the absence of liability protection. Given this lack of liability protection, the United States has, for the most part, tied its participation in AMEC projects to DOD’s CTR liability protocol. We noted, however, in the draft report that a number of AMEC projects are not linked to the CTR program. It is unclear to us why DOD asserted in its comments that its 1999 program plan does not state that AMEC was expected to support CTR projects. 
In fact, DOD’s program plan clearly states on page 7 that “All AMEC activities currently underway in Russia are in support of CTR Ballistic Missile Submarine Dismantlement projects and thus are governed by CTR Implementing Agreement of August 26, 1993, between DOD and the Ministry of Economics of the Russian Federation, addressing strategic offensive arms elimination.” In addition, we disagree with DOD’s assertion that we did not adequately portray AMEC’s relationship to other U.S./multinational programs, including the G-8 Global Partnership initiative. Our draft report recognized that AMEC’s future direction would be closely aligned with priorities established by the G-8 Global Partnership. We also noted that AMEC program partners have declared their intention to use AMEC as one way to fulfill their G-8 Global Partnership obligations. Furthermore, we recognized in the draft report that AMEC’s strategic plan is a draft document and is being coordinated with partner countries. Regarding member countries’ contributions to AMEC, our report addresses this matter as well. We stated in our draft report that overall U.S. funding decreased from fiscal year 1999 to fiscal year 2004 as U.S.-funded projects have been completed and as other AMEC member countries have increased their assistance. However, in response to DOD’s comment, we added this information to the highlights page of the report. DOD concurred with our recommendation that the Secretary of Defense, in consultation with the Secretaries of Energy and State, determine whether AMEC’s role should be expanded to include activities such as improving security around Russian nuclear submarine bases. However, DOD stated that AMEC’s planned expansion will not include submarine base security but will focus on activities such as the G-8 Global Partnership initiative and ecological security. 
DOD stated that improving security around Russian nuclear submarine bases was part of a draft strategic plan that is currently being coordinated with member countries and it is inappropriate to portray any elements of the draft plan as anything other than a plan in progress. We are encouraged that DOD now states that it will not engage in activities to improve the security at Russian nuclear submarine bases—activities that could be contrary to U.S. policy. However, we believe it is important to note that AMEC was considering improving submarine base security as part of its draft strategic plan. In our view, if AMEC provided assistance to improve the security of Russia’s submarine bases, it would have represented a significant departure from the program’s original environmental security objectives. DOD also provided technical comments, which we have incorporated into the report as appropriate. Below, we summarize several of these technical comments and provide our response. DOD incorrectly asserted in its technical comments that our draft report did not address two aspects of section 324 of the National Defense Authorization Act for Fiscal Year 2004 that required us to review AMEC: (1) the extent to which the AMEC program supports the G-8 Global Partnership Against the Spread of Weapons and Materials of Mass Destruction Initiative and (2) the current and proposed technology development and demonstration role of AMEC in U.S. nonproliferation efforts. As we previously noted, our draft report provides information on the relationship between AMEC and the G-8 Global Partnership, noting that the future direction of AMEC will be tailored to support G-8 Global Partnership goals. The draft report also identified the various technology demonstration projects that have been proposed and implemented, including recent projects focusing on the safe towing and improved buoyancy of decommissioned nuclear submarines. 
These projects are expected to support G-8 nonproliferation goals as well as U.S. security interests. DOD also asserted that we had mischaracterized AMEC’s contribution to CTR as “limited” because we did not factor into our analysis the financial benefits resulting from the prototype 40-metric-ton spent nuclear fuel storage container developed by AMEC. DOD claims that the cost savings from these containers have essentially paid for the AMEC program. As stated in the draft report, the AMEC containers cost less to produce than the container Russia developed to store the spent nuclear fuel, and we have revised the report to more accurately indicate the amount of savings per container as noted in DOD’s comments. However, we believe that DOD has not understood the larger point of our analysis. While we recognize in the report that the storage container project has proven beneficial, the other seven projects that were established to support CTR objectives have had limited impact on the CTR program. In our view, one project, regardless of its benefit, does not compensate for the shortfalls of the other projects in supporting CTR program objectives. DOD stated that the report does not capture the draft nature of the AMEC strategic plan and does not properly explain the coordination process among partner countries. We disagree with this assertion. We properly identified the plan as a draft document throughout the report. Furthermore, the draft report contained information about the coordination process that DOD officials provided to us on July 14, 2004. However, we have incorporated additional information in the report about coordination timeframes to reflect DOD’s comments. In its technical comments, DOD also stated that U.S. participation in AMEC faces an uncertain future due to changing program direction, and not because it lacks liability protection. We disagree with this assertion. U.S. AMEC officials told us that U.S.
participation in new AMEC projects was hampered due to the lack of liability protection. These officials never indicated during the course of our review that changing program requirements were impacting the program. In fact, they stated in a positive vein that future U.S. participation in AMEC would be tied to the G-8 Global Partnership initiative, which was aligned with U.S. national security interests. We are sending copies of this report to the Secretary of Defense; the Secretary of Energy; the Administrator, National Nuclear Security Administration; the Administrator, Environmental Protection Agency; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, I can be reached at 202-512-3841 or aloisee@gao.gov. Key contributors to this report were Julie Chamberlain, Nancy Crothers, Robin Eddington, Glen Levis, and Jim Shafer. The following table lists AMEC projects under way, completed, newly started, or terminated. To assess the extent to which AMEC supports and complements the CTR program, we obtained and analyzed AMEC project files, reviewed pertinent supporting documentation, including project justifications, and discussed each project with program and project managers from the Departments of Defense and Energy, the Environmental Protection Agency, and Brookhaven National Laboratory. Department of State officials also provided their views about the projects. Of particular importance was an AMEC program plan that DOD submitted to the Congress in response to the National Defense Authorization Act for Fiscal Year 1999. In the plan, DOD provided information on AMEC projects’ relationship to the CTR program. We used this plan as the basis for determining how AMEC projects supported the CTR program. 
During our review, we also interviewed DOD’s Deputy Undersecretary of Defense for Installations and Environment, who is responsible for establishing U.S. policy for AMEC, to obtain his views on the impact of AMEC projects and the program’s overall benefits. In April 2004, we attended a meeting of the AMEC principals in Svalbard, Norway, to obtain additional information about the AMEC program, including project implementation. During the meeting, we interviewed the principals and their staff from the United Kingdom, Norway, and Russia. These principals included the Commander of U.S. Navy Installations, the Head of Environmental Safety of the Russian Armed Forces, the Deputy Director General of Norway’s Security Policy Department, and a representative from the United Kingdom’s Royal Navy responsible for environmental issues. We also interviewed U.S. embassy officials in Oslo, Norway, including the U.S. Ambassador. The U.S. Ambassador to Russia provided his perspectives about AMEC in a letter to us dated May 24, 2004. We also interviewed officials from Norway’s federal audit agency (Riksrevisjonen) and the Bellona Foundation, a Norwegian nongovernmental organization that focuses on environmental issues in the Arctic. To identify AMEC financial contributions, including those from the United States, we obtained data from the AMEC program office in DOD, which is responsible for tracking all financial activities related to U.S. participation in AMEC. In addition, the AMEC program office, at our request, obtained financial data from Norway and Russia. The United Kingdom’s data were provided to us by the AMEC Steering Group Co-Chairman. We obtained responses to a series of questions focused on data reliability covering issues such as data entry access, internal control procedures, and the accuracy and completeness of the data from a United Kingdom AMEC official. 
Although we did not interview AMEC officials from Russia and Norway, we discussed in detail the Russian and Norwegian financial data with U.S. AMEC officials. Based on the United Kingdom responses and these discussions with U.S. AMEC officials, we concluded that the data were sufficiently reliable for the purposes of this report. With regard to the U.S. contributions to AMEC, we reviewed the data and posed a number of questions to the AMEC program office to determine the reliability of the financial data. Specifically, we (1) met with AMEC program officials to discuss these data in detail; (2) obtained from key officials responses to a series of questions focused on data reliability covering issues such as data entry access, internal control procedures, and the accuracy and completeness of the data; and (3) added follow-up questions whenever necessary. Based on this work, we determined that the data were sufficiently reliable for the purposes of this report. To assess AMEC’s future program objectives, we examined documents prepared by AMEC and interviewed officials responsible for developing the draft strategic plan. Specifically, in May 2004, we attended a meeting of AMEC’s Technical Guidance Group in Gettysburg, Pennsylvania, where the plan was formulated. While at the meeting, we discussed AMEC’s future plans with (1) the United Kingdom’s AMEC Steering Group Co-Chairman (representing the Royal Navy), (2) representatives from Norway’s Ministry of Defense and Norway’s Defense Research Establishment, (3) a representative from Russia’s Armed Forces Environmental Safety organization, and (4) the AMEC Steering Group Co-Chairman from DOD. In addition, we used the draft strategic plan to analyze AMEC’s long-term goals and objectives, including its proposal to include nuclear security as a new program objective. 
We also discussed AMEC’s nuclear security focus with officials from the Office of the Secretary of Defense for Cooperative Threat Reduction Policy, DOD’s Office of Nonproliferation, and DOE’s National Nuclear Security Administration. At DOE, we interviewed the Principal Assistant Deputy Administrator, Office of Defense Nuclear Nonproliferation; Director, Office of Global Threat Reduction; and the Acting Assistant Deputy Administrator, Office of International Material Protection and Cooperation. We also discussed these matters with a Brookhaven National Laboratory official who is leading a DOE-sponsored study on the risks associated with spent nuclear fuel from dismantled Russian nuclear submarines. We obtained and analyzed pertinent program files maintained by DOD to evaluate DOD’s plan to expand its technology development activities to the Pacific region. We also obtained available studies and reports prepared by Greenpeace International and the International Institute for Applied Systems Analysis that identified the conditions and risks posed by radioactive contamination. We supplemented this information with interviews with knowledgeable officials from Vanderbilt University and the Department of State. The official from Vanderbilt University is responsible for managing an AMEC-funded project on radioactive contamination in the Far East. We also interviewed an official from Japan’s Ministry of Foreign Affairs to obtain information about Japan’s views of the environmental problems associated with radioactive waste generated by Russia’s nuclear submarines. We conducted our review from January through August 2004 in accordance with generally accepted government auditing standards. The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
Norway, Russia, the United Kingdom, and the United States participate in the Arctic Military Environmental Cooperation (AMEC) program, a multilateral effort that seeks to reduce the environmental impacts of Russia's military activities through technology development projects. AMEC has primarily focused on Russia's aging fleet of nuclear submarines. Section 324 of the National Defense Authorization Act for Fiscal Year 2004 required GAO to review AMEC, including its relationship to the Department of Defense's (DOD) Cooperative Threat Reduction (CTR) program. In accordance with the act, GAO (1) assessed the extent to which AMEC supports and complements the CTR program, (2) identified AMEC member countries' financial contributions to the program, (3) assessed AMEC's future program objectives, and (4) evaluated DOD's proposal to expand its technology development activities to Russia's Pacific region. In a 1999 program plan to the Congress, DOD stated that AMEC projects would support the goals of the CTR program. However, we found that only one of eight AMEC projects designed to support CTR's objective of dismantling Russia's ballistic missile nuclear submarines has done so. This project involved development of a prototype 40-metric ton container to store and transport spent (used) nuclear fuel from Russia's dismantled submarines. Despite AMEC's limited contribution to CTR, DOD officials, including CTR representatives, said that most of the projects can be used to support dismantlement of other types of Russian nuclear submarines. In addition, U.S. and foreign officials cited other benefits of U.S. participation in AMEC, including promoting U.S. foreign policy objectives, particularly with Norway, and facilitating military-to-military cooperation with Russia. From 1996, when the program was established, to April 2004, AMEC member countries had contributed about $56 million to the program. 
The United States has been the largest contributor, providing about $31 million, or about 56 percent of the total. However, the overall U.S. contribution has decreased from fiscal year 1999 to fiscal year 2004 as U.S.-funded projects have been completed and as other AMEC member countries have increased their assistance. In May 2004, AMEC developed a draft strategic plan to guide its future efforts. The plan, which is currently being reviewed by AMEC partners, proposes improving the security of Russia's nuclear submarine bases and securing spent nuclear fuel from dismantled submarines. However, securing bases could be contrary to U.S. policy, which precludes assistance to most operational Russian military sites that contain nuclear weapons, including certain naval facilities. DOD wants to expand its dismantlement technology development efforts to Russia's Pacific region, but has not adequately analyzed the condition of Russia's decommissioned nuclear submarines in the Pacific and their impact on the environment. Furthermore, DOD has not identified specific projects that would be needed beyond those already undertaken in the Arctic region.
Enormous growth in government and private biomedical research funding and in financial relationships between government-funded investigators and private industry has increased the potential for financial conflicts of interest that could compromise research integrity and the safety of participants. HHS has regulations on individual investigator financial interests in federally funded or regulated research. The academic and professional communities also have developed policies and guidelines on conflicts of interest and have recently devoted resources to study this issue in more depth. The budget of NIH, the principal federal agency that funds biomedical research, grew from a little over $3 billion in fiscal year 1980 to more than $20 billion in fiscal year 2001. Most NIH grants and contracts are awarded through universities and medical centers to investigators conducting research at these institutions. Private industry funding grew even more rapidly—funding by drug companies alone rose from $1.5 billion in 1980 to $22.4 billion in 2000. Industry sponsors of biomedical research either conduct the research themselves or provide the funding to university investigators, other research institutions, contract research organizations, or private medical practices. Collaborations between government-funded research investigators and private industry also have increased, in part because of the Bayh-Dole Act. The act gave universities, nonprofit corporations, and small businesses the ability to retain patents on their federally funded inventions in order to facilitate the commercialization of new technologies. University-generated patents rose from about 250 per year before 1980 to more than 4,800 in 1998. As the boundary between academia and industry has become less distinct, concerns have been raised about the potential for financial conflicts of interest in investigators’ as well as institutions’ relationships with private industry. 
Investigators’ financial relationships with outside interests can include working, contracting, or consulting for a company; holding a management position or board membership or having other fiduciary relationships; or owning stock or other securities. A conflict of interest occurs when these relationships compromise, or appear to compromise, an investigator’s professional judgment and independence in the design, conduct, or publication of his or her research. For example, financial conflicts of interest may affect the recruitment of human research subjects such that inappropriate participants are enrolled. These conflicts also may influence the informed consent process—by which the risks and benefits of a study are communicated to the participants—resulting in participants who are not fully informed about a study’s potential harm to them. Furthermore, an investigator’s financial stake in a product may bias the development and reporting of research results or make the investigator reluctant to share information with other investigators in order to maintain his or her competitive edge. Financial conflicts of interest could bias the publication of research findings. For example, a corporate sponsor of research with a vested financial interest in the study outcome may try to ensure that only findings favorable to the sponsor’s product are published. Institutional financial conflicts of interest may arise because of an institution’s desire to participate in technology transfer activities and its need to remain financially sound. While companies may invest in universities by supporting positions such as endowed chairs or facilities such as research laboratories, universities also may invest financial resources in companies that sponsor research at the institution. 
Such investments would include owning stock in a pharmaceutical company or investing in a small start-up company formed by entrepreneurial faculty who have invented products and want to market them commercially. Start-up companies are generally not publicly traded. An investor’s financial stake in a start-up may result in future financial gain. Sometimes, however, an institution’s economic goals may conflict with its goals of fostering objective, unbiased research. Financial interests may color its review, approval, or monitoring of research conducted under its auspices or its allocation of equipment, facilities, and staff for research. For example, in a case that came to light in the late 1980s, the president of one large university provided venture capital equal to one-fifth of the university’s endowment (funds that support the university) to invest in a biotechnology start-up company that used technologies the university developed, with the university consequently holding more than 70 percent of the company’s equity. The company also had university officials on its board of directors and conducted research through the university. Because of these ties, university decisions about research were inappropriately commingled with financial decisions about the start-up company. Within HHS, responsibility for the oversight of federally funded or regulated biomedical research rests primarily with three entities: NIH, FDA, and OHRP. NIH is charged with ensuring that the research it funds complies with applicable HHS regulations, including a PHS regulation on individual investigators’ financial interests. 
This regulation, promulgated in 1995, requires PHS-funded organizations or institutions (which include all NIH-funded organizations) to maintain and enforce written policies on financial conflicts of interest; inform their investigators of these policies; and require investigators to disclose any “significant financial interests” in entities whose financial interests may be affected by the research. While the PHS regulation uses the phrase “conflict of interest” without defining it, the regulation defines a “significant financial interest” as including income of an investigator or investigator’s spouse or dependent child expected to exceed $10,000 over 12 months, or equity interests exceeding $10,000 or 5 percent ownership of a company. It is left to institutional officials to determine which significant financial interests constitute conflicts of interest. Institutions must report a financial conflict of interest to the PHS awarding component and explain whether the conflict has been “managed, reduced, or eliminated.” The PHS regulation does not define these terms but provides several examples of strategies to be used. In practice, the management of a financial conflict of interest includes strategies to monitor any effects as well as to reduce or eliminate the financial interest. FDA is responsible for ensuring that the financial interests and arrangements of clinical investigators do not interfere with the reliability of data submitted to FDA in support of marketing applications for drugs, biological products, or medical devices. Under FDA’s financial interest regulation, effective in 1999, sponsors submitting marketing applications must certify that investigators did not have certain financial interests and arrangements, or must disclose them. FDA uses this information in conjunction with information submitted on the design and purpose of the study, and information obtained through on-site inspections, to assess data reliability. 
In contrast to PHS, FDA’s thresholds for financial interests requiring disclosure include payments made by the sponsor of a study to the investigator or his or her institution exceeding $25,000 (beyond the costs incurred in conducting the study) or any equity interest an investigator has in a publicly held company sponsoring the research that exceeds $50,000. OHRP oversees all research conducted or funded by HHS that involves human research subjects and enforces the HHS regulations regarding the protection of human subjects. HHS’ human subjects protection regulations do not address directly the disclosure and management of investigators’ financial conflicts of interest. However, the regulations do require a university’s IRB, which reviews research proposals involving human research subjects, to weigh a study’s risks and benefits to participants, and review the study’s participant consent form, as part of its review of the research. Because financial conflicts of interest may affect the risk-benefit analysis, the purpose of the IRB review implies consideration of them. While the actual IRB review of a research proposal may not explicitly consider financial conflicts of interest, IRBs have the right to request and review information about investigators’ financial interests that might pose risks to subjects, and they may require an investigator to disclose significant financial interests to the research subjects in the consent form. The human subjects protection regulations also state that an IRB member may not participate in the initial or continuing review of any project in which he or she has a conflicting interest, except to provide information requested by the IRB. Unless biomedical research is federally funded or involves research or products that need federal approval, it is not necessarily subject to the HHS regulations and oversight pertaining to financial interests and human subjects protection. 
A significant and growing body of privately funded biomedical research falls outside the purview of HHS regulations and oversight. The academic community and professional associations have demonstrated concern about financial conflicts of interest in biomedical research for a number of years and have taken steps to address this issue. In 1990, the Association of American Medical Colleges (AAMC) issued a document that in part defined institutional and individual responsibilities for dealing with conflicts of interest in research and provided guidance to institutions in developing policies and procedures to meet their unique situations and local requirements. In 1993, the Association of American Universities (AAU) developed a framework for managing investigators’ financial conflicts of interest. Also in 1993, the Association of Academic Health Centers convened a task force to study institutional financial conflicts of interest and their management. Although this task force produced a report, it did not develop specific guidelines on institutional financial conflicts of interest. More activity has occurred recently, partly because of concerns about reports that financial conflicts of interest were associated with harm to research participants. In April 2000, the American Society of Gene Therapy adopted a policy strongly encouraging that its members have no equity, stock options, or comparable arrangements in companies sponsoring a clinical trial. Also in 2000, AAU formed the Task Force on Research Accountability, which issued a report in June on improving the management of human subjects protection systems. In October 2001, the Task Force issued a report on the management of individual and institutional financial conflicts of interest, with specific guidelines and recommendations. 
In 2001, AAMC convened a task force of clinical investigators; patient representatives; medical school, teaching hospital, and university leaders; and representatives from industry, the legal community, and the media to study the issue of conflicts of interest, update AAMC’s 1990 guidelines, and develop new principles for addressing institutional financial conflicts of interest. Editors of the major medical journals also have expressed concern about the competitive economic environment in which some clinical research is conceived, study subjects are recruited, and data are analyzed and reported. In response to these concerns, the International Committee of Medical Journal Editors has revised and strengthened the section on publication ethics in Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication, which is a reference widely used by individual journals as a basis for their editorial policies. As part of the document’s revised reporting requirements, authors will need to disclose details of their own and the sponsor’s role in a study. Some journals also may require the primary authors to sign statements that they accept full responsibility for the conduct of the trial, had access to the data, and controlled the decision to publish. The five universities we visited developed written financial conflict-of-interest policies for individual investigators that, for the most part, extended to all publicly and privately funded research but varied in their content and in how they were implemented. For example, the universities differed in the kinds of financial relationships—such as paid consulting and holding equity in a company—they considered to be manageable conflicts of interest. In addition, some universities used formal monitoring committees to manage conflicts, while one university allowed investigators to develop self-management plans. 
The universities generally allowed investigators to self-certify compliance with financial conflict-of-interest policies. Administrative data used to oversee investigators’ research activities and financial relationships at all five universities were kept in various offices and in different databases. The universities generally acknowledged a need for better coordination, and several of the universities told us they were taking steps to develop these linkages. Officials at some of the universities told us that they would like to have access to information from HHS and other institutions that could help them improve their practices. The written financial conflict-of-interest policies at four of the universities we visited extended beyond the requirements of the PHS financial interest regulation to apply to all research conducted at the universities, whether it was funded publicly or privately. The fifth university’s written policy covered all publicly and privately funded research except research sponsored by certain foundations and other nonprofit organizations. Concern about actual, potential, or even perceived conflicts of interest has led many other research institutions to develop financial conflict-of-interest policies that are broader than what the federal regulation covers. A recently published survey of the top 100 NIH-funded research institutions reported that more than 70 percent of the 89 respondents had written policies that were more extensive than the federal regulation. Four universities we visited had policies that predated the PHS regulation, and they revised these policies following the regulation’s implementation in 1995. The fifth university developed its policy the year that PHS published its regulation. In part because of the recent focus on conflict-of-interest issues, four of the five universities were in the process of reviewing and revising their policies and procedures. 
These four universities had formed task forces or working groups to assess their policies and procedures and adapt them to the changing research environment. The PHS regulation is flexible, allowing institutions to implement it in ways that meet their individual circumstances. The five universities had differences in threshold amounts, timetables for disclosure, and processes for disclosure. Although they all used similar strategies to manage financial conflicts of interest, they differed in how they employed them. The extent of IRB involvement in the review of financial conflicts of interest also varied, ranging from reviewing investigators’ financial disclosure documents to obtaining verbal information from investigators and relying on informal exchanges between its members and the conflict-of-interest committee. All five universities, however, generally relied on investigators to monitor their own compliance with the schools’ financial conflict-of-interest rules. In addition to being shaped by the federal requirements, institutions’ policies and procedures also may reflect state laws, court cases, the institution’s experiences with financial conflicts, its organizational structure, and its technology transfer activity. For example, state ethics laws influenced the policies at two universities we visited, and a court case also influenced one of these universities’ policies. Four of the universities had written policies with categories and classifications of financial conflicts of interest. However, the fifth university’s written policy did not have fixed rules about potential financial conflicts of interest but instead listed 13 specific examples of activities that represented actual, possible, or no conflict of interest. Policies at the five universities required research investigators to disclose to the institution any significant financial interests. Three universities set the threshold for disclosure at the same level as the PHS requirement. 
Another set the threshold for publicly sponsored research at the PHS level, while, for privately sponsored research, it set a separate threshold of $250 in income or holdings. The remaining university set the overall threshold for disclosure at the PHS level but had a more stringent disclosure policy for investigators involved in clinical trials. To help protect the interests of human research subjects, this university required an investigator doing clinical research who has any financial interest in the study to disclose it to the institution. Officials at one of the other universities told us they also are considering whether to lower their threshold from the PHS level for disclosure of financial interests in clinical research. At four of the five universities, the overall proportion of clinical researchers who disclosed a significant financial relationship averaged 5 percent. At one university, these data were not readily available. The five universities differed in their timelines and processes for disclosure of significant financial interests. Three of the universities required an annual disclosure by research investigators, and two required disclosure when a research proposal was submitted. All required updates whenever there was a change in the investigator’s financial interests. Their disclosure forms also varied, ranging from simply asking whether a significant financial interest exists and what type of interest it is to asking detailed questions about the nature and amount of the financial interest. Several disclosure forms required supporting information to be provided as an attachment or to be submitted later. All of the universities took steps to preserve the confidentiality of personal information, with some taking stronger measures than others. 
For example, while all five limited review of disclosure forms to university officials or a designated committee, one university redacted the names of investigators in the disclosure forms before giving them to the conflict-of-interest committee for review. All five of the universities in our study had conflict-of-interest committees that were responsible for the development and implementation of financial conflict-of-interest policies and procedures. The configuration of these committees and the extent of their involvement in the review of disclosures varied. All five universities had universitywide committees that handled the review of financial conflicts of interest. Three of these universities had additional medical school conflict-of-interest committees. At two of the five universities, either the chairperson of, or staff to, the committee reviewed all disclosure forms and determined whether the financial interest was a conflict, which would then need to be managed, reduced, or eliminated; they referred complex cases to the full committee for discussion and action. At another university, each department chairperson reviewed the department’s investigator disclosures and forwarded disclosures of activities that may be allowable or are presumptively not allowable to the committee for further review. At the other two universities, the committee members reviewed each financial disclosure. We found some variation among the five universities in how their conflict-of-interest committees evaluated significant financial relationships. The committees made these determinations in response to the PHS regulation, which requires universities to decide whether a disclosed relationship constitutes a financial conflict of interest that needs to be managed, reduced, or eliminated. 
For example, one school’s policy stated that an investigator conducting clinical research on a product he or she developed that was licensed to an external organization in which the investigator had equity or other direct relations might be permitted to continue with the research after disclosure, with appropriate safeguards in place. But another university’s policy stated that such a relationship would present serious problems and that it would consider the relationship inappropriate unless it could be managed very closely. In addition, while one university typically allowed investigators who received grant funding to hold equity or receive consulting fees from a company for which they were conducting clinical research, another university strongly discouraged or limited this practice. IRB involvement in the review of financial conflicts of interest also varied at the five universities we visited. University officials told us that IRB members, following federal regulations, recused themselves from reviewing research protocols when they had a conflicting interest. At some of the universities, the IRBs were more aware of investigators’ financial interests than at others. The IRB members at one university reviewed faculty financial disclosure forms in detail as part of their review of the research protocol, checked to make sure that all investigators associated with the grant had filed disclosure forms, and, when appropriate, required disclosure to human research subjects. At three other universities, the conflict-of-interest committee was supposed to send the IRB a memo or report that summarized the financial conflict and recommended a management strategy. At two of these three universities, the IRB could overrule the management strategy the conflict-of-interest committee recommended. At the third university, the IRB did not have the authority to overrule a management strategy. 
The IRB at the remaining university had no formal communication with the conflict-of-interest committee; instead, IRB members obtained verbal information about financial interests from investigators. Officials at this university told us they also relied on the overlapping membership between the conflict-of-interest committee and the IRB to surface any issues regarding investigators’ financial conflicts of interest. The universities we visited did one or more of the following to manage financial conflicts of interest: (1) required disclosure, (2) monitored the research, and (3) required divestiture of the financial interest. The application of these strategies differed, however. Some universities had fairly formal guidelines about when each strategy should be used, while others applied the strategies on a case-by-case basis. For example, officials at one university told us that the strategy used was sometimes determined through negotiation and cooperation between the investigator and the conflict-of-interest committee. Disclosure of the financial interest can take different forms, depending on the institution. One of the five universities we visited required all investigators who reported financial interests to the institution to disclose them in publications. The four remaining universities did not have an across-the-board policy to require investigators to disclose financial interests in publications, and some of the four decided on a case-by-case basis. At two universities, if human research subjects were involved, investigators had to disclose the interests to their study subjects. One of these two universities required investigators to use specific language in the consent form that described the investigator’s financial relationship with the study sponsor. The other three universities decided on a case-by-case basis whether investigators would be required to disclose financial interests in the consent form. 
Monitoring the research can involve establishing a formal monitoring committee consisting of several faculty members who meet with the investigator periodically to make sure that the significant financial relationship is being handled appropriately and is not harming the integrity of the research. For example, at one university, a subcommittee of the medical school conflict-of-interest committee develops a “monitoring plan” for each case, outlining the composition, appointment, and responsibilities of the monitoring committee. The plans are contingent upon approval from the universitywide committee, and the subcommittee ensures that the plan is carried out. Conversely, monitoring can be informal and involve, for instance, an investigator designing a personal “self-management” monitoring plan that satisfies the university’s requirement for managing the financial conflict of interest. For example, at one university, an investigator with a significant financial interest in a company designed a self-management monitoring plan that included limiting the time spent with the company, keeping track of the time spent with this company, and not allowing the company to be involved with the research laboratory. Divestiture of the financial interest is also an option, but several universities told us that this strategy is infrequently imposed on investigators and not often chosen by them. Of 111 investigators at four of the universities we visited who had significant financial relationships with industry in 2000, only 3 voluntarily divested their interests; none were told to divest by their universities. Some investigators with significant financial interests may decide not to be involved in conducting the study, but if they are the only ones with a key skill or knowledge for a particular study, they may still want to play a role. 
For instance, an investigator in a privately funded study at one university we visited was willing to relinquish her rights as the head investigator on the project involving a new surgical procedure but insisted that she be present in the operating room during the surgery because of her expertise and understanding of the procedure. Subsequently, the informed consent form was altered to reveal the investigator’s financial interests; other investigator-initiated safeguards, such as disclosure in publications of the investigator’s financial interest, were put into place; and the investigator was permitted to be present during the surgery. Each of the five universities’ written conflict-of-interest policies stated that an investigator’s failure to comply with the policy, such as not disclosing a significant financial interest or not following the required management strategy, is cause for disciplinary action, ranging from fines to termination of employment. University officials told us that they rarely determined that sanctions were warranted. None of the five universities had formal processes for verifying that individuals fully disclosed their financial interests. Instead, some universities used informal methods for identifying apparent inconsistencies, such as comparing disclosure forms with those of prior years. They said they relied on investigators to comply voluntarily with conflict-of-interest policies because they believed it was important to have faculty support and maintain collegiality with the investigators. Furthermore, some of the universities emphasized informing faculty about their financial conflict-of-interest policies, a requirement established in the PHS regulation. To this end, for example, two of the universities had incorporated financial conflict-of-interest education modules into their investigator training. 
The data for overseeing various aspects of investigators’ research activities and financial interests were kept in multiple offices, files, and formats within each of the five universities, making it a challenge to ensure that conflicts of interest were appropriately managed and not overlooked. As part of our study, we asked the five universities to provide some basic data on investigators’ financial conflicts of interest in clinical research involving human research subjects. All five of the universities had difficulty providing the information requested, and one was not able to provide any of the data. University officials told us it was difficult to respond to our request because information on who received funding to conduct clinical research, their financial disclosures and any management strategies used in the event of a conflict, and the IRB’s review of the research protocol was collected in different formats and maintained in separate databases and files in various offices. In general, at these universities, the conflict-of-interest committee or staff to the committee maintained faculty disclosure forms; the grants and contracts office maintained information about who received funding from government and nongovernment sources, and received reports when there was a financial conflict of interest related to a grant; and the technology transfer office had information about faculty relationships with industry because of its role in helping faculty patent their inventions and license them. While these entities serve distinct purposes, they have information that, collectively, is important to managing investigators’ financial conflicts of interest. Officials at the universities we visited generally acknowledged a need for better coordination among their internal offices that have information about and responsibility for investigators’ financial relationships. 
They also said that a centralized reporting system and integrated database for financial interest information could help ensure that potential conflicts are not overlooked and are monitored. Officials at several of the universities reported they were beginning to develop these linkages. Because the universities varied in their implementation of the federal financial interest regulation, we observed different practices for reviewing and managing financial conflicts of interest and saw that the universities used different mechanisms for internal coordination and communication. Officials at some of the universities we visited expressed interest in learning about best practices from HHS and other institutions for identifying and managing financial conflicts of interest in biomedical research, especially as they review and revise their policies. While there are no federal regulations or guidelines on institutional financial conflicts of interest or how to manage them, the universities we visited had policies and procedures that addressed aspects of these issues, such as the management of investment funds, technology transfer activities, use of licensing income, and the acceptance of equity in start-up companies. The five universities established a “firewall” between the overall management of university investments and academic affairs, including research activities, by using professional investment managers. University investments in small start-up companies, however, which sometimes occurred as part of technology transfer activities, were more closely tied to research activities. The universities had or were developing policies and practices to mitigate or manage potential institutional financial conflicts of interest in this area, but they varied considerably. 
One approach was to separate organizationally the technology transfer office from other research activities or to use other internal controls such as special advisory committees to make decisions that otherwise could be influenced by ties to either technology transfer or research activities. Another practice, which all five universities used to varying degrees, was to limit the amount of equity they accepted and the extent of their involvement in managing university-related start-up companies. The universities we visited established “firewalls” to keep the management of institutional investments separate from academic affairs, including research activities. One university official told us that the organizational barrier this created in large part prevented financial and academic decisions from influencing one another. The five universities used investment managers—either employees or contractors—who were responsible for the university’s portfolio and day-to-day investment decisions. The investment managers reported to an investment committee or directly to the university’s board of directors. Generally, each university’s board of directors had separate committees for investment and for academic affairs that established policies and provided oversight. In addition, these universities, in general, did not devote the university’s endowment to investments in university-related start-up companies. At four of the five universities, officials said that most investigators were not aware of institutional investments, suggesting that decisions about these major university investments were distinct from day-to-day research activities and academic affairs. However, at two of the five universities, general information on how funds are invested, without specific amounts, is available on the Internet. We were unable to readily locate such information at the remaining universities. 
In order to reduce opportunities for institutional financial conflicts of interest, two of the universities organizationally separated the technology transfer office from the research office, locating technology transfer directly under the provost or vice provost, the chief academic officer of the university. Officials at one of these universities said that this arrangement made it easier to manage institutional financial conflicts of interest and that they believed the office of research should not be influenced by technology transfer activities. The other universities located their technology transfer offices under the vice provost for research or vice chancellor for research. One university’s justification for locating offices together was that communication was better when these offices were organizationally aligned and that good communication would help prevent financial conflicts of interest from occurring. Officials at another one of these universities gave us an example of an internal control mechanism—establishing an interdisciplinary committee to make an impartial decision about which company is selected to license a product developed by a faculty member—in order to avoid an institutional financial conflict of interest. The five universities we visited had or were developing policies on accepting equity in university-related start-up companies, such as biotechnology companies. During the technology transfer phase, universities often accept equity from these companies in return for paying patent and licensing fees. The policies at the five universities varied in their stringency. Four restricted the amount of equity they would accept to a fixed proportion ranging from 2 percent to 20 percent. The remaining university specified only that its equity position should not be greater than 49 percent. 
One university’s policy stated that the university generally requires having an equity position in a company when a faculty or staff member develops technology in the course of university employment and assists a business venture in the commercialization of that idea. Four of the five universities reported that in fiscal year 1999 they spent more in legal fees for technology transfer activities than they were reimbursed through licensing agreements. Their technology transfer offices provide a service to university faculty members and staff in facilitating the transfer of technology to the private sector. As one university official said, faculty members should be able to pursue developing products from their research even if they generate little or no profit. Consequently, the universities said that they do not target opportunities for generating profit and that most of their patents and licenses do not yield substantial income. The universities do not patent or license all inventions of their faculty and staff, but they do assess whether the technology is worth the investment and assign the rights to the researcher for those they decline to patent or license. Various parties are involved in the decision to accept equity holdings in a university-related start-up. The universities we visited encouraged faculty members and staff to disclose inventions to the technology transfer office. The technology transfer staff review the disclosure to determine both its commercial potential and its ownership. Most universities own intellectual property, such as a patent, if significant university resources were used or if it was developed through research conducted at the university. The technology transfer office then attempts to find a private company to license and underwrite the cost of developing and licensing the product. At the early stage of product development, however, the commercial potential of an invention is often uncertain. 
If no private company is found to assume the financial risk for developing the product, the university may consider taking an invention through the patenting and licensing process itself and accept equity in payment from the company that will hold the license. At four of the universities, the vice provost or vice chancellor makes the decision to accept equity. At the remaining university, the provost makes the decision. The school or department of the university that employs the inventor also is often involved because, according to all five universities’ policies, it receives a portion of the licensing income. It also may provide funds to license and develop the product. After the decision to take equity, the university’s investment managers, who are responsible for the university endowments and investments, then manage the equity shares. University officials told us that once the equity is transferred to these managers, they have virtually no other contact or responsibilities for the equity. However, universities transfer the shares to the investment managers at different times. The technology transfer office at one university holds the equity until the company becomes public, then transfers the equity to the university investment office. Another university has guidelines for placing both individual and university equities in escrow. Other universities transfer the equity after the licensing agreement has been signed. In these cases, university officials said that they are not sure what investment managers do with these holdings—in particular, whether these proprietary holdings are managed differently from other equity holdings. The universities also restricted their involvement in the management of university-related start-ups because of potential institutional financial conflicts of interest in these ventures. 
Two universities we visited had written policies that specified the university would not accept representation on a start-up company’s board of directors, nor would it exercise voting rights. Another university, however, reserved the right to elect a member to the start-up’s board of directors. The member, in this case, would be required to resign if the company registered with the Securities and Exchange Commission for an initial public offering. The remaining two universities had unofficial policies and are now reexamining the appropriate roles and responsibilities of the university, such as using nonpublic information to manage equity of a university-related start-up and the role of the faculty member who established the start-up in the university’s management of the equity. In our review, we identified limitations with the HHS regulations and oversight of financial conflicts of interest in biomedical research that have implications for promoting the integrity of research and protecting human research subjects. First, no direct link exists between the HHS financial interest regulations and the human subjects protection regulations with regard to the risks to human research subjects posed by investigators’ financial conflicts of interest. Second, although the PHS and FDA regulations both address investigators’ financial interests, PHS and FDA conduct their reviews of this information at different points in the research process and have different disclosure thresholds for what constitutes a significant financial interest. Third, the universities we visited indicated some confusion about what the PHS regulation specifically required them to report to NIH. NIH and FDA have recently taken steps to improve oversight and monitoring, such as conducting site visits, taking an inventory of institutions’ financial conflict-of-interest policies, and providing guidance to reviewers of financial conflict-of-interest information. 
In addition, HHS has developed draft guidance on financial relationships in clinical research, which is promising. However, this guidance does not provide detailed advice on managing institutional financial conflicts of interest. No direct link exists between the HHS financial interest regulations and the human subjects protection regulations. Such a link would help ensure that IRBs are aware of financial conflicts of interest that might pose risks to study subjects and would help minimize those risks. The PHS and FDA financial interest regulations require disclosure to institutional officials and to sponsors, but there is no mechanism to ensure that the disclosed information reaches IRBs. And although the HHS human subjects protection regulations require IRBs to evaluate research proposals for any foreseeable risks the study might pose to human research subjects, they contain no explicit provision that investigators disclose to IRBs their financial interests. In our review of the five universities, we found that IRBs learned about investigators’ financial interests in various ways, ranging from reviewing financial disclosures directly or receiving reports from the conflict-of-interest committee to informally following up with investigators. Without a direct link between the HHS financial interest and human subjects protection regulations, either institutions are left to develop their own ways to ensure that IRBs have information about financial conflicts of interest or IRBs must seek out this information. The timing of the disclosure of financial interests differs between the PHS and FDA regulations. The PHS regulation requires institutions to report to PHS the existence of any financial conflicts of interest before expenditures are made, while FDA reviews investigators’ financial interests only when the sponsor submits a marketing application. 
The PHS regulation requires that investigators receiving NIH funding must disclose to their institutions any “significant financial interests” related to the research. The institution then must determine whether a financial interest constitutes a conflict and, if so, notify NIH that it exists and that it has been managed, reduced, or eliminated. Through the PHS regulation, therefore, institutions and funding agencies have an opportunity before research begins to protect human research subjects from potential harm from investigator conflicts of interest. But while the FDA regulation requires a clinical investigator to disclose financial interests to the sponsor of a trial before beginning to participate, FDA itself is not notified of financial interests that could present a potential conflict of interest until this information is submitted as part of a marketing application, which occurs after the research has been conducted and research subjects have already participated. Although the IRB is responsible for reviewing and minimizing risks to study subjects, the timing of the disclosure of financial interests in the FDA regulation may limit FDA’s ability to provide oversight of the process. The timing of reports to FDA regarding financial interests is geared toward the integrity of research findings. Since the objective of the FDA regulation is ensuring data integrity for the purposes of product review, the regulation focuses on payment arrangements and other financial interests of clinical investigators that could introduce bias into studies. FDA told us that it should be aware of such interests and arrangements as part of its evaluation of marketing applications. An FDA official told us that FDA expected the requirements for disclosure to help deter sponsors from hiring or working with clinical investigators who have significant financial interests that pose a conflict. PHS and FDA also differ in their threshold amounts for disclosure of financial interests. 
The PHS threshold—more than $10,000 in expected income over 12 months or more than $10,000 in equity or 5 percent ownership in a company—has not been updated for inflation since the regulation came into effect in 1995. Some have expressed concern that the PHS threshold was too low. For instance, in 1999, members of the NIH Regulatory Burden workgroup stated that the PHS disclosure threshold was too low and could trigger an excessive number of disclosures where there was no conflict that needed to be managed. FDA’s thresholds—more than $25,000 in payments from the sponsor of a clinical study to an investigator or an investigator holding more than $50,000 in equity in a publicly held company sponsoring the research—are significantly higher than the PHS threshold. The PHS regulation requires an institution to report that it has identified a financial conflict of interest related to PHS-funded research and that it has taken steps to manage, reduce, or eliminate it. Nevertheless, we found that officials from the five universities were confused about the conditions under which they needed to report to NIH and what they needed to report. At the universities we visited, we found very few reports to NIH about financial conflicts of interest. This could be because there were few occurrences of significant financial interests involving NIH grants that were deemed conflicts or because we could not determine from the reports whether the universities had followed the reporting requirements. One university operated under the mistaken assumption that it needed to report only financial conflicts of interest that could not be managed; therefore, it did not report them if they had been managed, minimized, or eliminated. At another university, we found a case of clinical research involving human subjects during our file review in which the university established a management strategy for a financial conflict of interest but did not report it to NIH. 
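The PHS and FDA disclosure triggers described above reduce to simple numeric tests. The following is a minimal sketch for comparison purposes only (function and parameter names are our own, not part of either regulation; the dollar and percentage figures are those stated in the regulations as discussed above):

```python
def meets_phs_threshold(expected_income_12mo, equity_value, ownership_pct):
    """PHS: more than $10,000 in expected income over 12 months, or more
    than $10,000 in equity, or more than 5 percent ownership in a company
    is a "significant financial interest" the investigator must disclose."""
    return (expected_income_12mo > 10_000
            or equity_value > 10_000
            or ownership_pct > 5.0)

def meets_fda_threshold(sponsor_payments, equity_in_public_sponsor):
    """FDA: more than $25,000 in payments from the study sponsor, or more
    than $50,000 in equity in a publicly held company sponsoring the
    research, must be disclosed in the marketing application."""
    return sponsor_payments > 25_000 or equity_in_public_sponsor > 50_000

# A $15,000 consulting arrangement would exceed the PHS threshold
# but fall below FDA's higher payment threshold.
print(meets_phs_threshold(15_000, 0, 0))   # True
print(meets_fda_threshold(15_000, 0))      # False
```

The gap between the two tests illustrates the harmonization concern: the same financial interest can require disclosure under one regulation but not the other.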
The university officials told us they had only reported two cases to NIH since the regulation went into effect in 1995, and neither case involved human research subjects. In some instances, confusion about the requirements and concerns about overreporting may lead to underreporting. Officials from two of the universities told us they were confused about what they needed to report to NIH. One university in our sample did not know whether it was responsible for reporting a conflict of interest if an investigator had an NIH grant and the conflict was not related to that grant. Confusion about reporting requirements also stems from the regulatory silence regarding when financial interests should be viewed as posing a potential conflict. Although the PHS regulation defines a significant financial interest, it allows university officials to determine whether such interests pose conflicts for investigators. Only those financial interests meeting the minimum thresholds that are deemed to be conflicts of interest must be reported. Thus, for example, at one of the universities, a department head deemed that a financial relationship was not a material conflict, even though it was considered a significant financial interest under the PHS regulations. NIH has taken steps recently to improve compliance with the financial conflict-of-interest regulation by centralizing institutions’ reports of conflicts of interest at the Office of Extramural Research (OER), having OER conduct site visits, and taking an inventory of institutions’ financial conflict-of-interest policies. NIH is responsible for ensuring that institutions comply with the PHS regulation on financial conflicts of interest. It may do this by reviewing an institution’s policies and procedures on financial conflicts of interest, monitoring reports of conflicts, conducting site visits, examining institutions’ files, and reviewing actions taken by institutions to manage financial conflicts of interest. 
Institutions’ reports of conflicts are sent to the funding institutes and centers of NIH and are kept with the grant files. Because these reports contain no details about the conflict and its management, NIH program officials have little information to follow up on. NIH is authorized to request more information about conflicts of interest from institutions, but an official at NIH told us that NIH rarely seeks further information. In late 2000, NIH’s institutes and centers began providing a copy of grantee institutions’ reports of financial conflicts of interest to OER, which maintains summary data on conflicts of interest. In fiscal year 2000, OER visited 10 institutions receiving NIH funding to assess institutional understanding of NIH policies and requirements, and in fiscal year 2001, OER visited 8 more institutions. Financial conflict of interest was one of many topics addressed. During the visits, the institutions’ officials discussed with NIH staff information in financial conflict-of-interest files, including meeting minutes, documents, and correspondence concerning how financial conflicts of interest had been managed, reduced, or eliminated. In its findings and observations on the site visits, NIH noted some of the concerns we have identified. For example, NIH found that some institutions were confused about the definition of a significant financial interest. In addition, some faculty expressed fear that full disclosure of financial interests might result in limiting their institutional salary or adversely affect NIH funding. NIH officials told us that if they discovered a weakness during the visit, they provided guidance and information to help the institution make appropriate improvements. 
In January 2001, NIH asked 300 institutions with the largest amount of NIH funding to send it copies of their financial conflict-of-interest policies after officials learned that not all research institutions have an investigator financial conflict-of-interest policy in place. A survey published in 2000 of the 250 medical schools and other research institutions with the highest NIH funding had found that 5 medical schools and 10 other research institutions reported they did not have such a policy. As of September 2001, NIH had received policies from 293 of 300 grantee institutions, and all of the top 100 funded institutions had a conflict-of-interest policy in place. Officials at NIH said they plan to review the policies they have collected to see if they contain all the required elements. FDA also recently has taken action to improve compliance with its financial interest regulation by providing guidance for FDA reviewers of drug marketing applications. FDA’s regulatory role allows it to review the information in investigator financial disclosure reports in marketing applications. If FDA determines that a financial interest of any clinical investigator raises questions about the integrity of the data, FDA may audit the data, ask the applicant to submit further analyses of the data or conduct additional independent studies, or refuse to use the data from that study in support of the product application. Each of FDA’s centers responsible for human drugs, biological products, and medical devices determines how it will implement the financial interest regulation. Until recently, FDA did not provide systematic guidance to its reviewers about evaluating investigator financial disclosure reports. One of FDA’s centers has provided guidance by creating a clinical review template for drug marketing application reviewers that includes brief guidance on reviewing financial disclosures. 
In December 2000, HHS developed draft guidance entitled “Financial Relationships in Clinical Research: Issues for Institutions, Clinical Investigators, and IRBs to Consider When Dealing With Issues of Financial Interests and Human Subject Protection: Draft Interim Guidance.” This guidance drew on information obtained at a conference HHS held in August 2000 on financial conflicts of interest in clinical research and comments it received. The document contains guidance for institutions, clinical investigators, and IRBs to assist in their deliberations concerning financial relationships and potential and real conflicts of interest. The document is also intended to facilitate disclosure of such conflicts in consent forms. This document was posted on the OHRP Web site in January 2001 but has not been published in the Federal Register. According to HHS officials, the draft is being revised and will be published as “points for consideration.” While it provides promising guidance for identifying and managing individual investigator financial conflicts of interest, it is limited in its discussion of institutional financial conflicts of interest. The draft guidance states that institutions should have policies and procedures on institutional financial conflicts of interest; establish an institutional conflict-of-interest committee to review potential conflicts and their management when considering entering into business agreements; and document and disclose to the IRB institutional financial relationships with a commercial sponsor of a study. But the document does not provide detailed guidance on the appropriate ways of addressing institutional conflicts of interest, particularly institutional relationships with university-related start-up companies. HHS received 36 comments on its draft guidance from health care professionals, institution officials, and representatives of the patient community, FDA, and academic associations. 
Some members of the research community expressed concern about the guidance’s usefulness and appropriateness. These commenters also noted that the academic community had not yet fully discussed institutional financial conflicts of interest and was still grappling with a definition. Some research community members disagreed with giving responsibilities regarding financial conflicts of interest to already overburdened IRBs, which could distract them from their role of protecting human research subjects. Another commenter stated that the draft interim guidance emphasized academic institutions without taking into account the perspective of other types of research centers, such as hospitals and freestanding centers. After reviewing the draft guidance and comments, the National Human Research Protections Advisory Committee (NHRPAC), an advisory group to HHS, recommended that the Secretary of Health and Human Services move to release the guidance. NHRPAC also recommended that, in the absence of consistent federal regulations, institutions should use the PHS threshold for disclosure of financial interests but that, ultimately, the PHS and FDA thresholds should be harmonized. All research subject to HHS regulations, funded privately or publicly, then would be held to the same standards. Steps toward harmonization, in NHRPAC’s view, would include regulatory measures that go beyond the draft interim guidance. In addition, NHRPAC stated that IRBs should not have to collect, analyze, and provide remedies for financial conflicts of interest but should rely on a conflict-of-interest entity (such as a committee or an individual charged with conflict-of-interest review responsibilities) to handle the matters and report formally to the IRB as part of the research application.
NHRPAC supported HHS’ efforts to identify and define institutional financial conflicts of interest and methods to manage them and suggested that such interests could be disclosed to the institution’s conflict-of-interest entity. NHRPAC recommended that specific, detailed information be provided in the informed consent process when an actual conflict of interest has been identified during financial disclosure and, in cases in which a potential conflict is conceivable, to make general information about financial interests available, with more detailed information available upon request. Finally, NHRPAC recommended that institutions audit and monitor compliance with their own institutional policies and procedures and develop and enforce disciplinary standards for violations. The final version of the guidance is scheduled for completion this fall. The five universities in our study implemented the PHS regulation on individual investigators’ financial interests in different ways, and they had or were developing policies and procedures to address aspects of institutional financial conflicts of interest. The universities expressed interest in learning about others’ policies and procedures, such as how investigators’ financial disclosure information was communicated to IRBs or ways the universities monitored financial conflicts of interest. Having information on the best practices of institutions for dealing with investigator and institutional financial conflicts of interest could help institutions develop policies and procedures that would best meet their needs. HHS’ proposed guidance on financial relationships in clinical research is promising and will help institutions implement the PHS regulation on investigators’ financial interests. 
With some revision, this guidance could link the HHS financial interest regulations with the human subjects protection regulations, making sure that IRBs are aware of financial conflicts of interest to help minimize risks to study subjects. However, the guidance is limited in its treatment of institutional conflicts of interest. As financial relationships between institutions and industry proliferate, the need for guidance in this area increases. Research institutions are not required to apply their financial conflict-of-interest policies and procedures, as the five we studied did, to both publicly funded and privately funded research. Furthermore, a significant and growing amount of biomedical research is now conducted outside of universities by entities that may not be operating under broad financial conflict-of-interest policies and procedures. Addressing potential financial conflicts of interest in these other settings will be important to ensure the integrity of research and the well-being of human research subjects. To ensure the integrity of biomedical research and the protection of human research subjects, HHS needs to improve the implementation of its financial interest regulations and its oversight of financial conflicts of interest. Specifically, we recommend that the Secretary of Health and Human Services take the following actions: Develop and communicate information on best practices for institutions to consider for identifying and managing investigator and institutional financial conflicts of interest in biomedical research. Develop specific guidance or regulations concerning institutional financial conflicts of interest. HHS reviewed a draft of this report and provided comments, which are included as appendix III. HHS said that the report gives a useful overview of how some academic research institutions handle financial conflicts of interest and clinical research issues. HHS concurred with our recommendations.
With regard to our recommendation to develop information on best practices, HHS stated that NIH has efforts under way to collect such information by making site visits to institutions and analyzing financial conflict-of-interest policies from institutions. NIH plans to post this information on its Web site. Regarding our recommendation to develop guidance or regulations concerning institutional conflicts of interest, HHS said that NIH’s Regulatory Burden Reduction Committee has begun to address institutional conflicts of interest. To the extent that specific policies or guidance on human subjects protection and financial conflict of interest are developed, HHS said they will be coordinated within the Department. HHS made several specific comments. It noted that financial conflicts of interest occur in the context of all areas of research, not just clinical research. We agree with this assessment, but our report focuses on biomedical research funded or regulated by HHS. HHS suggested that we expand on the rationale for selecting the five universities in our report in order to better explain the institutional variability we observed. We did not add any information because we believe appendix II clearly states our selection criteria and the sample is too small to draw conclusions about how specific characteristics of the universities relate to policy differences. HHS also noted that one reason NIH typically obtains only limited information about financial conflicts of interest from institutions is that any information NIH has about these matters would be subject to disclosure under the Freedom of Information Act. We agree that financial details disclosed by investigators to NIH potentially are subject to disclosure under the Freedom of Information Act. However, as FDA has recognized in its treatment of such information, the likelihood of such disclosures is slim, and they would be made only when necessary to effect a public purpose that outweighs a particular privacy interest.
FDA decides such matters on a case-by-case basis and has recognized that, in some cases, there may be legitimate public interests in the financial information of investigators that warrant its disclosure. In its comments, HHS also questioned the purpose for which follow-up information would be gathered. We revised the report to avoid implying that NIH should routinely seek further information and to emphasize instead that NIH already has authority to obtain additional information on the conflict of interest if it chooses to do so. We believe, however, that there may be instances where NIH may need to know the nature and details of a financial conflict of interest to determine whether it was acted on appropriately. HHS also stated that concerns remain that the PHS regulation on financial interests does not specifically or adequately address the impact of financial relationships on the interests and welfare of human subjects and added that an IRB may not be the most appropriate body to consider financial conflicts of interest. We have added a discussion about the absence of a link between the HHS financial interest regulations and the human subjects protection regulations. We agree with HHS that an IRB may not be the most appropriate body to review investigators’ financial interests and that an IRB can also learn about any risks from conflicts of interest by receiving information from a conflict-of-interest committee or by asking for information directly from investigators. HHS also provided technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies to the Secretary of Health and Human Services, the Director of OHRP, the Acting Director of NIH, the Acting Principal Deputy Commissioner of FDA, appropriate congressional committees, and others who are interested.
We will also make copies available to others on request. If you or your staff have any questions, please contact me at (202) 512-7119 or Marcia Crosse at (202) 512-3407. Other major contributors are listed in appendix IV. Under the PHS regulation, required financial disclosures of investigators must be provided to the institution by the time the grant application is submitted to PHS. Significant financial interests are defined as interests that would be affected by the research, or interests in entities whose financial interests reasonably appear to be affected by the research, including equity interests exceeding $10,000 or 5 percent ownership in a single entity; salaries, royalties, or other payments (not from the applicant institution) expected to total more than $10,000 in the next year; and patents. Grant applications to PHS must certify that the institution has implemented a written and enforced administrative process to identify, manage, reduce, or eliminate conflicting interests; that all conflicts have been reported; and that each conflict will be managed, reduced, or eliminated before the expenditure of PHS funds. The FDA regulation applies to applicants who submit marketing applications for human drugs, biological products, or medical devices and who submit clinical studies in support of those applications.
Financial interests and arrangements of the investigator: A financial interest or arrangement subject to disclosure includes (1) an arrangement between the sponsor and the investigator (or spouse or dependent child) in which the value of the investigator’s compensation could be influenced by the study outcome; (2) significant payments from the sponsor to the investigator or institution supporting investigator activities that are valued at more than $25,000 beyond the costs incurred in conducting the study; (3) proprietary interests, including patents, held by the investigator in the product; or (4) significant equity interests in the sponsor of a covered study, either interests whose value cannot be readily determined through reference to public prices or interests valued at more than $50,000 if the sponsor is a publicly traded corporation. Investigators must update financial disclosure reports annually or as new interests are obtained. Investigators must provide the sponsor with sufficient, accurate financial information needed to allow subsequent disclosure or certification. The applicant must submit, for each investigator who participates in a covered study, either certification that no financial interest or arrangement listed in the regulation exists or disclosure of the nature of the interest or arrangement to the agency. Certifications and disclosures must accompany the marketing application. Investigators must update financial disclosure reports during the course of the study or for 1 year following its completion. The applicant also must disclose any steps taken to minimize the potential for bias. To address our objectives, we reviewed the HHS regulations pertaining to financial interests in biomedical research. In addition, we interviewed officials at the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and the Office for Human Research Protections (OHRP).
We also interviewed staff at the American Association of Medical Colleges, the Association of Academic Health Centers, the Association of American Universities, the National Association of College and University Business Officers, the National Bioethics Advisory Commission, and HHS’ Office of Inspector General. We also visited five universities that received federal funding for biomedical research in order to understand how they were implementing the HHS financial interest regulations. Our sample selection and data collection are described in the following sections. Our sample included public and private academic institutions. Accordingly, this report does not address how financial conflicts of interest in clinical research are managed at hospitals or other research institutions. We selected universities that received large amounts of research funding from NIH (top 20 universities); had extensive technology transfer activities, according to the Association of University Technology Managers’ (AUTM) 1999 licensing survey; had not been extensively scrutinized, audited, or targeted recently for review by NIH’s Office of Extramural Research or OHRP; and were located in different geographic areas of the United States. We visited the following academic institutions: University of California-Los Angeles; University of North Carolina, Chapel Hill; University of Washington, Seattle; Washington University, St. Louis; and Yale University, New Haven. Given our selection criteria, our sample is biased toward large research universities with complex organizational structures. Medium and small universities may not necessarily have comparable organizational structures. Consequently, our study results are not generalizable to all universities.
At each of the five universities, we interviewed the following officials: the institution official responsible for research; the head of the conflict-of-interest committee or the institution official responsible for managing conflict-of-interest issues, or both; the chairperson or a member of the institutional review board (IRB), or both; the head of the technology transfer office; and two investigators selected by the university (one receiving NIH funding for research and another receiving private funding). We reviewed the universities’ policies and procedures on financial conflicts of interest, sponsored research, outside professional activities, and equity acquisition. We also reviewed a sample of investigators’ financial disclosures for fiscal years 1999 and 2000. Some universities provided copies of these financial disclosures and the university’s management plans with the names of investigators and sponsors removed. To obtain information on the percentage of university clinical investigators with financial interests related to their research, we requested information on the total number of clinical investigators receiving sponsored research funding and the number of those clinical investigators who disclosed financial interests each year from 1995 through 2000. We also requested information on whether the research funding was private or public, the type of financial interests disclosed (for example, income, equity interests, or intellectual property rights), and the type of management strategies employed. We conducted our work from February through September 2001 in accordance with generally accepted government auditing standards. In addition to the person named above, Anne Dievler, Bertha Dong, Romy Gelb, Julian Klazkin, and Elizabeth Morrison made important contributions to this report.
Financial relationships between individual investigators or their research institutions and private industry have yielded significant results, including treatments for such diseases as AIDS and strokes. However, some collaborations have raised concerns that the focus on financial reward might compromise the integrity of the research and the safety of human research subjects. GAO reviewed five universities with broad policies and procedures on financial conflicts of interest. All five had difficulty providing basic data on individual investigators' financial conflicts of interest in clinical research involving human subjects. The universities acknowledged a need for better coordination of information on investigators' financial relationships, and several universities were developing ways to do so. Policies and procedures at the five universities addressed financial conflicts of interest affecting institutions, including technology transfer activities and financial relationships with small start-up companies that market products developed by the universities. The Department of Health and Human Services has had limited success in promoting the integrity of biomedical research and protecting human subjects. HHS has taken steps to improve its oversight and monitoring and has drafted guidance on financial conflicts of interest, but this guidance does not provide detailed advice on how to manage institutional conflicts of interest.
The Internet is a vast network of interconnected networks. It is used by governments, businesses, research institutions, and individuals around the world to communicate, engage in commerce, do research, educate, and entertain. While most Americans are familiar with Internet service providers—such as America Online and EarthLink—that provide consumers with a pathway, or “on-ramp,” to the Internet, many are less familiar with how the Internet was developed, the underlying structure of the Internet, and how it works. In the late 1960s and the 1970s, the Department of Defense’s Advanced Research Projects Agency developed a network to allow multiple universities to communicate and share computing resources. In the ensuing decades, this project grew to become a large network of networks and was joined with an array of scientific and academic computers funded by the National Science Foundation. This expanded network provided the backbone infrastructure of today’s Internet. In 1995, the federal government began to turn the backbone of the Internet over to a consortium of commercial backbone providers. From that point on, the Internet infrastructure was owned and operated by private companies—including telecommunications companies, cable companies, and Internet service providers. Today’s Internet connects millions of small, medium, and large networks. When an Internet user wants to access a Web site or to send an e-mail to someone who is connected to the Internet through a different Internet service provider, the data must be transferred between networks. Transit across the Internet is provided by either national backbone providers, regional network operators, or a combination of both. National backbone providers are companies that own and operate high-capacity, long-haul backbone networks. These providers transmit data traffic over long distances using high-speed, fiber-optic lines. 
Because national backbone operators do not service all locations worldwide, regional network providers supplement the long-haul traffic by providing regional service. Data cross between networks at Internet exchange points—which can be either hub points where multiple networks exchange data or private interconnection points arranged by transit providers. At these exchange points, computer systems called routers determine the optimal path for the data to reach their destination. The data then continue their path through the national and regional networks and exchange points, as necessary, to reach the recipient’s Internet service provider and the recipient (see fig. 1). The networks that make up the Internet communicate via standardized rules called protocols. These rules can be considered voluntary because there is no formal institutional or governmental mechanism for enforcing them. However, if any computer deviates from accepted standards, it risks losing the ability to communicate with other computers that follow the standards. Thus, the rules are essentially self-enforcing. One critical set of rules is the Transmission Control Protocol/Internet Protocol suite. These protocols define a detailed process that a sender and receiver agree upon for exchanging data. They describe the flow of data between the physical connection to the network and on to the end-user application. Specifically, these protocols control the addressing of a message by the sender, its division into packets, its transmission across networks, and its reassembly and verification by the receiver. This protocol suite has become the de facto communication standard of the Internet because many standard services (including mail transfer, news, and Web pages) are available on systems that support these protocols. Another critical set of protocols, collectively known as the Domain Name System, ensures the uniqueness of each e-mail and Web site address.
This system links names like www.senate.gov with the underlying numerical addresses that computers use to communicate with each other. It translates names into addresses and back again in a process invisible to the end user. This process relies on a system of servers, called domain name servers, which store data linking names with numbers. Each domain name server stores a limited set of names and numbers. They are linked by a series of 13 root servers, which coordinate the data and allow users to find the server that identifies the sites they want to reach. Domain name servers are organized into a hierarchy that parallels the organization of the domain names. For example, when someone wants to reach the Web site at www.senate.gov, his or her computer will ask one of the root servers for help. The root server will direct the query to a second server that knows the location of names ending in the .gov top-level domain. If the address includes a subdomain, the second server refers the query to a third server—in this case, one that knows the addresses for all names ending in senate.gov. The third server will then respond to the request with a numerical address, which the original requester uses to establish a direct connection with the www.senate.gov site. Figure 2 illustrates this example. Another critical set of rules is called the Border Gateway Protocol—a protocol for routing packets between autonomous systems. This protocol is used by routers located at network nodes to direct traffic across the Internet. Typically, routers that use this protocol maintain a routing table that lists all feasible paths to a particular network. They also determine metrics associated with each path (such as cost, stability, and speed), so that the best available path can be chosen. This protocol is important because if a certain path becomes unavailable, the system will send data over the next best path (see fig. 3). 
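The root-to-leaf walk described above can be sketched as an iterative lookup over a toy in-memory namespace. Everything below is invented for illustration (the server names, the referral tables, and the numerical address); a real resolver sends queries over the network to actual root, top-level-domain, and authoritative name servers and caches the referrals it receives.

```python
# Sketch of iterative DNS resolution over a mock namespace.
# Each "server" knows only one level of the hierarchy and either
# answers with an address or refers the query to a lower-level server.

ROOT = {"gov": "gov-server"}  # the root knows which server handles .gov
SERVERS = {
    "gov-server": {"senate.gov": "senate-server"},
    "senate-server": {"www.senate.gov": "149.121.3.45"},  # made-up address
}

def resolve(name):
    """Walk the hierarchy from the root, one label at a time."""
    labels = name.split(".")          # e.g., ["www", "senate", "gov"]
    table = ROOT
    # Ask about progressively longer suffixes: gov, senate.gov, www.senate.gov
    for i in range(len(labels) - 1, -1, -1):
        suffix = ".".join(labels[i:])
        answer = table.get(suffix)
        if answer is None:
            raise LookupError(f"no record for {suffix}")
        if answer in SERVERS:         # a referral to a lower-level server
            table = SERVERS[answer]
        else:                         # a final numerical address
            return answer
    raise LookupError(f"resolution of {name} did not terminate")

print(resolve("www.senate.gov"))      # prints the made-up address above
```

In practice the requester’s own computer does little of this work; it typically asks a recursive resolver (often run by its Internet service provider), which performs the iterative walk on its behalf and caches the results.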
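The path-selection behavior described for the Border Gateway Protocol (pick the best feasible path by its metrics, and fall back to the next best if that path becomes unavailable) can be sketched with a toy routing table. The single cost number and the path names here are invented for illustration; real BGP routers compare several attributes in a fixed order rather than one metric.

```python
# Sketch of best-path selection with failover, assuming each feasible
# path to a destination network carries one cost metric (lower is better).

def best_path(routing_table, destination, down=()):
    """Return the lowest-cost path not currently marked down, or None."""
    candidates = [
        (cost, path)
        for path, cost in routing_table[destination].items()
        if path not in down
    ]
    if not candidates:
        return None                   # destination is unreachable
    return min(candidates)[1]

# Toy routing table: destination network -> {path name: cost}
table = {"198.51.100.0/24": {"via-backbone-A": 10, "via-backbone-B": 25}}

print(best_path(table, "198.51.100.0/24"))                           # via-backbone-A
print(best_path(table, "198.51.100.0/24", down={"via-backbone-A"}))  # via-backbone-B
```

When the lower-cost path is withdrawn, the next lookup immediately yields the remaining path, which is the failover behavior the protocol provides.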
From its origins in the 1960s as a research project sponsored by the U.S. government, the Internet has grown increasingly important to both American and foreign businesses and consumers, serving as the medium for hundreds of billions of dollars of commerce each year. According to the U.S. Census Bureau, retail e-commerce sales in the United States were an estimated $86 billion in 2005. The Internet has also become an extended information and communications infrastructure, supporting vital services such as power distribution, health care, law enforcement, and national defense. Federal policy recognizes the need to protect critical infrastructures. In December 2003, the President updated a national directive for federal departments and agencies to identify and prioritize critical infrastructure sectors and key resources and to protect them from terrorist attack. (See table 1 for a list of critical infrastructure sectors.) This directive recognized that since a large portion of these critical infrastructures is owned and operated by the private sector, a public/private partnership is crucial for the successful protection of these critical infrastructures. In its plan for protecting these critical infrastructures, DHS recognizes that the Internet is a key resource composed of assets within both the information technology and the telecommunications sectors. It notes that the Internet is used by all sectors to varying degrees, and that it provides information and communications to meet the needs of businesses, government, and the other critical infrastructure sectors. Similarly, the national cyberspace strategy states that cyberspace is the nervous system supporting our nation’s critical infrastructures and recognizes the Internet as the core of our information infrastructure. It is also important to note that there are critical interdependencies between sectors.
For example, the telecommunications and information technology sectors, like many other sectors, depend heavily on the energy sector. In recent years, cyber attacks involving malicious software or hacking have been increasing in frequency and complexity. These attacks can come from a variety of actors. Table 2 lists sources of cyber threats that have been identified by the U.S. intelligence community. An intelligence report on global trends forecast that terrorists may develop capabilities to conduct both cyber and physical attacks against nodes of the world’s information infrastructure—including the Internet and other systems that control critical industrial processes—such as electricity grids, refineries, and flood control mechanisms. The report stated that terrorists already have specified the U.S. information infrastructure as a target and currently are capable of physical attacks that would cause at least brief, isolated disruptions. According to a Congressional Research Service report, the annual worldwide cost of major cyber attacks was, on average, $13.5 billion from 2000 to 2003. A more recently published report estimated that the worldwide financial impact of virus attacks was $17.5 billion in 2004 and $14.2 billion in 2005. In the event of a major Internet disruption, multiple organizations could help recover Internet service. These organizations include private industry, collaborative groups, and government organizations. Private industry is central to Internet recovery because private companies own the vast majority of the Internet’s infrastructure and often have response plans. Collaborative groups—including working groups and industry councils—provide information-sharing mechanisms to allow private organizations to restore services. Additionally, government initiatives could facilitate responding to major Internet disruptions. 
Private industry organizations are critical to recovering Internet services in the event of a major disruption because they own and operate the vast majority of the Internet’s infrastructure. This group of Internet infrastructure owners and operators includes telecommunications companies (such as AT&T and Verizon Communications), cable companies (such as Cox Communications and Time Warner Cable), Internet service providers (such as AOL and EarthLink), and root server operators (such as VeriSign and the University of Maryland). These entities own or operate cable lines; telephone lines; fiber-optic cables; or critical core systems, such as network routers and domain name servers. These private companies currently deal with cyber attacks and physical disruptions on the Internet on a regular basis. According to representatives of Internet infrastructure owners and operators, these firms typically have disaster recovery plans in place. For example, a representative from a major telecommunications company stated that the company has emergency response plans for its primary and secondary emergency operations centers. Similarly, representatives of a cable trade association reported that most cable companies have standard disaster recovery plans and a network operations center from which they can monitor recovery operations. Infrastructure representatives also noted that in the event of a network disruption, companies that are competitors work together to resolve the disruption. They said that although the companies are competitors, they have a business interest in cooperating because it is common to rely on each other’s networks. For example, a representative of a major telecommunications company noted that the company has “mutual-aid” agreements with its competitors to exchange technicians and hardware in the event of an emergency. 
Collaborative groups—working groups and industry councils that the private and public sectors have established to allow technical information sharing—help handle and recover from Internet disruptions. These collaborative groups are usually composed of individuals and experts from separate organizations. In the event of a major Internet disruption, these groups allow individuals from different companies to exchange information in order to assess the scope of the disruption and to restore services. Table 3 provides descriptions of selected collaborative groups. Federal policies and plans assign DHS lead responsibility for facilitating a public/private response to and recovery from major Internet disruptions. Within DHS, responsibilities reside in two divisions within the Preparedness Directorate: the National Cyber Security Division (NCSD) and the National Communications System (NCS). NCSD operates the U.S. Computer Emergency Readiness Team (US-CERT), which coordinates defense against and response to cyber attacks. The other division, NCS, provides programs and services that ensure the resilience of the telecommunications infrastructure in times of crisis. In June 2003, DHS created NCSD to serve as a national focal point for addressing cybersecurity issues and to coordinate the implementation of the National Strategy to Secure Cyberspace. Its mission is to secure cyberspace and America’s cyber assets in cooperation with public, private, and international entities. NCSD is the government lead on a public/private partnership supporting the US-CERT, an operational organization responsible for analyzing and addressing cyber threats and vulnerabilities and disseminating cyber-threat warning information. In the event of an Internet disruption, US-CERT facilitates coordination of recovery activities with the network and security operations centers of owners and operators of the Internet and with government incident response teams. 
NCSD also serves as the lead for the federal government’s cyber incident response through the National Cyber Response Coordination Group. This group is the principal federal interagency mechanism for coordinating the preparation for, and response to, significant cyber incidents—such as a major Internet disruption. In the event of a major disruption, the group convenes to facilitate intragovernmental and public/private preparedness and operations. The group brings together officials from national security, law enforcement, defense, intelligence, and other government agencies that maintain significant cybersecurity responsibilities and capabilities. Members use their established relationships with the private sector and with state and local governments to help coordinate and share situational awareness, manage a cyber crisis, develop courses of action, and devise response and recovery strategies. NCSD also recently formed the Internet Disruption Working Group, which is a partnership between NCSD, NCS, the Department of the Treasury, the Department of Defense, and private-sector companies, to plan for ways to improve DHS’s ability to respond to and recover from major Internet disruptions. The goals of the working group are to identify and prioritize the short-term protective measures necessary to prevent major disruptions to the Internet or reduce their consequences and to identify reconstitution measures in the event of a major disruption. NCS is responsible for ensuring a communications infrastructure for the federal government under all conditions—ranging from normal situations to national emergencies and international crises. NCS is composed of members from 23 federal departments and agencies. 
Although originally focused on traditional telephone service, NCS has taken a larger role in Internet-related issues due to the convergence of the Internet and telecommunications, and it has partnered with NCSD and private companies to address issues related to major Internet disruptions. For example, NCS now helps manage issues related to disruptions of the Internet backbone (e.g., high-capacity data routes). The National Coordinating Center for Telecommunications (National Coordinating Center), which serves as the operational component of NCS, also has a role in Internet recovery. The center has eight resident industry members (representing companies that were originally telephone providers) as well as additional nonresident members, including representatives of newer, more Internet-oriented companies. During a major disruption to telecommunications services, the center communicates with both resident and nonresident members, with the goal of restoring service as soon as possible. In the event of a major Internet disruption, the National Coordinating Center plays a role in the recovery effort through its partnerships and collaboration with telecommunications and Internet-related companies. The Federal Communications Commission can support Internet recovery by coordinating resources for restoring the basic communications infrastructures over which Internet services run. For example, after Hurricane Katrina, the commission granted temporary authority for private companies to set up wireless Internet communications supporting various relief groups; federal, state, and local government agencies; businesses; and victims in the disaster areas. The commission also sponsors the Network Reliability and Interoperability Council. A primary goal of the council is to prevent Internet disruptions from occurring in the first place.
The council has developed a list of best practices for Internet disaster recovery that provides guidance on strategic issues (such as exercising disaster recovery plans) as well as operational issues (such as how to restore a corrupted domain name server). In May 2005, we issued a report on DHS’s efforts to fulfill its cybersecurity responsibilities. We noted that while DHS had initiated multiple efforts to fulfill its responsibilities, it had not fully addressed any of the 13 key cybersecurity responsibilities (see table 4) noted in federal law and policy. For example, we noted that the department established US-CERT as a public/private partnership to make cybersecurity a coordinated national effort, and it established forums to build greater trust and information sharing among federal officials with information security responsibilities and with law enforcement entities. However, DHS had not yet developed a national cyber threat and vulnerability assessment or government/industry cybersecurity recovery plans—including a plan for recovering key Internet functions. We also noted in our May 2005 report that DHS faced a number of challenges that have impeded its ability to fulfill its cyber responsibilities. These challenges included achieving organizational stability, gaining organizational authority, overcoming hiring and contracting issues, increasing awareness of cybersecurity roles and capabilities, establishing effective partnerships with stakeholders, achieving two-way information sharing with stakeholders, and demonstrating the value that DHS can provide. We made recommendations to the department to strengthen its ability to implement key responsibilities by completing critical activities and resolving underlying challenges. DHS agreed that strengthening cybersecurity is central to protecting the nation’s critical infrastructures and that much remained to be done, but it has not yet addressed our recommendations.
We continue to evaluate DHS’s progress in implementing our recommendations. The Internet’s infrastructure is vulnerable to disruptions in service due to terrorist and other malicious attacks, natural disasters, accidents, technological problems, or a combination of the above. Disruptions to Internet service can be caused by cyber and physical incidents—both intentional and unintentional. Private network operators routinely deal with Internet disruptions of both types. Recent cyber and physical incidents have caused localized or regional disruptions, highlighting the importance of recovery planning. However, these incidents have also shown the Internet as a whole to be flexible and resilient. Even in severe circumstances, the Internet has not yet suffered a catastrophic failure. The Internet can be disrupted by either cyber or physical incidents, or by a combination of the two. These incidents can be intentional (such as a cyber attack or a terrorist attack on our nation’s physical infrastructure) or unintentional (such as a software malfunction or a natural disaster). Table 5 provides examples of intentional and unintentional cyber and physical incidents. A cyber incident could cause a disruption if it affects a network protocol or an application that is integral to the working of the Internet. A cyber incident could be unintended (such as a software problem) or intended (such as an attack using malicious software or hacking that causes a disruption of service). Unintended incidents have caused significant disruptions in the past. For example, in 1998, a major Internet backbone provider had a massive outage due to a software flaw in the infrastructure that caused systems to crash; in 2002, a different provider had an outage due to a router with a faulty configuration. Intentional incidents, or malicious attacks, have been increasing in frequency and complexity and recently have been linked to organized crime. Examples of malicious attacks include viruses and worms. 
Viruses and worms are often used to launch denial-of-service attacks, which flood targeted networks and systems with so much data that regular traffic is either slowed or stopped. Such attacks have been used ever since the groundbreaking Morris worm in November 1988, which brought 10 percent of the systems connected to the Internet to a halt. More recently, in 2001, the Code Red worm used a denial-of-service attack to affect millions of computer users by shutting down Web sites, slowing Internet service, and disrupting business and government operations. Cyber attacks can also cause Internet disruptions by targeting specific protocols, such as the Border Gateway Protocol or the Domain Name System. If a vulnerability in the Border Gateway Protocol was exploited, the ability of Internet traffic to reach its destination could be limited or halted. Some experts believe that it could take weeks to recover from a major attack on the Border Gateway Protocol. The Domain Name System is also susceptible to various attacks, including the corruption of stored domain name information and the misdirection of addresses. Recently, hackers have used domain name servers to launch denial-of-service attacks—thereby amplifying the strength of the attacks. A network security expert stated that there have been numerous attacks of this type recently, and that some attacks have targeted top-level domains and Internet service providers. Attacks against top-level domain servers could disrupt users’ capability to connect to various Internet addresses. It could take several days to recover from a massive disruption of the domain name server system. As the number of individuals with computer skills has increased, more intrusion, or hacking, tools have become readily available and relatively easy to use. Frequently, skilled hackers develop exploitation tools and post them on Internet hacking sites. 
These tools are then readily available for others to download, allowing even inexperienced programmers to create a computer virus or to literally point and click to launch an attack. According to the National Institute of Standards and Technology, 30 to 40 new attack tools are posted on the Internet every month. Experts also agree that there has been a steady advance in the sophistication and effectiveness of attack technology. In the case of insider incidents, these tools may not even be necessary, because insiders often have unfettered access to their employers’ computer systems. In one incident, an insider installed unauthorized backdoor access to his employer’s systems. After his termination, the insider used these back doors to gain access to the systems and to delete accounts, change passwords, and delete security logs. While this is a case of an insider disrupting a single network, an insider could also use this knowledge to disrupt the operation of an Internet service provider. For example, an insider at a company that develops critical routing hardware might be able to use specific technical knowledge of the products to create an attack that could disrupt networks that use that particular equipment. To date, cyber attacks have caused various degrees of damage. The following case studies provide examples of cyber attacks; the effects of these attacks; and the government’s role, if any, in recovery (see figs. 4 and 5). A physical incident could be caused by an intentional attack, a natural disaster, or an accident. For example, terrorist attacks, storms, earthquakes, and unintentional cutting of cables can all cause physical disruptions. Physical incidents causing Internet and telecommunications disruptions occur regularly—often as a result of the accidental cutting of cable lines. 
Physical incidents could affect various aspects of the Internet infrastructure, including underground or undersea cables and facilities that house telecommunications equipment, Internet exchange points, or Internet service providers. Such incidents could also disrupt the power infrastructure—leading to an extended power outage and thereby disrupting telecommunications and Internet service. The following case studies provide examples of physical incidents that caused Internet disruptions and the effect of these incidents (see figs. 6 to 8). Since its inception, the Internet has experienced disruptions of varying scale—from fast-spreading worms, to denial-of-service attacks, to physical destruction of key infrastructure components. However, the Internet has yet to experience a catastrophic disruption. Experts agree—and case studies show—that the Internet is resilient and flexible enough to handle and recover from many types of disruptions. While specific regions may experience Internet disruptions, backup servers and the ability to reroute traffic limit the effect of many targeted attacks. These efforts highlight the importance of recovery planning. However, it is possible that a complex attack or set of attacks could cause the Internet to fail. It is also possible that a series of attacks against the Internet could undermine users’ trust—and thereby reduce the Internet’s utility. Several federal laws and regulations provide broad guidance that applies to the Internet infrastructure, but it is not clear how useful these authorities would be in helping to recover from a major Internet disruption, because some do not specifically address Internet recovery and others have seldom been used. Pertinent laws and regulations address critical infrastructure protection, federal disaster response, and the telecommunications infrastructure (see app. II for additional details). 
Specifically, the Homeland Security Act of 2002 and Homeland Security Presidential Directive 7 establish critical infrastructure protection as a national goal and describe a strategy for cooperative efforts by the government and the private sector to protect the cyber- and physical-based systems that are essential to the operations of both the economy and the government. These authorities apply to the Internet because it is a core communications infrastructure supporting the information technology and telecommunications sectors. However, this law and directive do not specifically address roles and responsibilities in the event of an Internet disruption. Regarding federal disaster response, the Defense Production Act and the Stafford Act provide authority to federal agencies to plan for and respond to incidents of national significance, such as disasters and terrorist attacks. Specifically, the Defense Production Act authorizes the President to ensure the timely availability of products, materials, and services needed to meet the requirements of a national emergency. The act is applicable to critical infrastructure protection and restoration, but it has never been used for Internet recovery. The Stafford Act authorizes federal assistance to states, local governments, nonprofit entities, and individuals in the event of a major disaster or emergency. However, the act does not authorize assistance to for-profit companies—such as those that own and operate core Internet components. Several representatives of private companies reported that they were unable to obtain needed resources to restore the communications infrastructure in the aftermath of Hurricane Katrina because the act does not extend to for-profit companies. Other legislation and regulations, including the Communications Act of 1934 and the National Communications System (NCS) authorities, govern the telecommunications infrastructure and help ensure communications during national emergencies.
The act governs the regulation of the telecommunications infrastructure upon which the Internet depends. However, coverage of the Internet is subsumed in provisions that govern interstate wire and radio communications, and there is no specific provision governing Internet recovery. NCS authorities establish guidance for operationally coordinating with industry to protect and restore key national security and emergency preparedness communications services. These authorities grant the President certain emergency powers regarding telecommunications, including the authority to require any carrier subject to the Communications Act of 1934 to grant preference or priority to essential communications. The President may also, in the event of war or national emergency, suspend regulations governing wire and radio transmissions and authorize the use or control of any such facility or station and its apparatus and equipment by any department of the government. Although these authorities remain in force and are implemented in the Code of Federal Regulations, they have seldom been used—and never for Internet recovery. Thus, it is not clear how effective they would be if used for this purpose. “The Internet infrastructure is owned and operated by the private sector. Although certain policies direct DHS to work with the private sector to ensure infrastructure protection, DHS does not have the authority to direct Internet owners and operators in their recovery efforts.” DHS has begun a variety of initiatives to fulfill its responsibility for developing an integrated public/private plan for Internet recovery, but these efforts are not complete or comprehensive. Specifically, DHS has developed high-level plans for infrastructure protection and national disaster response, but the components of these plans that address the Internet infrastructure are not complete.
In addition, DHS has started a variety of initiatives to improve the nation’s ability to recover from Internet disruptions, including working groups to facilitate coordination and exercises in which government and private industry practice responding to cyber events. While these activities are promising, some initiatives are not complete, others lack time lines and priorities, and still others lack effective mechanisms for incorporating lessons learned. In addition, the relationships among these initiatives are not evident. As a result, the nation is not prepared to effectively coordinate public/private plans for recovering from a major Internet disruption. Federal policy establishes DHS as the central coordinator for cyberspace security efforts and tasks the department with developing an integrated public/private plan for Internet recovery. DHS has two key documents that guide its infrastructure protection and recovery efforts, but components of these plans dealing with Internet recovery are not complete. The National Response Plan is DHS’s overarching framework for responding to domestic incidents. The plan, which was released in December 2004, contains the following two components that address issues related to telecommunications and the Internet: The Emergency Support Function 2 of the plan identifies federal actions to provide temporary emergency telecommunications during a significant incident and to restore telecommunications after the incident. It assigns roles and responsibilities to different federal agencies; provides guidelines for incident response; and identifies actions to take before, during, and after the incident. Because the Internet is supported by the telecommunications infrastructure, this section of the plan could help with Internet recovery efforts. 
The Cyber Incident Annex identifies policies and organizational responsibilities for preparing for, responding to, and recovering from cyber-related incidents impacting critical national processes and the national economy. The annex recognizes the National Cyber Response Coordination Group as the principal federal interagency mechanism to coordinate the government’s preparation for, response to, and recovery from a major Internet disruption or significant cyber incident. These components, however, are not complete in that the Emergency Support Function 2 does not directly address Internet recovery, and the Cyber Incident Annex does not reflect the National Cyber Response Coordination Group’s current operating procedures. DHS officials acknowledged that both Emergency Support Function 2 and the Cyber Incident Annex need to be revised to reflect the maturing capabilities of the National Cyber Response Coordination Group, the planned organizational changes affecting NCS and NCSD, and the convergence of voice and Internet networks. However, DHS has not reached consensus on the best approach for revising these components, and it has not established a schedule for revising the overall plan. The Draft National Infrastructure Protection Plan consists of both a base plan and sector-specific plans, but these have not been finalized. A January 2006 draft of the base plan identifies roles, responsibilities, and a high-level strategy for infrastructure protection across all sectors. It emphasizes the need to protect and recover the cyber infrastructure, including the Internet. Additionally, the sector plans are expected to apply the strategies identified in the base plan to the infrastructure sectors. For example, the information technology sector plan identifies relationships within the information technology sector and with other infrastructure sectors. 
It also identifies preliminary steps for infrastructure protection, such as identifying key assets and the consequences of the failure of those assets. DHS is planning to finalize its base plan in 2006, but it has not yet set a date for doing so. Once this plan is released, it will lead to the development of the more detailed sector-specific plans. The next versions of the information technology and telecommunications sector plans are due to DHS within 180 days of the release of the final base plan. While DHS’s intentions to revise these plans are necessary steps in the right direction, the plans do not fulfill the department’s responsibility to develop an integrated public/private plan for Internet recovery. Several representatives of private-sector firms supporting the Internet infrastructure expressed concerns about both plans, noting that the plans would be difficult to execute in times of crisis. Other representatives were uneasy about the government developing recovery plans, because they were not confident in the government’s ability to successfully execute the plans. DHS officials acknowledged that it will be important to obtain input from private-sector organizations as they refine these plans and initiate more detailed public/private planning. Until both the National Response Plan and the National Infrastructure Protection Plan are updated and more detailed public/private planning begins, DHS lacks the integrated approach to Internet recovery called for in the cyberspace strategy and risks not being prepared to effectively coordinate such a recovery. While the National Response Plan outlines an overall framework for incident response, it is designed to be supplemented by more specific plans and activities. DHS has numerous initiatives under way to better define its ability to assist in responding to major Internet disruptions. These initiatives include task forces, working groups, and exercises. 
While these activities are promising, some initiatives are incomplete, others still lack time lines and priorities, and others lack an effective mechanism for incorporating lessons learned. In addition, the relationships and interdependencies among different initiatives are not evident. As a result, tangible progress toward improving the government’s ability to help recover from a major Internet disruption has been limited. DHS plans to revise the role and mission of the National Communications System (NCS) to reflect the convergence of voice and data communications, but this effort is not yet complete. NCS is responsible for ensuring the availability of a viable national security and emergency preparedness communications infrastructure. Originally focused on traditional telephone service, NCS has recently taken on a larger role in Internet-related issues due to the convergence of the infrastructures that serve traditional telephone traffic and those that serve data (such as Internet traffic). A presidential advisory committee on telecommunications has established two task forces to recommend changes to NCS’s role, mission, and functions to reflect this convergence. One task force focused on changes due to next-generation network technologies, while the other focused on revising the role and mission of NCS’s National Coordinating Center. Appendix III provides additional details on the two task forces. Both task forces have made recommendations to improve NCS’s operations, but DHS has not yet developed plans to address these recommendations. Until NCS completes efforts to revise its role and mission, the group is at risk of not being prepared to address the unique issues that could be caused by future Internet disruptions.
As a primary entity responsible for coordinating governmentwide responses to cyber incidents—such as major Internet disruptions—DHS’s National Cyber Response Coordination Group is working to define its roles and responsibilities, but much remains to be done. The group reported that it has begun efforts to define its roles, responsibilities, capabilities, and activities. For example, the group has developed a concept of operations—which includes a high-level recovery function—but is waiting for the results of additional analyses before revising and enhancing the concept of operations. The group also drafted operating procedures that it used during a national cyber exercise in February 2006, and it plans to incorporate lessons learned from the exercise into the operating procedures and to issue revised procedures by June 2006. The group also reported that it has made progress on initiatives to (1) map the current capabilities of government agencies to detect, respond to, and recover from cyber incidents; (2) identify secure communications capabilities within the government that can be used to respond to cyber incidents; (3) perform a gap analysis of different agencies’ capabilities for responding to cyber incidents; and (4) establish formal resource-sharing agreements with other federal agencies as well as state and local governments. However, much remains to be done to complete these initiatives. One challenge facing the National Cyber Response Coordination Group is the “trigger” for government involvement. 
Currently, the group can be activated by (1) a cyber incident that may relate to or constitute a terrorist attack, a terrorist threat, a threat to national security, a disaster, or any other cyber emergency requiring a federal government response; (2) a confirmed, significant cyber incident directed at one or more national critical infrastructures; (3) a cyber incident that impacts or potentially impacts national security, national economic security, public health or safety, or public confidence and morale; (4) discovery of an exploitable vulnerability in a widely used protocol; (5) other complex or unusual circumstances related to a cyber incident that require interagency coordination; or (6) any cyber incident briefed to the President. DHS officials acknowledged that the trigger to activate this group is imprecise and will need to be clarified. Because key activities to define roles, responsibilities, capabilities, and the appropriate trigger for government involvement are still under way, the group is at risk of not being able to act quickly and definitively during a major Internet disruption.
DHS officials stated that they had identified a number of potential future actions, including (1) meeting with industry representatives to better understand what constitutes normal network activity; (2) further refining the definition of an Internet disruption; (3) determining which public/private organizations would be contacted in an emergency and what contingency plans the government could establish; (4) encouraging implementation of best practices for protecting key Internet infrastructure, including the Domain Name System; and (5) considering requiring improved security technologies for the Domain Name System and the Border Gateway Protocol in government contracts. Efforts such as these appear to be worthwhile; however, agency officials have not yet finalized plans, resources, or milestones for them. Until they do, the benefits of these efforts will not be fully realized. In addition to the Internet Disruption Working Group, US-CERT officials formed the North American Incident Response Group. The group, modeled on similar groups in Asia and Europe, includes both public and private-sector network operators who would be the first to recognize and respond to cyber disruptions. In September 2005, US-CERT officials conducted regional workshops with group members to share information on structure, programs, and incident response, and to seek ways for the government and industry to work together operationally. The attendees included 32 organizations, such as computer security incident response teams; information sharing and analysis centers; private firms that provide security services; information technology vendors; and other organizations that participate in cyber watch, warning, and response functions. US-CERT officials stated that these events were highly successful and that they hope to hold such events quarterly beginning in 2006.
As a result of the first meetings, US-CERT officials developed a list of action items and assigned milestones to some of these items. For example, US-CERT has established a secure instant messaging capability to communicate with group members. In addition, it plans to conduct a survey of the group members to determine what they need from US-CERT and what types of information they can provide. While the outreach efforts of the North American Incident Response Group are promising, DHS has only just begun developing plans and activities to address the concerns of private-sector stakeholders. Over the last few years, DHS has conducted several broad intergovernmental exercises to test regional responses to significant incidents that could affect the critical infrastructure. These regional exercises included incidents that could cause localized Internet disruptions, and they resulted in numerous findings and recommendations regarding the government’s ability to respond to and recover from a major Internet disruption. For example, selected exercises found that both the government and private-sector organizations were poorly prepared to effectively respond to cyber events. They cited the lack of clarity on roles and responsibilities, the lack of coordination and communication, and a limited understanding of cybersecurity concerns as serious obstacles to effective response and recovery from cyber attacks and disruptions. Furthermore, regional participants reported being unclear regarding who was in charge of incident management at the local, state, and national levels. More recently, in February 2006, DHS conducted an exercise called Cyber Storm, which was focused primarily on testing responses to a cyber-related incident of national significance. The exercise involved a simulated large-scale attack affecting the energy and transportation infrastructures, using the telecommunications infrastructure as a medium for the attack. 
The results of this exercise have not yet been published. (Details on these exercises are provided in app. IV.) Exercises that include Internet disruptions can help to identify issues and interdependencies that need to be addressed. However, DHS has not yet identified planned activities and milestones or determined which group should be responsible for incorporating lessons learned from the regional and Cyber Storm exercises into its plans and initiatives. Without a coordination process, plans, and milestones, there is less chance that the lessons learned from the exercises will be successfully translated into operational improvements. While DHS has various initiatives under way—including efforts to update the National Response Plan, task forces assessing changes to NCS, working groups on responding to cyber incidents, and exercises to practice recovery efforts—the relationships and interdependencies among these various efforts are not evident. For example, plans to update the National Response Plan to better reflect the Internet infrastructure are related to task force efforts to suggest changes to NCS to deal with the convergence of voice and data technologies. However, it is not clear how these initiatives are being coordinated. Furthermore, the National Cyber Response Coordination Group, the Internet Disruption Working Group, and the North American Incident Response Group are all meeting to discuss ways to address Internet recovery, but the interdependencies among the groups have not been clearly established. Additionally, it is not evident that lessons learned from the various cyber-related exercises are being incorporated into the planned revision of the National Response Plan or the ongoing efforts of the various working groups. Without a thorough understanding of the interrelationships among its various initiatives, DHS risks pursuing redundant efforts and missing opportunities to build on related work.
DHS officials acknowledged that they have not yet fully coordinated the various initiatives aimed at enhancing the department’s ability to help respond to and recover from a major Internet disruption, but they noted that the complexity of this undertaking and the number of entities involved in Internet recovery make this effort challenging. Although DHS has various initiatives under way to improve Internet recovery planning, it faces key challenges in developing a public/private plan for Internet recovery, including (1) innate characteristics of the Internet that make planning for and responding to a disruption difficult, (2) a lack of consensus on DHS’s role and on when the department should get involved in responding to a disruption, (3) legal issues affecting DHS’s ability to provide assistance to restore Internet service, (4) reluctance of the private sector to share information on Internet disruptions with DHS, and (5) leadership and organizational uncertainties within DHS. Until it addresses these challenges, DHS will have difficulty achieving results in its role as the focal point for recovering the Internet from a major disruption. The Internet’s diffuse structure, vulnerabilities in its basic protocols, and lack of agreed-upon performance measures make planning for and responding to a disruption more difficult. The diffuse control of the Internet makes planning for recovering from a disruption more challenging. The components of the Internet are not all governed by the same organization. Some components of the Internet are controlled by government organizations, while others are controlled by academic or research institutions. However, the vast majority of the Internet is owned and operated by the private sector. Each organization makes its own decisions about whether to implement various standards based on issues such as security, cost, and ease of use.
Therefore, any plan for responding to a disruption requires the agreement and cooperation of these private-sector organizations. In addition, the Internet is international. According to private-sector estimates, only about 20 percent of Internet users are in the United States. Cyber actors in one country have the potential to affect systems connected to the Internet in another country. This geographical diversity makes planning for Internet recovery more difficult. The Internet’s protocols have vulnerabilities that can be exploited. Examples of these vulnerabilities include the following:

- The version of the Internet Protocol (IPv4) that is widely used today has certain security limitations; extensions that address these limitations have been developed but are not fully integrated into the protocol. The newest version of the protocol (IPv6) addresses some of these limitations, but it has not yet been fully adopted.
- The Domain Name System, which directs users to the correct Web site based on the name they type in, was not originally built to be resistant to attacks. Domain name servers or caches storing Domain Name System information can be corrupted. Although some protective measures have been implemented, a method to encrypt and protect Domain Name System information has not yet been widely deployed.
- The Border Gateway Protocol, which transmits routing information among separate networks, has vulnerabilities that, if not mitigated, could subject those networks to attack. For example, a malicious actor could advertise incorrect routing information. Because this protocol provides the basis for all Internet connectivity, a successful attack could have wide-ranging effects.

There are no well-accepted standards for measuring and monitoring the Internet infrastructure’s availability and performance. Instead, individuals and organizations rate the Internet’s performance according to their own priorities.
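The Domain Name System weakness noted above can be sketched with a toy model. The resolver class, host names, and addresses below are invented for illustration (the addresses come from reserved documentation ranges); real resolvers and the DNSSEC extensions that authenticate responses are far more involved. The point is simply that a cache that accepts unauthenticated answers will serve a forged record to every subsequent user.

```python
# Toy model of an unauthenticated DNS cache (illustration only, not a real resolver).
# Host names and IP addresses are invented; 192.0.2.x and 203.0.113.x are
# reserved documentation address ranges.

class ToyResolver:
    def __init__(self):
        self.cache = {}  # host name -> cached IP address

    def receive_response(self, hostname, ip):
        # Classic DNS provides no cryptographic proof of who sent an answer,
        # so this cache stores whatever response arrives.
        self.cache[hostname] = ip

    def lookup(self, hostname):
        return self.cache.get(hostname)

resolver = ToyResolver()
resolver.receive_response("bank.example", "192.0.2.10")    # legitimate answer
resolver.receive_response("bank.example", "203.0.113.66")  # forged answer silently overwrites it
print(resolver.lookup("bank.example"))                     # later lookups now get the forged address
```

A DNSSEC-style defense would reject the second response unless it carried a valid signature chain, which is the kind of protective measure the report notes has not yet been widely deployed.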
The commonly used version of Internet Protocol (IPv4) does not guarantee a priority or speed for delivery, but rather provides “best effort” service. The next version (IPv6) has features that may help the delivery of future Internet traffic, but it is not yet widely used. The topic of guaranteeing a particular level of service, called “quality of service,” is currently the subject of much research. For example, NCS requested information from private companies on the potential for prioritizing certain types of Internet service over others if network capacity was limited; NCS found that there is currently no offering of a priority service, nor is there any consensus by industry on a standard approach to prioritization. Obstacles to offering the service include both technical and financial challenges. Since there are no clear standards for quality of service, prioritizing service if capacity is limited or setting thresholds that indicate a disrupted network can be difficult. Private-sector representatives identified additional challenges to network measurement and performance standards, including a reluctance to share proprietary performance data that other companies could use for competitive advantage, flaws in measurement techniques, and the ability to “spoof” performance data. The lack of agreement on standards for measurement and performance limits the ability of the government and private sector to readily identify poor performance and identify when recovery efforts should begin. There is a lack of consensus about the role DHS should play in responding to a major Internet disruption and about the appropriate trigger for its involvement. As we previously noted in this report, the lack of clear legislative authority for Internet recovery efforts complicates the definition of this role. DHS is currently providing information to private industry through existing US-CERT and National Coordinating Center relationships and conducting exercises such as Cyber Storm. 
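The absence of agreed performance standards described above can be made concrete with a small sketch. The latency samples and the 200-millisecond threshold below are invented for illustration: the same observations yield opposite verdicts depending on whether an operator judges the link by its average or by its worst case, which is one reason parties may not agree on when a network counts as disrupted and recovery efforts should begin.

```python
# Two ad hoc ways of rating the same link from round-trip-time samples (in
# milliseconds). The sample data and the threshold are invented for
# illustration; as the report notes, there is no standard metric to appeal to.

samples_ms = [40, 42, 41, 45, 43, 44, 41, 950]  # one severe outlier

mean_ms = sum(samples_ms) / len(samples_ms)     # average latency
worst_ms = max(samples_ms)                      # worst-case latency

THRESHOLD_MS = 200  # hypothetical "disrupted" threshold
print(f"mean:  {mean_ms:.2f} ms -> {'disrupted' if mean_ms > THRESHOLD_MS else 'healthy'}")
print(f"worst: {worst_ms} ms -> {'disrupted' if worst_ms > THRESHOLD_MS else 'healthy'}")
```

An operator averaging over the whole period would call this link healthy, while a user who hit the 950 ms outlier would call it disrupted; neither party is measurably wrong.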
US-CERT and National Coordinating Center officials are also working to improve their relationships with the private sector. However, DHS officials acknowledged that their role in recovering from an Internet disruption needs additional clarification, because private industry owns and operates the vast majority of the Internet. Private-sector officials representing telecommunication backbone providers and Internet service providers were also unclear about the types of assistance DHS could provide in responding to an incident and about the value of such assistance. While many officials stated that the government did not have a direct recovery role, others identified a variety of possible roles, including providing information on specific threats (which DHS currently does through US-CERT), providing security and disaster relief support during a crisis, funding backup communication infrastructures, and driving improved Internet security through requirements for its own procurement. Clearly, there was no consensus among the officials on this issue. Table 6 summarizes potential roles suggested by private-sector representatives and DHS officials’ assessments of each area. The difference between a minor and a major Internet disruption can depend on a combination of factors. The severity of a disruption can be influenced by

- the length of time that the disruption lasts;
- the impact of the disruption on the operation of the Internet, both in quality of operation (e.g., if the speed of the Internet is affected) and in the number of users that cannot access the Internet;
- the impact that the disruption has on society, such as the impact on national security or economic security; and
- the simultaneity of events (e.g., a disruption coinciding with a national disaster or terrorist attack could be more severe than a disruption occurring on an uneventful day).

However, it is not clear when the government should get involved in a disruption.
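The factors named above (duration, operational impact, societal impact, and simultaneity with other events) could in principle be combined into a severity score. The function below is purely hypothetical; the report proposes no such formula, and the weights, caps, and 0 to 100 scale are invented to show how such factors might be weighed against one another when judging whether a disruption warrants government involvement.

```python
# Hypothetical severity score combining the four factors named in the report.
# All weights, caps, and thresholds are invented for illustration.

def disruption_severity(duration_hours, users_affected_pct, societal_impact, concurrent_crisis):
    """Return a rough 0-100 severity score (toy model only)."""
    score = 0.0
    score += min(duration_hours, 48) / 48 * 30             # longer outages weigh more, capped at 2 days
    score += users_affected_pct / 100 * 30                 # share of users who cannot reach the Internet
    score += {"low": 0, "medium": 15, "high": 30}[societal_impact]
    score += 10 if concurrent_crisis else 0                # e.g., coincides with a disaster or attack
    return round(score)

print(disruption_severity(2, 5, "low", False))    # brief, local event: low score
print(disruption_severity(48, 60, "high", True))  # prolonged, widespread, compounding event: high score
```

Even in this toy form, the hard questions the report raises remain: who picks the weights, who supplies the measurements, and at what score the government's role begins.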
For example, the lessons learned from the DHS-sponsored regional exercises show that

- organizations do not know how and to whom they should report a cyber attack and what information to convey;
- local and state emergency operations centers often lack procedures to determine when they should activate for a cyber event;
- private-sector participants often do not inform government authorities about what they see as routine events because of company policy, legal constraints, or liability concerns; and
- it is unclear when a cybersecurity incident becomes a source of concern and what types of incidents should be communicated to local and federal law enforcement.

The trigger for the National Response Plan, which is DHS’s overall framework for incident response, is poorly defined and has been found by both GAO and the White House to need revision. DHS officials acknowledged that the definition for activation of its National Cyber Response Coordination Group is very broad and needs clarification. In addition, other DHS officials stated that, in their meetings with private-sector firms and other government agencies, they have determined that they need to further refine the definition of when government should be involved during an Internet disruption. DHS officials have stated that a successful public/private partnership is critical to the success of efforts to plan for responding to Internet disruptions. Since private-sector participation in DHS planning activities for Internet disruption is voluntary, agreement on the appropriate trigger for government involvement and on the role of government in resolving an Internet disruption are essential to any plan’s success. Without a consensus on the appropriate role of government in responding to the disruption, or on the trigger for government involvement, planning for response to the disruption is difficult. There are key legal issues affecting DHS’s ability to provide assistance to help restore Internet service.
As previously noted, key legislation and regulations guiding critical infrastructure protection, disaster recovery, and the telecommunications infrastructure do not provide specific authorities for Internet recovery. As a result, there is no clear legislative guidance on what government entity would be responsible in the case of a major Internet disruption. In addition, while the Stafford Act authorizes the government to provide federal assistance to states, local governments, nonprofit entities, and individuals in the event of a major disaster or emergency, it does not authorize assistance to for-profit corporations. Several representatives of telecommunications companies reported that they had requested federal assistance from DHS during Hurricane Katrina. Specifically, they requested food, water, and security for the teams they were sending in to restore the communications infrastructure, and fuel to power their generators. DHS responded that it could not fulfill these requests, noting that the Stafford Act did not extend to for-profit companies. Because a large percentage of the nation’s critical infrastructure—including the Internet—is owned and operated by the private sector, public/private partnerships are crucial for successful critical infrastructure protection. Although certain policies direct DHS to work with the private sector to ensure infrastructure protection, DHS does not have the authority to direct Internet owners and operators in their recovery efforts. Instead, it must rely on the private sector to share information on incidents, disruptions, and recovery efforts. We have previously reported that many in the private sector are reluctant to share information with the federal government. Many private-sector representatives questioned the value of providing information to DHS regarding planning for and recovery from Internet disruption. 
Concerns included the potential for disclosure of the information and the perceived lack of benefit in providing the information. In addition, DHS identified provisions of the Federal Advisory Committee Act as having a “chilling effect” on cooperation with the private sector. The act governs the structure of certain federal advisory groups and requires that membership in and information about the groups’ activities be public record. However, both the act itself and other federal legislation provide the ability to limit disclosure of sensitive information provided to the government. While DHS officials stated that the agency was working on a solution to problems posed by the act, they did not provide us with information on potential solutions or milestones for completing these activities. The uncertainties regarding the value and risks of cooperation with the government limit incentives for the private sector to cooperate in Internet recovery planning efforts. In 2003 and again in 2005, we identified the transformation of DHS from 22 agencies into one department as a high-risk area. As part of this body of work, we noted that organizational and management practices are critical to successfully transforming an organization. Additionally, we reported on the importance of top leadership driving any transformation and the need for a stable and authoritative organizational structure. However, DHS has lacked permanent leadership while developing its plans for Internet recovery and reconstitution. In addition, the organizations with roles in Internet recovery have overlapping responsibilities and may be reorganized once DHS selects permanent leadership. As a result, it is difficult for DHS to develop a clear set of organizational priorities and to coordinate among the various activities responsible for Internet recovery planning. In recent years, DHS has experienced a high level of turnover in its cybersecurity division and has lacked permanent leadership in key roles. 
In May 2005, we reported that multiple senior DHS cybersecurity officials had recently left the department. These officials included the NCSD Director, the Deputy Director responsible for Outreach and Awareness, the Director of the US-CERT Control Systems Security Center, the Under Secretary for the Information Analysis and Infrastructure Protection Directorate, and the Assistant Secretary responsible for the Information Protection Office. Subsequently, in July 2005, the DHS Secretary announced a major reorganization of the department. Under this reorganization, the Information Analysis and Infrastructure Protection Directorate, which contained NCS and NCSD, was renamed the Directorate for Preparedness, which would be managed by an appointed under secretary. The responsibilities of NCS and NCSD were placed under a new Assistant Secretary for Cyber Security and Telecommunications. DHS stated that the creation of a position for Assistant Secretary for Cyber Security and Telecommunications within the department would elevate the position of cybersecurity in the department and by doing so raise visibility for the issue. However, as of May 2006, no candidate for the assistant secretary position had yet been publicly announced. In addition, the current head of NCSD is in an acting position and has been since October 2004. While DHS stated that the lack of a permanent assistant secretary has not hampered its efforts in protecting critical infrastructure, several private-sector representatives stated that DHS’s lack of leadership in this area has limited progress. Specifically, these representatives stated that filling key leadership positions would enhance DHS’s visibility to the Internet industry and potentially improve its reputation. DHS officials acknowledged that the current organizational structure has overlapping responsibilities in planning for and recovering from a major Internet disruption. 
NCSD is responsible for planning and response activities governing information technology, while NCS has the lead for telecommunications. However, because of the convergence of voice and data networks, NCS has become more involved in Internet issues. There is currently no written division of responsibilities between NCS and NCSD related to Internet recovery. NCS officials stated that a revision of the Emergency Support Function 2 would help address the apparent overlap, but DHS has not established a date for finalizing this document. Furthermore, DHS officials stated that the new assistant secretary would have discretion to reorganize NCS and NCSD. For example, NCS and NCSD could be combined, or one or more program areas could be modified. As a result, it is difficult for DHS to develop a clear set of organizational priorities and to coordinate among the various activities responsible for Internet recovery planning. As a critical information infrastructure supporting our nation’s commerce and communications, the Internet is subject to disruption—from both intentional and unintentional incidents. While major incidents to date have had regional or local impacts, the Internet has not yet suffered a catastrophic failure. Should such a failure occur, however, existing legislation and regulations supporting critical infrastructure protection, disaster response, and the telecommunications infrastructure do not specifically address roles and responsibilities for Internet recovery. A national policy, the National Strategy to Secure Cyberspace, establishes DHS as the focal point for ensuring the security of cyberspace—a role that includes developing joint public/private plans for facilitating a recovery from a major Internet disruption. While DHS has initiated efforts to refine high-level disaster recovery plans, the components of these plans that pertain to the Internet are not complete. 
Additionally, while DHS has undertaken several initiatives to improve Internet recovery planning, much remains to be done. Specifically, some initiatives lack clear time lines, lessons learned are not consistently being incorporated in recovery plans, and the relationships between the various initiatives are not clear. DHS faces numerous challenges to developing integrated public/private recovery plans—not the least of which is the fact that the government does not own or operate much of the Internet. In addition, there is no consensus among public and private stakeholders about the appropriate role of DHS and when it should get involved; legal issues limit the actions the government can take; the private sector is reluctant to share information on Internet performance with the government; and DHS is undergoing important organizational and leadership changes. As a result, the exact role of the government in helping to recover the Internet infrastructure following a major disruption remains unclear. Given the importance of the Internet as a critical infrastructure supporting our nation’s communications and commerce, Congress should consider clarifying the legal framework that guides roles and responsibilities for Internet recovery in the event of a major disruption. This effort could include providing specific authorities for Internet recovery as well as examining potential roles for the federal government, such as providing access to disaster areas, prioritizing selected entities for service recovery, and using federal contracting mechanisms to encourage more secure technologies. This effort also could include examining the Stafford Act to determine if there would be benefits in establishing specific authority for the government to provide for-profit companies—such as those that own or operate critical communications infrastructures—with limited assistance during a crisis. 
To improve DHS’s ability to facilitate public/private efforts to recover the Internet in case of a major disruption, we recommend that the Secretary of the Department of Homeland Security implement the following nine actions:

- Establish dates for revising the National Response Plan and finalizing the National Infrastructure Protection Plan, including efforts to update key components relevant to the Internet.
- Using the planned revisions to the National Response Plan and the National Infrastructure Protection Plan as a basis, draft public/private plans for Internet recovery and obtain input from key Internet infrastructure companies.
- Review the NCS and NCSD organizational structures and roles in light of the convergence of voice and data communications.
- Identify the relationships and interdependencies among the various Internet recovery-related activities currently under way in NCS and NCSD, including initiatives by US-CERT, the National Cyber Response Coordination Group, the Internet Disruption Working Group, the North American Incident Response Group, and the groups responsible for developing and implementing cyber recovery exercises.
- Establish time lines and priorities for key efforts identified by the Internet Disruption Working Group.
- Identify ways to incorporate lessons learned from actual incidents and during cyber exercises into recovery plans and procedures.
- Work with private-sector stakeholders representing the Internet infrastructure to address challenges to effective Internet recovery by
  - further defining needed government functions in responding to a major Internet disruption (this effort should include a careful consideration of the potential government functions identified by the private sector in table 6 of this report);
  - defining a trigger for government involvement in responding to such a disruption; and
  - documenting assumptions and developing approaches to deal with key challenges that are not within the government’s control.
We received written comments from DHS on a draft of this report (see app. V). In DHS’s response, the Director of the Departmental GAO/Office of Inspector General Liaison Office concurred with our recommendations. DHS stated that it recognizes that the Internet is an important component of the information infrastructure in which both the information technology and telecommunications sectors share an interest. It also stated that because of the increasing reliance of various critical infrastructure sectors on interconnected information systems, the Internet represents a significant source of interdependencies for many sectors. DHS agreed that strengthened collaboration between the public and private sectors is critical to protecting the Internet. DHS also provided information on initial actions it is taking to implement our recommendations. DHS officials, as well as others who were quoted in our report, also provided technical corrections, which we have incorporated in this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact us at (202) 512-9286 and at (202) 512-6412, or by e-mail at pownerd@gao.gov and rhodesk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
Our objectives were to (1) identify examples of major disruptions to the Internet, (2) identify the primary laws and regulations governing recovery of the Internet in the event of a major disruption, (3) evaluate the Department of Homeland Security’s (DHS) plans for facilitating recovery from Internet disruptions, and (4) assess challenges to such efforts. To determine the types of major disruptions to the Internet, we analyzed our prior work on cybersecurity issues as well as reports by private organizations, research experts, and government agencies. We identified incidents that were representative of types of disruptions that have actually occurred. We compiled case studies by reviewing and summarizing research reports and interviewing private-industry experts and government officials. We also conducted interviews with individuals in the public and private sectors, including representatives of private companies that operate portions of the Internet infrastructure. To determine the primary laws and regulations for recovering the Internet in the event of a major disruption, we analyzed relevant laws and regulations related to infrastructure protection, disaster response, and the telecommunications infrastructure. These laws and regulations included the Homeland Security Act of 2002, Homeland Security Presidential Directive 7, the Defense Production Act, the Stafford Act, the Communications Act of 1934, and the National Communications System (NCS) authorities. We also obtained the perspectives of DHS and the Federal Communications Commission on the laws and regulations that govern Internet recovery. Additionally, we conducted interviews with DHS and other government officials as well as representatives of the telecommunications and information technology sectors.
To assess plans for recovery of Internet service in the event of a major disruption, we analyzed key documents, such as the interim National Infrastructure Protection Plan, the National Response Plan, a report from the National Coordinating Center Task Force, and reports from regional tabletop security exercises. We observed a portion of DHS’s Cyber Storm exercise, which brought together government and private-industry organizations to address an array of cybersecurity issues. We also spoke with the Deputy Manager of NCS and the Deputy Director of NCSD to identify DHS’s initiatives in the area of Internet protection and recovery. Additionally, we interviewed representatives from private companies that operate portions of the Internet infrastructure. These included representatives of major telecommunications and cable companies, Internet service providers, and root server operators. We also interviewed representatives from three information sharing and analysis centers to obtain their perspectives on DHS’s capabilities in the area of Internet recovery. To identify the challenges that may affect current recovery plans, we analyzed DHS plans, congressional testimony, and other evaluations of challenges to Internet recovery. We also interviewed officials at DHS, including NCSD’s Deputy Director of Strategic Initiatives and Deputy Director of Operations and NCS’s Chief of the Critical Infrastructure Protection Division. In addition, we interviewed officials at other agencies that are involved with the government’s efforts in the area of Internet recovery, as well as experts in the private sector and academia. We performed our work from August 2005 to May 2006 in accordance with generally accepted government auditing standards.
Federal laws and policies establish critical infrastructure protection as a national goal and describe a strategy for cooperative efforts by government and the private sector to protect the cyber- and physical-based systems that are essential to the minimum operations of the economy and the government. The primary authorities governing protection of critical infrastructure include the Homeland Security Act of 2002 and Homeland Security Presidential Directive 7. The Homeland Security Act of 2002 established DHS and gave it lead responsibility for preventing terrorist attacks in the United States, reducing the vulnerability of the United States to terrorist attacks, and minimizing the damage and assisting in the recovery from attacks that do occur. The act also assigns DHS a number of responsibilities for critical infrastructure protection, including (1) developing a comprehensive national plan for securing the key resources and critical infrastructure of the United States; (2) recommending measures to protect the key resources and critical infrastructure of the United States in coordination with other federal agencies and in cooperation with state and local government agencies and authorities, the private sector, and other entities; and (3) disseminating, as appropriate, information analyzed by the department—both within the department and to other federal, state, and local government agencies and private-sector entities—to assist in the deterrence, prevention, or preemption of or response to terrorist attacks. Additionally, the act specifically charged DHS with providing to state and local government entities and, upon request, to private entities that own or operate critical infrastructure, analyses and warnings concerning vulnerabilities and threats to critical infrastructure; crisis management support in response to threats or attacks on critical infrastructure; and technical assistance with respect to recovery plans to respond to major failures of critical information systems.
Homeland Security Presidential Directive 7, dated December 17, 2003, superseded Presidential Decision Directive 63 and established a national policy for federal departments and agencies to identify and prioritize critical infrastructures and key resources and to protect them from terrorist attack. The directive defines responsibilities for (1) DHS, (2) sector-specific federal agencies that are responsible for addressing specific critical infrastructure sectors, and (3) other departments and agencies. The directive also makes DHS responsible for coordinating the national effort to enhance the protection of the critical infrastructure and key resources of the United States. Under the directive, the Secretary of DHS is to serve as the principal federal official to lead, integrate, and coordinate implementation of efforts among federal departments and agencies, state and local governments, and the private sector to protect critical infrastructure and key resources. The Secretary also is to work closely with other federal departments and agencies, state and local governments, and the private sector in accomplishing the objectives of the directive. The Secretary is given responsibility to coordinate protection activities for several key infrastructure sectors, including the information technology and telecommunications sectors. Homeland Security Presidential Directive 7 provides that DHS is to collaborate with the appropriate private-sector entities and to encourage the development of information-sharing and analysis mechanisms. Additionally, the department and sector-specific agencies are to collaborate with the private sector and continue to support sector-coordinating mechanisms to identify, prioritize, and coordinate the protection of critical infrastructure and key resources and facilitate sharing of information about cyber and physical threats, vulnerabilities, incidents, potential protective measures, and best practices. 
Federal planning for disaster recovery is governed by legislation including the Defense Production Act and the Stafford Act. The Defense Production Act was enacted at the outset of the Korean War to ensure the availability of industrial resources to meet the needs of the Department of Defense. The act is intended to facilitate the supply and timely delivery of products, materials, and services to military and civilian agencies, in times of peace as well as in times of war. Presently, only titles I, III, and VII of the Defense Production Act remain in effect. DHS identified the act as a primary authority that supports telecommunications emergency planning and response functions. Title I of the act authorizes the President to ensure the timely availability of products, materials, and services needed to meet current defense preparedness and military readiness requirements as well as the requirements of a national emergency. Under section 101 of the act, the President may require preferential performance on contracts and orders to meet approved national defense requirements and may allocate materials, services, and facilities as necessary to promote the national defense in a national emergency. Homeland Security Presidential Directive 7, previously discussed, specifically acknowledges the authority of the Department of Commerce to use the act to ensure the timely availability of industrial products, materials, and services to meet homeland security requirements. Title III of the act authorizes the use of financial incentives to expand productive capacity and supply. It authorizes loan guarantees, loans, purchases, purchase guarantees, and installation of equipment in contractor facilities for those goods necessary for national defense. It is used only in cases where domestic sources are required and domestic firms cannot, or will not, act on their own to meet a national defense production need. 
Title VII of the Defense Production Act defines national defense to include domestic emergency preparedness and critical infrastructure protection and restoration activities. The act’s authorities, therefore, are available to meet requirements in a civil disaster, such as a major Internet disruption. The act also authorizes the President to provide antitrust defenses to private firms participating in voluntary agreements aimed at solving production and distribution problems. The Year 2000 computer transition and the September 11, 2001, attacks prompted new interest in the act and its application to information technology and cybersecurity. Some commentators indicated that the act would be a useful tool in managing a critical infrastructure emergency. In January 2001, President Clinton directed the Secretary of Energy to exercise authority under the act, among other statutes, to ensure the availability of natural gas for high-priority uses in California. President Clinton found that ensuring natural gas supplies to California was necessary and appropriate to maximize domestic supplies and to promote the national defense. President Bush subsequently extended this executive order. In recent years, Congress has expanded the Defense Production Act’s coverage to include crises resulting from natural disasters or “man-caused events” not amounting to an armed attack on the United States. The definition of national defense in the act was expanded in 1994 to include emergency preparedness activities authorized by the Stafford Act. In 2003, the act was reauthorized through September 30, 2008. It was also amended to add explicit authority to use the act for critical infrastructure protection and restoration. In addition, the 2003 Act (section 5) added a definition of critical infrastructure to the act. 
“Major disaster means any natural catastrophe (including any hurricane, tornado, storm, high water, winddriven water, tidal wave, tsunami, earthquake, volcanic eruption, landslide, mudslide, snowstorm, or drought), or, regardless of cause, any fire, flood, or explosion, in any part of the United States, which in the determination of the President causes damage of sufficient severity and magnitude to warrant major disaster assistance under this Act to supplement the efforts and available resources of States, local governments, and disaster relief organizations in alleviating the damage, loss, hardship, or suffering caused thereby.” A presidential declaration that a major disaster has occurred activates the federal response plan for the delivery of federal disaster assistance. The Federal Emergency Management Agency is responsible for coordinating the federal and private response effort. A presidential declaration of a major disaster triggers several Stafford Act authorities, including, for example, federal activities to support state and local governments to help distribute aid to victims through state and local governments and voluntary organizations, perform life- and property-saving assistance, clear debris, and use the resources of the Department of Defense; repair and reconstruct federal facilities; repair, restore, and replace damaged facilities owned by state and local governments, as well as private nonprofit facilities that provide essential services or contributions for other facilities or hazard mitigation measures in lieu of repairing or restoring damaged facilities; and establish—during or in anticipation of an emergency—temporary communications systems, and make such communications available to state and local government officials. The Internet is enabled by the telecommunications infrastructure that supports transmission of data. 
Key laws and regulations include the Communications Act of 1934, as amended, and the National Communications System (NCS) authorities. The primary federal telecommunications law is the Communications Act of 1934. Its original purpose was to regulate interstate and foreign commerce in communications by wire and radio by licensing radio stations and regulating the telecommunications monopolies of the time. The 1934 Act also created the Federal Communications Commission to implement the act. The 1934 act, as amended, has remained for more than 60 years as the basis of federal regulation of telecommunications services. The Telecommunications Act of 1996 amended the 1934 Act to enhance competition in the telecommunications market. These laws govern regulation of forms of transmission upon which the Internet depends. There is, however, no general regulatory provision for the Internet in the act and no specific provision providing authorities and responsibilities for Internet recovery. NCS was established by a memorandum signed by President Kennedy in 1963, following the Cuban Missile Crisis. The memorandum called for establishing a national communications system by linking together and improving the communication facilities and components of various federal agencies. This original memorandum has since been amended and superseded over time. The executive order currently in force is Executive Order 12472, April 3, 1984, which was amended slightly by Executive Order 13286 on February 28, 2003. 
Executive Order 12472, as amended by Executive Order 13286, established NCS and provided that its mission was to assist the President, the National Security Council, the Homeland Security Council, the Director of the Office of Science and Technology Policy, and the Director of the Office of Management and Budget in, among other responsibilities, “the coordination of the planning for and provision of national security and emergency preparedness communications for the Federal government under all circumstances, including crisis or emergency, attack, recovery and reconstitution.” The administrative structure includes a National Communications System Committee of Principals, an executive agent, and a manager. The Homeland Security Act of 2002 transferred NCS to DHS. To reflect this change, Executive Order 13286 made the Secretary of DHS the Executive Agent. NCS’s mission with regard to critical infrastructure protection is to ensure the reliability and availability of telecommunications for national security and emergency preparedness. Its mission includes, but is not necessarily limited to, responsibility for (1) ensuring the government’s ability to receive priority services for national security and emergency preparedness purposes in current and future telecommunications networks by conducting research and development and participating in national and international standards bodies and (2) operationally coordinating with industry for protecting and restoring national security and emergency preparedness services in an all-hazards environment. Section 706 of the Communications Act of 1934 grants the President certain emergency powers regarding telecommunications, including the authority to grant essential communications “preference or priority with any carrier” subject to this act. 
The President may also, in the event of war or national emergency, suspend regulations governing wire and radio transmissions and “authorize the use or control of any such facility or station and its apparatus and equipment by any department of the Government.” Section 706 is implemented in Executive Order 12472, which provides that the Director of the Office of Science and Technology Policy shall direct the exercise of the war power functions of the President under section 706(a), (c)-(e) of the Communications Act of 1934, as amended (47 U.S.C. 606). Section 706 is also implemented in the Code of Federal Regulations at title 47, chapter II. The National Security Telecommunications Advisory Committee advises the President on issues and problems related to implementing national security and emergency preparedness telecommunications policy. The committee recently formed two task forces to provide recommendations on changes to DHS’s NCS division and operations. In May 2004, the Next Generation Network Task Force was formed to develop recommendations on changes that needed to be made to NCS as a result of issues such as the convergence of voice and data communications. The task force was to (1) define the expected structure for next-generation networks, such as those using Internet-based protocols; (2) identify national security and emergency preparedness user requirements for next-generation networks and outline how these requirements will be met; and (3) examine relevant user scenarios and expected cyber threats and recommend optimal actions to address these threats. The task force agreed to present its findings and recommendations in two separate reports to the President—a near-term recommendations report and a final comprehensive report. In March 2005, the task force issued near-term recommendations for the federal government. 
While the recommendations did not address NCS’s role in recovering from an Internet disruption, they included exploring the use of government networks as alternatives for critical emergency communications during times of national crisis; using and testing existing and leading-edge technologies and commercial capabilities to support critical emergency user requirements for security and availability; studying and supporting industry efforts in areas that present the greatest emergency communications risks during the period of convergence, including gateways, control systems, and first responder communications systems; and reviewing the value of satellite systems as a broad alternative transmission channel for critical emergency communications. The final report, issued in March 2006, contained recommendations that the federal government require federal agencies to plan for and invest in resilient and alternate communications mechanisms to be used in a crisis, develop identity management tools to support priority emergency communication on next-generation networks, develop supporting policies for emergency communications on next-generation networks, and improve DHS incident management capabilities. DHS has not yet developed specific plans to address the recommendations from either report. In October 2004, a task force was established to examine the future mission and role of the National Coordinating Center, which is part of NCS. This task force was to study the direction of the center over the next year, 3 years, and 5 years, including how industry members of the center should continue to partner with the government and how the center should be structured. The task force researched the center’s functions and mapped the center’s authorities to its missions. It studied the center’s organizational structure, information sharing and analysis, incident management and leadership, and international mutual-aid abilities. 
In its report issued in May 2006, the task force found that since the September 11 attacks the number of companies participating in the National Coordinating Center has more than doubled, but the influx of new members has hindered information sharing because of the time it takes to develop trusted relationships between members. The report also found that members wanted government to increase its sharing of threat information with the communications industry through the National Coordinating Center. The report recommended that the National Coordinating Center broaden center membership by including additional firms, such as cable operators, satellite operators, and Internet service providers; NCS examine the possible combination of the National Coordinating Center and the Information Technology Information Sharing and Analysis Center; DHS clarify responsibilities and authorities in emergency situations to facilitate response to telecommunications disruptions; DHS revise the Cyber Incident Annex to the National Response Plan to clarify the trigger for the annex and the appropriate role of the government in responding to such an incident; the National Coordinating Center develop a concept of operations for responding to cyber events; and DHS resolve confusion over legal or jurisdictional issues in responding to cyber or communications crises. DHS has not yet developed a plan to address these findings and recommendations. Over the last few years, DHS has conducted several exercises to test the federal and regional response to incidents affecting critical infrastructures. Among other events, these exercises included incidents that could cause localized Internet disruptions. Specifically, DHS sponsored two cyber tabletop exercises with Connecticut and New Jersey, as well as a series of exercises in the Pacific Northwest and Gulf Coast regions of the United States. The series of exercises in the Pacific Northwest was named Blue Cascades. 
Blue Cascades II, conducted in September 2004, addressed a scenario involving cyber attacks and attacks that disrupted infrastructure, including telecommunications and electric power. The scenario explored regional capabilities to deal with threats, interdependencies, cascading impacts, and incident response. Blue Cascades III, conducted in March 2006, focused on the impact of a major earthquake in the area and the resulting efforts to recover and restore services. Both exercises were sponsored by NCSD and organized by the Pacific Northwest Economic Region. Purple Crescent II, held in New Orleans, Louisiana, in October 2004, was also designed to raise awareness of infrastructure interdependencies and to identify how to improve regional preparedness. The scenario involved a cell of terrorists that used an approaching major hurricane to test their ability to disrupt regional infrastructures, government and private organizations, and particularly disaster preparedness operations using cyber attacks. The exercise was sponsored by the Gulf Coast Regional Partnership for Infrastructure Security and funded by NCSD. The objectives of these exercises included raising awareness of infrastructure-related cybersecurity issues and identifying response and recovery challenges; bringing together physical security, emergency management, and other disciplines involved in homeland security and disaster response; identifying roles and responsibilities in addressing cyber attacks and disruptions; determining ways to foster public/private cooperation and information sharing; identifying preparedness gaps associated with cybersecurity and related areas; and producing an action plan of activities. The exercises resulted in many findings regarding the overall preparedness for cyber incidents (see table 7). Overall, the exercises found that both the government and private-sector organizations were poorly prepared to effectively respond to cyber events. 
The lack of clarity on roles and responsibilities, coupled with limited coordination, communication, and understanding of cybersecurity concerns, poses serious obstacles to effective response and recovery from cyber attacks and disruptions. Furthermore, it was unclear who was in charge of incident management at the local, state, or national levels. The after-action reports from the exercises recommended areas for additional study and planning, including additional study of the vulnerabilities of critical infrastructures to cyber attacks; improved information on training, assessments, and resources to be used against cyber attacks; improved federal, state, local, and private-sector planning and coordination; and defined thresholds for what constitutes a major cyber attack. Cyber Storm, held in February 2006 in Washington, D.C., was the first DHS- sponsored national exercise to test response to a cyber-related incident of national significance. The exercise involved a simulated, large-scale attack affecting the energy, information technology, telecommunications, and transportation infrastructures. DHS officials stated that they plan to hold a similar exercise every other year. According to information provided by agency officials, the exercise involved eight federal departments and three agencies, three states, and four foreign countries. The exercise also involved representatives from the private sector, including nine information technology companies, six electric companies, and two airlines. The exercise objectives included testing interagency, intergovernmental, and public/private coordination of incident response. Representatives of private-sector companies provided mixed responses on the value of exercises such as Cyber Storm. Selected representatives expressed concerns about the overly broad scope and the difficulty in justifying dedicating resources for the exercises due to the lack of clear goals and outcomes. 
Another representative stated that government exercises help the government, but exercises involving private-sector coordination with multiple agencies would also be helpful. Another representative stated that exercises were only of value if there was a process for integrating lessons learned from the exercises into policies and procedures. Two representatives from a private-sector company that participated in Cyber Storm stated that, while useful, the exercise was not designed for network operators, who would benefit from more comprehensive training in incident response. In addition to those named above, Don R. Adams, Naba Barkakati, Scott Borre, Neil Doherty, Vijay D’Souza, Joshua A. Hammerstein, Bert Japikse, Joanne Landesman, Frank Maguire, Teresa M. Neven, and Colleen M. Phillips made key contributions to this report.
|
Since the early 1990s, growth in the use of the Internet has revolutionized the way that our nation communicates and conducts business. While the Internet was originally developed by the Department of Defense, the vast majority of its infrastructure is currently owned and operated by the private sector. Federal policy recognizes the need to prepare for debilitating Internet disruptions and tasks the Department of Homeland Security (DHS) with developing an integrated public/private plan for Internet recovery. GAO was asked to (1) identify examples of major disruptions to the Internet, (2) identify the primary laws and regulations governing recovery of the Internet in the event of a major disruption, (3) evaluate DHS plans for facilitating recovery from Internet disruptions, and (4) assess challenges to such efforts. A major disruption to the Internet could be caused by a cyber incident (such as a software malfunction or a malicious virus), a physical incident (such as a natural disaster or an attack that affects key facilities), or a combination of both cyber and physical incidents. Recent cyber and physical incidents have caused localized or regional disruptions but have not caused a catastrophic Internet failure. Federal laws and regulations addressing critical infrastructure protection, disaster recovery, and the telecommunications infrastructure provide broad guidance that applies to the Internet, but it is not clear how useful these authorities would be in helping to recover from a major Internet disruption. Specifically, key legislation on critical infrastructure protection does not address roles and responsibilities in the event of an Internet disruption. Other laws and regulations governing disaster response and emergency communications have never been used for Internet recovery. 
DHS has begun a variety of initiatives to fulfill its responsibility for developing an integrated public/private plan for Internet recovery, but these efforts are not complete or comprehensive. Specifically, DHS has developed high-level plans for infrastructure protection and incident response, but the components of these plans that address the Internet infrastructure are not complete. In addition, the department has started a variety of initiatives to improve the nation's ability to recover from Internet disruptions, including working groups to facilitate coordination and exercises in which government and private industry practice responding to cyber events. However, progress to date on these initiatives has been limited, and other initiatives lack time frames for completion. Also, the relationships among these initiatives are not evident. As a result, the government is not yet adequately prepared to effectively coordinate public/private plans for recovering from a major Internet disruption. Key challenges to establishing a plan for recovering from Internet disruptions include (1) innate characteristics of the Internet (such as the diffuse control of the many networks making up the Internet and private sector ownership of core components) that make planning for and responding to disruptions difficult, (2) a lack of consensus on DHS's role and when the department should get involved in responding to a disruption, (3) legal issues affecting DHS's ability to provide assistance to restore Internet service, (4) reluctance of many in the private sector to share information on Internet disruptions with DHS, and (5) leadership and organizational uncertainties within DHS. Until these challenges are addressed, DHS will have difficulty achieving results in its role as a focal point for helping to recover the Internet from a major disruption.
|
Under the authority of the Ports and Waterways Safety Act of 1972, as amended, the Coast Guard operates VTS systems in eight ports. Operations and maintenance costs for these systems, which totaled about $19 million in fiscal year 1995, are borne by the Coast Guard and are not passed on to the ports or the shipping industry. Two other ports, Los Angeles/Long Beach and Philadelphia/Delaware Bay, have user-funded systems. Study of VTS systems was prompted by the Oil Pollution Act of 1990 (P.L. 101-380), passed after the 1989 Exxon Valdez oil spill and other accidents in various ports. The Act directed the Secretary of Transportation to prioritize U.S. ports and channels in need of new, expanded, or improved VTS systems. The resulting report, called the Port Needs Study, was submitted to the Congress in March 1992. This study laid much of the groundwork for the proposal for VTS 2000. Making funding decisions today about VTS 2000 is complicated by several as-yet-unanswered questions regarding the need for the system in certain ports, the system’s cost, and available alternatives to VTS 2000. Having more complete, up-to-date information on these questions is critical to deciding whether to move forward with the program. One uncertainty relates to which ports will receive VTS 2000 systems. Most of the 17 candidate ports were identified in the 1991 Port Needs Study, which quantified (in dollar terms) the benefits of building new VTS systems at port areas nationwide. The Coast Guard is not scheduled to make a final decision on which ports to include in the program until fiscal year 2000, but the information developed to date suggests that the number of ports ultimately selected could be much less than 17. The Port Needs Study and the follow-on studies completed so far show that a new system would produce little or no added benefit at about two-thirds of the ports being considered. 
Budget information the Coast Guard has provided to the Congress thus far has not fully reflected the limited benefits of installing VTS 2000 systems in many of the ports being considered. For example, the Coast Guard should provide to the Congress updated information on the added benefits, if any, that would be achieved by installing VTS 2000 at various ports, especially for those that already have VTS systems. In our view, this information, coupled with the Coast Guard’s current thinking on the high and low priority locations for VTS 2000, is critical to assist the Congress in deciding on whether a development effort for 17 ports is warranted. We realize that the Coast Guard is not in a position to make a final decision on all ports at this time, because it is still gathering information and conducting follow-on studies to reassess some ports on the list. However, having the most current and complete data will allow the Congress to better decide on funding levels for the VTS 2000 program and provide direction to the Coast Guard. A second major area of uncertainty is the cost to develop VTS 2000. This cost is considerable, regardless of whether it is installed at a few ports or all 17. The Coast Guard initially estimated that development costs alone (exclusive of installation costs at most sites) would total $69 million to $145 million, depending on the number of sites that receive VTS 2000 and the extent of software development. The estimated costs to install equipment and build facilities at each site ranged from $5 million to $30 million, bringing the program’s total costs to between $260 million and $310 million. The Coast Guard’s updated estimate of annual operating costs for a 17-site system is $42 million. At present, the Coast Guard plans to pay for all of these costs from its budget instead of passing them on to users. A few days ago, the Coast Guard awarded contracts for initial development of the VTS 2000 system. 
The bids from three vendors currently competing for the contract to design the system were substantially lower than earlier estimates. Further refinements to the Coast Guard cost estimates will be made in early 1997 when the Coast Guard plans to select a single contractor to build the VTS 2000 system. The system’s costs will also depend on the Coast Guard’s decision about how sophisticated the system should be. VTS 2000 can be developed in four phases, and additional capability can be added at each phase. For example, phase 1, originally estimated to cost $69 million, would create a system with operational capabilities that are about on a par with upgraded VTS systems currently being installed at some ports. The Coast Guard’s development plan allows for stopping after phase 1 (or any other phase) if cost or other considerations preclude further development. To date, the Coast Guard’s approach has not involved much consideration of whether feasible alternatives exist to VTS 2000 at individual ports under consideration. I want to emphasize that we did not attempt to assess whether other alternatives were preferable, but many would appear to merit consideration or study. Here are a few of these alternatives: Reliance on existing VTS systems. The systems in place at seven locations may be sufficient. For example, the port of Los Angeles/Long Beach, which is on the Coast Guard’s “short list” for the first round of VTS 2000 systems, now has a VTS system, which cost about $1 million to build and meets nearly all of VTS 2000’s operational requirements, according to a Coast Guard study. The Coast Guard is reconsidering its decision to keep the port on the “short list” but is still evaluating it for VTS 2000. Other VTS systems in Houston/Galveston, Puget Sound, Philadelphia/Delaware Bay, New York, San Francisco, and Valdez all have been recently upgraded or enhanced or are scheduled to be upgraded in the near future irrespective of VTS 2000. 
Therefore, these systems may provide protection similar to that of VTS 2000 now and into the future. VTS systems with smaller scope than proposed thus far under VTS 2000. The Port Needs Study and follow-on studies have proposed blanketing an entire port area with VTS coverage, but less comprehensive VTS coverage might be sufficient. For example, some key stakeholders at Port Arthur/Lake Charles, which has no radar-based VTS coverage, said such coverage was needed at only a few key locations, instead of portwide. A group is studying the feasibility of a more limited, privately funded system. One vendor estimated that a system to cover key locations at Port Arthur/Lake Charles would cost $2 million to $3 million. Coast Guard officials told us that reduced coverage is an option they could consider when site-specific plans are established for VTS 2000. Non-VTS approaches. In some cases, improvements have been proposed that are not as extensive as installing a VTS system. For example, several years ago in Mobile/Pascagoula, the Coast Guard Captain of the Port proposed a means to enhance port safety at two locations where the deep ship channels (for ocean-going ships) intersect the Intracoastal Waterway (which mainly has barge traffic and small vessels). The proposal involved establishing “regulated navigation areas” that would require vessels from both directions to radio their approach and location to all other vessels in the vicinity. This proposal may merit further consideration before a decision is made on the need for a VTS in this port area. At the ports we visited, few stakeholders said they had been involved with the Coast Guard in discussing whether such options are viable alternatives to VTS 2000 systems in their port. In discussions with us, Coast Guard officials agreed that greater communication with key stakeholders is an essential step in making decisions about VTS 2000. 
An additional study currently being conducted by the Marine Board of the National Research Council may provide additional information that will be useful in assessing VTS 2000. Among other things, this study will address the role of the public and private sectors in developing and operating VTS systems in the United States. An interim report is due to be completed in June 1996. Most of the stakeholders we interviewed did not support installing a VTS 2000 system at their port. Their opinions were predominantly negative at five ports, about evenly split at two, and uncertain at one. Many who opposed VTS 2000 perceived the proposed system as being more expensive than needed. Support for VTS 2000 was even less when we asked if stakeholders would be willing to pay for the system, perhaps through fees levied on vessels. A clear majority of the stakeholders was not willing to fund VTS 2000 at six of the ports; at the other two, support was mixed. The stakeholders interviewed at six ports generally supported some form of VTS system that they perceived to be less expensive than VTS 2000. However, at the four ports with VTS systems, this support did not reflect a belief that a new system was needed; most stakeholders said that existing systems were sufficient. The two locations without a VTS system (New Orleans and Tampa) supported an alternative VTS system. In contrast, at Mobile/Pascagoula, most stakeholders were opposed to a VTS system, saying that the low volume of ocean-going vessels did not warrant such a system. At Port Arthur/Lake Charles, views were evenly mixed as to whether a system was needed. In general, because stakeholders perceived that other alternative VTS systems could be less costly than VTS 2000, they were somewhat more disposed to consider paying for them. At two locations with existing private VTS systems, they are already doing so. 
At the remaining six ports, the stakeholders had the following views on paying for alternative VTS systems: stakeholders’ views were generally supportive at three, opposed at one, and mixed at the other two. In discussions with key stakeholders at each of the eight ports we visited, three main concerns emerged that could impede private-sector involvement in building and operating VTS systems. Obtaining funding for construction. At half of the six ports that do not have a privately funded VTS, the stakeholders were concerned that if local VTS systems are to be funded by the user community rather than through tax dollars, the lack of adequate funding for constructing such a system may pose a barrier. The cost of a VTS depends on its size and complexity; however, radar equipment, computer hardware and software, and a facility for monitoring vessel traffic alone could cost $1 million or more at each port. The privately funded systems at Los Angeles/Long Beach and Philadelphia/Delaware Bay initially faced similar financing concerns; both received federal or state assistance, either financial or in-kind. Obtaining liability protection. At each of the same six ports, most of the stakeholders were concerned that private VTS operators might be held liable for damages if they provided inaccurate information to vessel operators that contributed to an accident. At locations such as Tampa and San Francisco, where the possibility of privately funded systems has been discussed, the stakeholders believe that securing liability protection is a key issue that must be resolved before they would move forward to establish a VTS system. Currently, the two existing privately funded VTS systems receive liability protection under state laws, except in cases of intentional misconduct or gross negligence. However, these laws have yet to be tested in court. Defining the Coast Guard’s role. Federal law does not address what role, if any, the Coast Guard should play in privately funded systems. 
At seven of the ports, most of the stakeholders said the Coast Guard should have a role. In support of this position, they cited such things as the (1) need for the Coast Guard’s authority to require mandatory participation by potential VTS users and to ensure consistent VTS operations and (2) Coast Guard’s expertise in and experience with other VTS systems. In summary, difficult choices need to be made about how to improve marine safety in the nation’s ports. There is an acknowledged need to improve marine safety at a number of ports, but not much agreement about how it should be done. Decisions about whether VTS 2000 represents the best approach are made more difficult by the uncertainties surrounding the scope, cost, and appropriateness of VTS 2000 over other alternatives in a number of locations. While some unresolved questions cannot be immediately answered, we think it is vitally important for the Coast Guard to present a clearer picture to the Congress as soon as possible of what VTS 2000 is likely to entail. Complete, up-to-date information will put the Congress in a better position to make informed decisions about the development of VTS 2000. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or the Members of the Subcommittee may have.
GAO discussed the Coast Guard's vessel traffic service (VTS) 2000 program. GAO noted that: (1) it is difficult to judge whether VTS 2000 is the best marine safety system because it is unknown how many ports need the system, how much it will cost, and whether other cost-effective alternatives are available; (2) most key stakeholders do not support VTS 2000 because they believe it will be too costly; (3) most key stakeholders oppose user fees or other funding methods that would shift the financial costs of VTS 2000 from the Coast Guard to users; (4) support for a VTS system of any kind varied among key stakeholders at different ports, but most favored the least expensive options available; and (5) issues affecting privately funded or privately operated VTS systems include the initial costs of a VTS system, the private sector's exposure to liability, and the Coast Guard's oversight role.
The past decade has seen an increasing emphasis in the United States on the role of state and local entities in the fight against violent extremism. More recently, in August 2011, the White House issued the nation’s first CVE strategy, Empowering Local Partners to Prevent Violent Extremism in the United States, and in December 2011, it issued an implementation plan for the CVE national strategy. The strategy leverages existing programs and structures in order to counter radicalization that leads to violence, rather than creating new programs and funding streams. The strategy highlights three major areas of activity: (1) enhancing engagement with and support to local communities that violent extremists may target, (2) building government and law enforcement expertise for preventing violent extremism, and (3) countering violent extremist propaganda while promoting U.S. ideals. The strategy also identifies the provision of training to federal, state, and local entities as a major component of the national CVE approach, and the implementation plan notes that the federal government will enhance CVE-related training offered to federal, state, and local agencies. The implementation plan states that this is necessary because of “a small number of instances of federally-sponsored or funded CVE and counterterrorism training that used offensive and inaccurate information.” Accordingly, one of the objectives of the implementation plan is to improve the development and use of standardized training with rigorous curricula that imparts information about violent extremism, improves cultural competency, and conveys best practices and lessons for effective community engagement and partnerships. The implementation plan designates federal departments, agencies, and components as leaders and partners regarding certain aspects of CVE, and DHS and DOJ have principal roles in implementing the CVE national strategy.
Table 1 identifies the primary federal departments and agencies with CVE-related responsibilities and their respective missions. Other agencies involved in implementing the strategy include the Departments of the Treasury, Education, and Commerce, among others. The CVE national strategy implementation plan assigns both DHS and DOJ responsibility for supporting national CVE-related training efforts and emphasizes the importance of collaboration among federal, state, local, and tribal government agencies in order to achieve the goals of the strategy. In order for DHS and DOJ components to determine the extent to which they are fulfilling departmental CVE-related responsibilities, they must be able to identify which of the training they conduct is CVE-related, which requires that they understand what constitutes CVE-related training. The DHS Counterterrorism Working Group, the entity responsible for leading DHS’s CVE efforts under the direction of the Principal Deputy Counterterrorism Coordinator, has identified topics to be addressed in CVE-related training that DHS develops, provides, or funds. The group has also undertaken efforts to communicate these topics to other DHS components, state and local law enforcement officials, and grant recipients who may allocate DHS funding for CVE-related training within their states. DHS’s communication efforts have helped DHS components and state and local partners to better understand what constitutes CVE-related training, but some DHS grantees who responded to our survey reported that they were not clear as to what topics should be addressed in CVE-related training, and most indicated that it would be helpful for DHS to provide additional information or guidance on topics covered under CVE. DHS plans to undertake additional communication efforts with these grantees to educate them about the principal topics CVE-related training addresses. In contrast, DOJ has not identified topics it considers as CVE-related training. 
Consequently, DOJ is unable to demonstrate how it is meeting its CVE responsibilities under the CVE national strategy. In February 2010, the Secretary of Homeland Security tasked the Homeland Security Advisory Council (HSAC) with developing recommendations regarding how DHS can better support community-based efforts to combat violent extremism domestically, focusing on the issues of training, information sharing, and the adoption of community-oriented law enforcement approaches. The council established the HSAC CVE Working Group to carry out this tasking, and the working group issued its findings in summer 2010. The HSAC CVE Working Group determined that CVE-related training should focus on (1) improving the capacity of law enforcement and other government personnel to communicate and collaborate with individuals from diverse religious, ethnic, and racial communities, and (2) promoting understanding of the threats facing a local community and recognizing behavior and indicators associated with those threats. The DHS Counterterrorism Working Group subsequently determined that, in order to support implementation of the CVE national strategy and the HSAC CVE Working Group findings, CVE-related training should address the following: violent extremism (e.g., the threat it poses), cultural demystification (e.g., education on culture and religion), community partnerships (e.g., how to build them), and community policing efforts (e.g., how to apply community policing efforts to CVE). According to the DHS Principal Deputy Counterterrorism Coordinator, identifying these topics helped to provide a logical structure for DHS’s CVE-related training efforts. The Counterterrorism Working Group has undertaken efforts to communicate these topics to DHS components that contribute to DHS CVE-related training.
Toward the beginning of our review, officials from DHS components that contributed to training in fiscal years 2010 and 2011 that was CVE-related according to our framework cited a lack of clarity regarding what topics CVE-related training is to address; however, by August 2012, the components reported that the topics were clear, a fact that they attributed to these communication efforts. The Counterterrorism Working Group communicated CVE-related training topics to relevant DHS components during weekly meetings as well as by involving the components in the development of new CVE-related training. For example, the Counterterrorism Working Group has invited relevant components to participate in workshops on CVE-related training, provided them with briefings and updates on its CVE-related training development efforts, and included them in review of draft CVE curricula. According to Counterterrorism Working Group officials, the group led a series of meetings with these components to communicate and review the content of multiple CVE-related trainings the group is working to develop. According to officials from relevant DHS components, these communication efforts have helped to clarify topics CVE-related training addresses. For example, according to the official who leads CVE-related training that the Office for Civil Rights and Civil Liberties provides, reviewing the CVE curricula under development involves ensuring that training topics are clear and well understood. In addition, according to the S&T official who oversees research on CVE that is to inform CVE-related training content, DHS officials have clearly communicated topics that CVE-related training is to include during weekly meetings that the Counterterrorism Working Group leads involving all DHS CVE Working Group members.
The Counterterrorism Working Group also communicated with state and local partners and associations that DHS collaborates with to achieve national CVE goals regarding DHS’s CVE-related training topics. For example, according to the director of a state police academy and a police department lieutenant, the Counterterrorism Working Group has consistently consulted with them in developing training modules addressing CVE topics. The Counterterrorism Working Group is also collaborating to develop and implement CVE-related training curricula with the Major Cities Chiefs Association (MCC), the National Consortium for Advanced Policing (NCAP), and the International Association of Chiefs of Police (IACP). As reported by the official who oversees CVE-related training that the DHS Office for Civil Rights and Civil Liberties provides, such collaboration inherently entails discussion of topics CVE-related training is to address. DHS’s communication efforts have helped DHS components and state and local partners to better understand what constitutes CVE-related training, but our review indicates that some state administrative agency representatives are not clear about the principal topics CVE-related training addresses, making it difficult for them to determine what CVE-related training best supports national CVE efforts. According to officials from FEMA, which administers DHS grant funding, the agency has increased grant funding available for CVE-related training because the Secretary of Homeland Security has identified CVE efforts as a priority for the department. In particular, in fiscal year 2011, FEMA began to allow state and local entities to use funds awarded through the Homeland Security Grant Program for CVE-related training.
Further, in fiscal year 2012, FEMA explicitly stated in its Homeland Security Grant Program funding announcement that grantees could use program funds for CVE-related training, and retroactively allowed recipients to use program funds from prior years for CVE activities. In July 2012, we surveyed the 51 training points of contact within state administrative agencies—which are responsible for managing Homeland Security Grant Program funds that DHS awards—about the extent to which they understand what is meant by CVE training. Of the 30 training points of contact who responded to our survey, 11 indicated that they were not at all clear or only somewhat clear on what is meant by CVE-related training. Further, 26 agreed or strongly agreed that it would be helpful for DHS to provide additional information or guidance on topics covered under CVE. As long as FEMA continues to make grant funding available for CVE-related training, but grantees do not have an understanding of what topics CVE-related training should address, it will be difficult for grantees to determine what training best supports the national CVE objective of improving CVE-related training and to use funds appropriately toward those efforts. DHS Counterterrorism Working Group officials stated that the group had made efforts to communicate CVE-related training topics to state administrative agencies, but in light of our survey results, the group plans to expand its efforts. In winter 2011, the Principal Deputy Counterterrorism Coordinator, who leads DHS CVE efforts, participated in a conference call with State Homeland Security Program advisers and staff who administer DHS grants that can be used for CVE-related training, during which this official highlighted DHS’s CVE-related training efforts and associated guidance.
Nonetheless, according to the Principal Deputy Counterterrorism Coordinator, some training points of contact may not be aware of what topics CVE-related training should address because the working group’s coordination efforts have focused on state and local representatives who administer law enforcement training programs (e.g., at police academies), not state administrative agencies. The Principal Deputy Counterterrorism Coordinator also emphasized that DHS has focused its efforts on developing high-quality CVE-related training that state and local entities can readily access and that FEMA will preapprove as eligible for DHS grant funding. As a result, according to this official, grantees will rarely have to independently identify appropriate CVE-related training to fund or undertake steps to ensure the quality of CVE-related training they fund. Nevertheless, the Principal Deputy Counterterrorism Coordinator agreed that our survey results revealed that it is important for DHS to undertake additional efforts to educate state administrative agency officials on the principal topics CVE-related training addresses. To that end, in August 2012, the Principal Deputy Counterterrorism Coordinator held an additional meeting with more than 100 state administrative agency representatives and other federal, state, and local officials, during which the Coordinator provided information on DHS CVE-related training development efforts and the content of DHS’s CVE-related training, among other things. In addition, in August 2012, DHS, in partnership with the FBI, launched an online portal for a select group of law enforcement training partners that is intended to provide federal, state, local, tribal, territorial, and correctional law enforcement with access to CVE-related training materials. DHS aims to broaden access to the portal to trainers nationwide by the end of September 2012.
Further, the Principal Deputy Counterterrorism Coordinator stated that the Counterterrorism Working Group is developing an outreach strategy for communicating with state and local entities about DHS’s CVE-related training efforts. Given the recency of these efforts, we are not able to assess their effectiveness as part of our review. However, they are positive steps that should contribute to educating state administrative agency representatives about CVE topics, and thereby help them to fund CVE-related training that is consistent with the goals of the CVE national strategy. As with DHS, the CVE national strategy implementation plan has identified DOJ, including the FBI, as among the federal departments and agencies responsible for conducting CVE-related training. However, DOJ has not yet identified topics that should be covered in its CVE-related training. In addition, DOJ has not generally identified which of its existing training could be categorized as CVE-related training, thus limiting DOJ’s ability to demonstrate how it is fulfilling its training responsibilities under the CVE national strategy. According to senior DOJ officials, even though the department has not identified CVE-related training topics, they understand internally which of the department’s training is CVE-related and contributes either directly or indirectly to the department’s training responsibilities under the CVE national strategy. However, because DOJ has not identified what constitutes CVE-related training, CVE-related efforts undertaken at the direction of the President’s National Security Staff have been hindered, according to DHS officials who participated in an Interagency Policy Committee Working Group on Law Enforcement Training Regarding Domestic Radicalization and CVE. This group, which is chaired by DHS and NCTC, was formed at the direction of the President’s National Security Staff to identify and coordinate CVE-related training that federal agencies deliver or fund.
The group’s principal objective was twofold: (1) to determine how agencies are currently developing training and (2) to identify options for ensuring that the Intelligence Community’s current analysis of radicalization informs training for federal, state, local, and tribal officials, and that customers of this type of training receive high-quality training and information consistent with U.S. government analysis. As part of this effort, the Interagency Policy Committee Working Group on Law Enforcement Training Regarding Domestic Radicalization and CVE endeavored to create an inventory of CVE-related training that the federal government offers. However, according to DHS officials who participated in the working group, members who led this effort found it challenging to do so because agencies’ views differed as to what CVE-related training includes when providing information on their training. More specifically, according to one DHS official, some components found it difficult to differentiate between counterterrorism and CVE-related training, and trying to categorize training that was not developed for CVE purposes but that can benefit CVE can be confusing. We observed this problem firsthand during our review when the DOJ components that the department identified as potentially relevant to our work, including the FBI, Executive Office for United States Attorneys, and Office of Community Oriented Policing Services, could not readily respond to our requests for information about CVE-related training they provide or fund.
According to these officials, they found it difficult to respond to our requests because DOJ has not established a definition for “CVE” or “CVE-related training,” and therefore they were not sure what constitutes CVE-related training. BJA officials acknowledged that training that BJA funds under the State and Local Anti-Terrorism Training (SLATT) program could be considered CVE-related training, but they also acknowledged that what constitutes CVE-related training was not clear, in part because CVE is a relatively new term. The other DOJ components, however, relied upon a framework that we developed for the purpose of this review to determine which of their existing training was CVE-related. The Community Relations Service is DOJ’s “peacemaker” for community conflicts and tensions arising from differences of race, color, and national origin. It is dedicated to assisting state and local units of government, private and public organizations, and community groups with preventing and resolving racial and ethnic tensions, incidents, and civil disorders, and in restoring racial stability and harmony. According to DOJ, pursuant to the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act, the Community Relations Service also works with communities to develop strategies to prevent and respond more effectively to alleged violent hate crimes committed on the basis of race, color, national origin, gender, gender identity, sexual orientation, religion, or disability. See generally Pub. L. No. 111-84, Div. E, 123 Stat. 2190, 2835 (2009). See also 18 U.S.C. § 249. The CVE national strategy and its implementation plan explicitly emphasize the importance of community engagement in CVE efforts while recognizing that such engagement should focus on a full range of community concerns, and not just on issues such as national security. Further, the implementation plan has assigned DOJ responsibility for supporting national CVE-related training efforts.
However, because DOJ has not identified what topics it thinks should be addressed by CVE-related training, it is difficult to identify which of DOJ’s current training is related to CVE, either directly or indirectly, which also makes it difficult to determine whether and how DOJ is fulfilling its training responsibilities per the CVE national strategy. If departments are unclear regarding what constitutes CVE-related training, they will also have difficulty accounting for their CVE-related training responsibilities. By not identifying and communicating CVE-related training topics to its components, DOJ is not able to demonstrate how it is fulfilling its CVE-related training responsibilities and ensure that it is carrying out its responsibilities as established in the CVE national strategy implementation plan. Less than 1 percent of state and local participants in CVE-related training that DHS and DOJ provided or funded who provided feedback to the departments expressed concerns about information included in the course materials or that instructors presented during training. In addition, while DOJ generally solicits feedback from all participants for programs that provide formal, curriculum-based CVE-related training, the FBI and USAOs do not always solicit feedback for programs that provide less formal CVE-related training (e.g., presentations by guest speakers), even though such training was provided to about 9,900 participants in fiscal years 2010 and 2011. Finally, apart from the training participants, some individuals and advocacy organizations have raised concerns about DHS and DOJ CVE-related training. As previously discussed, because DHS and DOJ components were unclear regarding what constitutes CVE-related training, for the purposes of conducting this review, we developed a framework for determining which training may be CVE-related.
Our framework identifies training as CVE-related if it addresses one or more of the following three content areas: (1) radicalization, (2) cultural competency, and (3) community engagement. DHS Counterterrorism Working Group officials generally agreed with the content areas we identified, and we incorporated feedback the group provided, as appropriate. DOJ officials stated that they view the framework as reasonable for the purpose of our review. However, as previously discussed, DOJ officials do not think it is appropriate for DOJ to identify topics to be addressed in CVE-related training. We applied our framework to identify CVE-related training DOJ and DHS components provided to state and local entities during fiscal years 2010 and 2011. Figure 1 presents the DOJ and DHS programs that provided the CVE-related training we identified, and appendix III provides more detailed information about the training, including the number of participants and associated costs. The majority of participant feedback on CVE-related training that DHS and DOJ provided or funded during fiscal years 2010 and 2011 was positive or neutral; a minority of participants expressed concerns about information included in course materials or that instructors presented. DHS and DOJ collected and retained feedback forms from 8,424 of the more than 28,000 participants—including state, local, and tribal law enforcement officials, prison officials, and community members—of training they provided or funded in fiscal years 2010 and 2011 that was CVE-related according to our framework. We analyzed all of these evaluations and found that the vast majority of participants submitted comments about the training that were positive or neutral. For example, participants commented that the courses were among the most challenging they had taken, that the instructors were professional and knowledgeable, or that the course materials were well assembled.
In addition, participants stated that the training was informative with regard to the threat posed by, and how to best counter, violent extremists or provided a valuable overview of an extremist group. In another instance, a participant stated that the course was helpful in understanding the beliefs and concerns of a particular community. Some participants also said that the training would be worthwhile to provide to a broader audience, that they intended to share what they learned with colleagues, or that they would like to see the course length expanded. We also identified 77 participant evaluations—less than 1 percent—that included comments that expressed concern of any sort. For example, we identified concerns that a training was too politically correct, as well as concerns that a training was one-sided with regard to issues of religion and culture. The concerns the participants expressed fell into the following three categories:

1. The course information or instruction was politically or culturally biased (54 evaluations). For example, participant comments that fell into this category were that the instructor had a liberal bias, and other comments were that the instructor too often relayed his or her personal views.

2. The course information or instruction was offensive (12 evaluations). For example, one concern raised in this category was that an instructor presented Islam in a negative manner, whereas another concern was that a guest presenter spoke disrespectfully about the United States.

3. The course information was inaccurate (11 evaluations). For example, comments that fell into this category raised concern that an instructor provided misinformation about dressing norms for Middle Eastern women and that an instructor cited incorrect information about a criminal case discussed during the class.
The concerns that were raised varied across different training providers and, although few, most of the concerns stemmed from the evaluation records documenting feedback from DOJ SLATT Program and FBI National Joint Terrorism Task Force Program participants. See appendix IV for additional details on the types of concerns by training provider. DOJ and DHS officials who oversee these training programs indicated that they review the feedback participants provide and assess if it warrants action. However, these officials stated that determining how to respond to feedback can be difficult when the feedback is subjective or not actionable. For example, the SLATT Program Director stated that if a comment simply says “one-sided information,” he cannot take action on it because he does not know which side the person is referring to or what the person thinks should be changed. However, if there is a trend in clear feedback participants provided, he will take action. Further, according to SLATT and Office for Civil Rights and Civil Liberties officials, perceptions regarding what is biased vary by audience and even by the participants within a given audience. Therefore, DHS and DOJ officials stated that they take action to address participant feedback on a case-by-case basis, as they and their staff deem appropriate. For example, the SLATT Director explained that there is no specific threshold to determine whether a participant’s comment warrants further action, but generally, if a similar concern has been submitted by multiple participants, over multiple courses, SLATT officials will review the substance of the comment and devise a plan to correct the issue. For example, the SLATT Director noted that in response to a comment that a course title did not reflect the material taught in the course, he suggested a change to the title. 
Most of the CVE-related training that DHS and DOJ components provided was formal, classroom-based or curriculum-based training, and the components generally solicited participant feedback for this type of training, which we describe above. In addition, two DOJ components—the FBI and USAOs—also provided informal CVE-related training consisting of briefings and presentations at workshops, conferences, and other venues to about 9,900 participants in fiscal years 2010 and 2011. However, these components did not consistently solicit participant feedback for this type of training, which makes it difficult for them to assess the quality of the training, determine whether the training is achieving expected outcomes, and make changes where appropriate. According to FBI officials, training that the FBI centrally administers—including that provided under the National Academy and National Joint Terrorism Task Force programs—is to adhere to the Kirkpatrick model to help ensure its quality. The standards this model prescribes require the solicitation of student feedback. As a result, the FBI collects feedback through evaluations on the formal, classroom-based courses it provides through its National Academy. The FBI does not require entities providing informal training, such as briefings and presentations during outreach, to solicit feedback. Specifically, officials from the FBI’s Office of Public Affairs told us that the bureau does not solicit feedback on presentations, briefings, or its Citizens’ Academy and Community Relations Executive Seminar Training (CREST) outreach programs because doing so is not required, and the officials noted that the FBI does not classify these programs and activities as training. Officials also noted that some field offices, which administer the programs, do solicit feedback from participants although they are not required to do so.
For example, 4 of 21 FBI field offices that provided Citizens’ Academy training that was CVE-related according to our framework collected evaluations. However, none of the 3 FBI field offices that provided CREST training or the 5 FBI field offices that provided other training that was CVE-related according to our framework solicited feedback from course participants. Similarly, USAOs are not required to obtain feedback from recipients of training that their individual offices provide. According to Executive Office for U.S. Attorneys officials, USAOs do not typically solicit feedback from participants on the presentations that our framework identified as CVE-related that they provide in their districts, particularly with respect to threat-related briefings for law enforcement officials that are intended to address a particular area of concern for that region at a particular time. Under these circumstances, according to these officials, feedback may be less useful than it would be for curriculum-based trainings, because the presentation is less likely to be repeated for many different audiences. We identified 39 USAOs that provided or facilitated training that was CVE-related according to our framework, excluding training that was facilitated by a USAO but provided by another federal entity (such as SLATT). Out of these 39 USAOs, 15 collected feedback from CVE-related training participants. We have previously reported that evaluating training is important and that agencies need to develop systematic evaluation processes in order to obtain accurate information about the benefits of their training. We recognize the distinction between formal training programs and less formal training, such as presentations. However, the CREST and Citizens’ Academy programs, other FBI field office initiatives, and USAOs collectively trained about 39 percent (about 9,900) of all training participants in DOJ CVE-related training during fiscal years 2010 and 2011.
Soliciting feedback on informal training could help the FBI and USAOs obtain valuable information for determining the extent to which these programs are yielding desired outcomes (e.g., whether the FBI's Citizens' Academy is projecting a positive image of the FBI in the communities it serves) as well as complying with the CVE national strategy. Such feedback could also be obtained without incurring significant costs. According to officials at an FBI field office that distributes feedback forms and the DHS official who oversees the Office for Civil Rights and Civil Liberties CVE-related training, agencies can solicit feedback from training participants at minimal cost (e.g., the paper on which the form is distributed and the employee time associated with reviewing the forms); feedback is critical to ensuring that the training communicates its intended messages effectively; and soliciting feedback is a worthwhile undertaking given the significant time and resources their offices invest in providing CVE-related training. In addition to the concerns we identified in participant evaluations, individuals and advocacy organizations submitted at least six letters of complaint to DHS, DOJ, the Executive Office of the President, and other federal government entities regarding 18 alleged incidents of biased CVE and counterterrorism training that DHS or DOJ provided or funded during fiscal years 2010 and 2011. Representatives of the advocacy organizations that submitted the letters generally did not participate in the training that generated these concerns. Rather, their concerns were derived from information reported in the media and from individuals who attended a training session and expressed concern about the training to the organizations.
We determined that 7 of the alleged incidents described in five of the letters were relevant to this review because they pertained to CVE-related training provided to state and local officials and community members, not training that was exclusively provided to federal officials. The 7 incidents described in these letters, some of which the media initially reported, articulated similar concerns as those identified in the participant evaluations we reviewed. That is, the allegations made in the letters raised concerns that course information and instructors were biased, offensive, or inaccurate. Table 2 summarizes the concerns raised in these five letters and the agencies' perspectives about the concerns. Although the number of concerns and complaints raised about CVE-related training may have been small, according to DHS and DOJ officials, the departments have generally treated the complaints as serious issues that warranted action to better ensure the quality of future training, particularly given the negative effects that such incidents can have on the departments' reputations and trust with the communities they serve. For example, according to the DHS Principal Deputy Counterterrorism Coordinator, developing CVE-related training is a priority for the department because inappropriate and inaccurate training undermines community partnerships that are critical to preventing crime and negatively impacts efforts of law enforcement to identify legitimate behaviors and indicators of violent extremism. DOJ has undertaken quality reviews of existing training materials that are CVE-related according to our framework, and both DOJ and DHS have developed guidance and other quality assurance mechanisms for CVE-related training.
DOJ components have conducted or are currently conducting internal reviews of their training materials, including those with topics that our framework identified as related to CVE, in an effort to identify and purge potentially objectionable materials. In September 2011, the FBI launched a review of all FBI counterterrorism training materials, including materials that were CVE-related according to our framework. This review included approximately 160,000 pages of training materials, and the FBI determined that less than one percent of the pages contained factually inaccurate or imprecise information or used stereotypes. The Office of the Deputy Attorney General has also ordered a departmentwide review of training materials. Unlike the FBI's internal review, which focused on counterterrorism training materials, a memorandum issued by the Deputy Attorney General to heads of DOJ components and U.S. Attorneys in September 2011 directed them to carefully review all training material and presentations that their personnel provided. The memorandum stated that components should particularly review training related to combating terrorism, CVE, and other subjects that may relate to ongoing outreach efforts in Arab, Muslim, Sikh, South Asian, and other communities. The purpose of the review was to ensure that the material and information presented are consistent with DOJ standards, goals, and instructions. Officials from the four DOJ components that we identified as having provided or funded CVE-related training reported that their components have completed, or intend to complete, the review the Deputy Attorney General ordered. According to DOJ officials, as of August 2012, some components were still reviewing relevant materials, and the Deputy Attorney General asked components to provide any questionable training materials to the Deputy Attorney General's office.
DOJ officials also told us that each DOJ component is to make its own determination on what materials are appropriate, but that components are to review all training materials, even if the components do not have specific plans to present the materials in the future. DHS, DOJ, and the FBI have developed guidance to avoid future incidents or allegations of biased or otherwise inappropriate training. In October 2011, the DHS Office for Civil Rights and Civil Liberties issued Countering Violent Extremism Training Guidance & Best Practices (DHS CVE Guidance), which acknowledges that it is important for law enforcement personnel to be appropriately trained in understanding and detecting ideologically motivated criminal behavior and in working with communities and local law enforcement to counter domestic violent extremism. The DHS CVE Guidance states that training must be accurate, based on current intelligence, and include cultural competency training. To this end, its goals are to help ensure that (1) trainers are experts and well regarded; (2) training is sensitive to constitutional values; (3) training facilitates further dialogue and learning; (4) training adheres to government standards and efforts; and (5) training and objectives are appropriately tailored, focused, and supported. The guidance provides best practices for federal, state, and local officials organizing CVE, cultural awareness, or counterterrorism training to adhere to in support of these goals. Best practices include reviewing a prospective trainer's résumé; reviewing the training program to ensure that it uses examples to demonstrate that terrorists and violent extremists vary in ethnicity, race, gender, and religion; and reaching out to sponsors of existing government training efforts for input.
Following the release of the DHS CVE Guidance, FEMA issued an information bulletin to its state, local, and private sector partners and grantees to emphasize the importance of ensuring that all CVE-related training is consistent with DHS and U.S. government policy. The bulletin referenced the DHS CVE Guidance and stated, among other things, that grant-funded training should avoid the use of hostile, stereotypical, or factually inaccurate information about Muslims and Islam or any community. The bulletin also emphasized the importance of community engagement and interaction to promote communities as part of the solution. According to FEMA officials, if a grantee were to provide CVE-related training and not follow the DHS CVE Guidance, DHS may require that the grantee repay any grant funds that were spent on the training. However, several DHS grantees indicated that they would not necessarily know when to apply the best practices for ensuring the quality of CVE-related training described in the information bulletin. Specifically, of the 30 Homeland Security Grant Program training points of contact who responded to our survey, 18 said that they were not at all clear or only somewhat clear about when to apply the principles in the FEMA bulletin. In addition, 20 said that topics that may be covered during CVE-related training are not at all clear or only somewhat clear in the bulletin. As a result, these grantees could have difficulty in determining when to apply the principles. As previously discussed, the additional efforts DHS is undertaking to educate state administrative agency officials on the principal topics CVE-related training addresses could further enable the officials to fund training that supports the CVE national strategy.
These survey results indicate that such educational efforts should help grantees more readily identify topics that may be covered during CVE-related training, and thus more appropriately apply DHS CVE-related training quality assurance guidance. DHS is also developing additional mechanisms to ensure the quality of CVE-related training. Specifically, Counterterrorism Working Group officials told us that in June 2012 DHS established a CVE-related training working group within the department to develop a framework to (1) ensure that training DHS components provide meets DHS and the U.S. government's CVE standards; (2) ensure that grantees using grant funds for training utilize certified trainers; and (3) disseminate DHS training through agency partners, such as the International Association of Chiefs of Police. In July 2012, this working group proposed recommendations for meeting these goals in a memorandum to the DHS Deputy Counterterrorism Coordinator. For example, the group recommended that the department establish and maintain a database of certified CVE instructors and appoint a CVE program coordinator to oversee the instructor vetting and training process. According to Counterterrorism Working Group officials, DHS is working on plans to implement these recommendations. As these recommendations were made recently and DHS has just decided to implement them, it is too early to assess any quality assurance impact they will have on CVE-related training. DOJ also developed guidance applicable to all training, including CVE-related training, conducted or funded by DOJ to help ensure its quality. DOJ formed a working group on training issues chaired by its Civil Rights Division within the Attorney General's Arab-Muslim Engagement Advisory Group. The working group developed the DOJ training principles to guide DOJ's training and to ensure that all communities that DOJ serves are respected.
In March 2012, the Deputy Attorney General issued a memorandum for DOJ heads of components and USAOs outlining guiding principles to which all training that DOJ conducted or funded must adhere. Specifically, it stated that (1) training must be consistent with the U.S. Constitution and DOJ values; (2) the content of training and training materials must be accurate, appropriately tailored, and focused; (3) trainers must be well qualified in the subject area and skilled in presenting it; (4) trainers must demonstrate the highest standards of professionalism; and (5) training must meet department standards. Also in March 2012, the FBI published The FBI’s Guiding Principles Touchstone Document on Training. This document is intended to be consistent with the March 2012 Deputy Attorney General guidance, but elaborates on each training principle outlined in the document. The FBI’s guidance states that training must (1) conform to constitutional principles and adhere to the FBI’s core values; (2) be tailored to the intended audience, focused to ensure message clarity, and supported with the appropriate course materials; and (3) be reviewed, and trainers must be knowledgeable of applicable subject material. DOJ officials also told us that the department’s guiding principles are meant to memorialize department training standards and values and are the group’s first step for ongoing work to ensure the quality of future counterterrorism and CVE-related training. Although developing these principles marks an important first step, we were unable to assess the extent to which they can help ensure the quality of CVE-related training moving forward because the review is ongoing and DOJ officials are in the process of planning additional efforts. Providing high-quality and balanced CVE-related training is a difficult task given the complexity and sensitivities surrounding the phenomenon of violent extremism. 
However, misinformation about the threat and dynamics of radicalization to violence can harm security efforts by unnecessarily creating tensions with potential community partners. The CVE national strategy implementation plan commits the federal government, including DHS and DOJ, to supporting state and local partners in their efforts to prevent violent extremism by providing CVE-related training. By identifying and communicating CVE-related training topics, DOJ could better demonstrate the extent to which it is fulfilling departmental CVE-related responsibilities as established in the implementation plan for the CVE national strategy. In addition, by proactively soliciting feedback from participants in informal CVE-related training on a more consistent basis, FBI field offices and USAOs could more effectively obtain information on the strengths and weaknesses of their presentations and briefings, and thus better ensure their quality. To better enable DOJ to demonstrate the extent to which it is fulfilling its CVE-related training responsibilities, we recommend that the Deputy Attorney General identify principal topics that encompass CVE-related training—including training that is directly related to CVE or that has ancillary benefits for CVE—and communicate the topics to DOJ components. To obtain valuable information for determining the extent to which CVE-related programs are yielding the desired outcomes and complying with the CVE national strategy, we recommend that the Deputy Attorney General direct USAOs, and the Director of the FBI's Office of Public Affairs direct FBI field offices, to consider soliciting feedback more consistently from participants in informal training, such as presentations and briefings, that covers the type of information addressed in the CVE national strategy. We provided a draft of the sensitive version of this report to DHS, DOJ, ODNI, and DOD for their review and comment.
We received written comments from DHS and DOJ, which are reproduced in full in appendixes V and VI, respectively. DHS generally agreed with the findings in its comments, and DOJ agreed with one of the recommendations in this report, but disagreed with the other recommendation. ODNI and DOD did not provide written comments on the draft report. However, ODNI provided technical comments, as did DHS and DOJ, which we incorporated throughout the report as appropriate. In its written comments, DHS noted that the report recognizes DHS’s efforts to develop and improve the quality of CVE training and identified additional efforts that the department is taking to improve communication with its various CVE stakeholders and to implement the priorities outlined in its framework for vetting CVE training. For example, DHS stated that it will be hosting a CVE train-the-trainer workshop in September 2012, and identifying trainers on its online CVE training portal who meet the standards included in DHS’s training guidance and best practices. DHS also stated that it remains committed to improving and expanding its development of CVE resources and providing information about those resources to state and local partners. DOJ stated that it generally agrees with the recommendation that the Deputy Attorney General and the Director of FBI’s Office of Public Affairs direct USAOs and FBI field offices to consider soliciting feedback more consistently from participants in informal training that covers the type of information addressed in the CVE national strategy. The department stated that it will develop a plan of action that describes how USAOs and FBI field offices will implement this recommendation. Developing such a plan should address the intent of our recommendations. DOJ, however, disagreed with the recommendation that the Deputy Attorney General identify principal topics that encompass CVE-related training and communicate those topics to DOJ components. 
According to DOJ, the CVE national strategy implementation plan assigns DOJ, through its USAOs, primary responsibility for expanding the scope of engagement and outreach events and initiatives that may have direct or indirect benefits for CVE; however, the plan does not assign the department primary responsibility for developing specific CVE-related training. We recognize that DOJ is not the lead agency for the subsection of the implementation plan related to the development of standardized CVE training; however, the CVE implementation plan nonetheless assigns DOJ as a lead or partner agency for other CVE training-related activities. For example, the implementation plan states that the FBI will lead the development of CVE-specific education modules and that DOJ will colead (1) the expansion of briefings about violent extremism for state and local law enforcement and government, and (2) the expansion of briefing efforts to raise community awareness about the threat of radicalization to violence. In addition, the implementation plan directs the FBI to develop a CVE Coordination Office, and according to the FBI, that office is in the process of developing CVE-related training. Given that DOJ has been identified as a lead or partner agency for several training-related activities identified in the implementation plan, identifying CVE training topics could help DOJ demonstrate the extent to which it is fulfilling its responsibilities under the CVE national strategy. Identifying CVE training topics could also help the FBI determine what issues it should be addressing in the training that its CVE Coordination Office is developing, and assist the department in being able to publicly account for the CVE-related training that the department provides or funds.
DOJ also stated in its comments that the draft report recommended that DOJ redefine its cultural competency training and community outreach efforts (which may have benefits for CVE) as “CVE.” DOJ then stated that redefining these efforts as such would be imprecise and potentially counterproductive, and that labeling these efforts as CVE would suggest that they are driven by security efforts, when they are not. To clarify, the report does not include a recommendation that DOJ redefine or label its cultural competency training and community outreach efforts as CVE. Although we included these topics in the framework we used to identify potentially CVE-related training for the purpose of this review, the recommendation was that DOJ identify principal topics that encompass CVE-related training and communicate such topics to DOJ components. We defer to the department to determine which topics are appropriate to cover in its CVE-related training. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees. We will also send copies to the Secretary of Homeland Security, the Attorney General, the Secretary of Defense, and the Director of National Intelligence. In addition, this report will be made publicly available at no extra charge on the GAO Website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report answers the following questions: 1. 
To what extent have the Department of Homeland Security (DHS) and the Department of Justice (DOJ) identified and communicated topics that countering violent extremism-related (CVE-related) training addresses to their components and state and local partners? 2. What, if any, concerns have been raised by state and local partners who have participated in CVE-related training provided or funded by DHS and DOJ? 3. What actions, if any, have DHS and DOJ taken to improve the quality of CVE-related training? To determine the extent to which DHS and DOJ identified and communicated topics that should be addressed by CVE-related training, we met with officials from both departments to discuss how they define CVE-related training, which departmental training programs were relevant to our review, and how the departments communicated principal CVE-related training topics to relevant components and state and local partners. We then analyzed this information to assess the extent to which the departments' efforts allow them to demonstrate fulfillment of their CVE-related training responsibilities under the CVE national strategy. We also met with officials from the Department of Defense (DOD) and Office of the Director of National Intelligence (ODNI) who possess knowledge about CVE-related training and who are involved in interagency efforts related to CVE. More specifically, we met with officials from the components and offices listed in table 3. To obtain additional views on CVE-related training provided or funded by DHS or DOJ, we interviewed representatives from nine state and local law enforcement agencies and law enforcement representative organizations involved with federal CVE-related training efforts.
They included the Minneapolis Police Department, the Los Angeles Police Department, the Las Vegas Sheriff's Department, the Arkansas State Police Program, the Dearborn Police Department, the National Sheriffs' Association, the Major Cities Chiefs Association, the International Association of Law Enforcement Intelligence Analysts, and the National Consortium for Advanced Policing. We selected these agencies and organizations based on their involvement with CVE-related training efforts and the extent to which they collaborate with DHS or DOJ on CVE-related training. While the views of these entities do not represent the views of all agencies and organizations involved in CVE-related training, these entities were able to offer helpful perspectives for the purpose of this review. We also interviewed individuals with expertise in CVE, such as academic researchers who have published on CVE-related topics and researchers from organizations that study CVE-related topics, to obtain their views on topics CVE-related training should address and identify potential training programs to include in our review. They included individuals from the Georgetown University Prince Alwaleed Bin Talal Center for Muslim-Christian Understanding, the RAND Corporation, the Foundation for Defense of Democracies, the International Centre for the Study of Radicalisation, and the National Consortium for the Study of Terrorism and Responses to Terrorism. We selected these individuals based on the depth of their experience with, and knowledge of, CVE; the relevance of their publications; referrals from other practitioners; and to develop a sample that represented various sectors (e.g., academic, advocacy, etc.). They provided valuable insight even though the perspectives they offered are not generalizable.
The state administrative agencies that we surveyed are responsible for managing DHS grant awards to states and the District of Columbia that are eligible for CVE-related training and ensuring that grant recipients comply with grant requirements. Not all state administrative agencies responded to our survey; some, such as those in California and Texas, did not. As a result, the experiences of state administrative agencies from some of the larger states may not be captured in our survey results. Nevertheless, the survey results provide insights into the level of clarity about DHS CVE-related guidance for other grantees. To obtain a better understanding of the departments' CVE-related training responsibilities, we requested information from DOJ and DHS on the approximate number and type of participants that attended training we determined was CVE-related and the estimated cost. We provide additional details on how we classified training as CVE-related below. We assessed the reliability of the training data provided by interviewing agency officials familiar with the data to learn more about the processes used to collect, record, and analyze the data. For example, we found that several training providers collected information on the number and type of participants through sign-in sheets. We used these data to approximate the dollar amount spent by agencies on CVE-related training in appendix III. As described above, we determined that the data were sufficiently reliable for showing general trends in attendance and spending, but some agencies did not record participant data, and thus could not provide them; did not record participant figures and provided estimates of attendance based on the instructor's recall; or recorded participant figures, but not the participants' places of employment, so they could not specify how many of the attendees were from state and local versus federal entities. We noted these instances in our report.
During our initial interviews with DHS and DOJ, officials expressed difficulty in responding to our request for CVE-related training materials, in part because agency officials were not clear on which training should be considered CVE-related. To facilitate our request for course materials for CVE-related training, we developed a framework to classify training as CVE-related based on our review and analysis of information from the following sources: (1) federal strategies related to violent extremism, such as Empowering Local Partners to Prevent Violent Extremism in the United States and its associated implementation plan; (2) DHS and DOJ plans, reports, or strategies that address CVE-related training topics, such as DHS's Countering Violent Extremism Training Guidance & Best Practices; and (3) perspectives provided by individuals with CVE expertise. Specifically, we conducted a content analysis of our transcripts of interviews with experts and of CVE-related documents to determine the current understanding of the content areas covered by CVE-related training and the knowledge state and local officials should possess, or principles they should understand, to effectively carry out CVE efforts. We then analyzed this information to identify similar themes and principles across the sources and grouped them into three distinct content areas CVE-related training likely addresses: 1. Radicalization addresses research-based and accurate approaches to understanding the threat radicalization poses, how individuals may become radicalized, how individuals seek to radicalize Americans (threat of violent extremist recruitment), behaviors exhibited by radicalized individuals, or what works to prevent radicalization that results in violence. 2.
Cultural competency seeks to enhance state and local law enforcement's understanding of culture or religion, and civil rights and civil liberties, or their ability to distinguish, using information-driven and standardized approaches, between violent extremism and legal behavior. 3. Community engagement addresses ways to build effective community partnerships, such as through outreach, and community capacity for the purpose of, among other things, mitigating threats posed by violent extremism. We solicited feedback on this framework from DHS and DOJ. DHS Counterterrorism Working Group officials generally agreed with the content areas we identified, and we incorporated feedback the group provided, as appropriate. DOJ officials stated that they view the framework as reasonable for the purpose of our review. For this review, we considered CVE-related training to include instruction, presentations, briefings, or related outreach efforts conducted, sponsored, promoted, or otherwise supported by DOJ, DHS, or a respective component to help state, local, or tribal entities in the three aforementioned content areas. We asked DHS and DOJ to identify and provide all course materials for any courses that they provided or funded during fiscal years 2010 and 2011 through grant programs for state and local entities, including law enforcement officers and community members, assumed to be CVE-related based on GAO's framework. We focused generally on training provided in fiscal years 2010 and 2011 because "countering violent extremism" is a relatively nascent term. In addition, we focused on training provided to state and local entities because the CVE national strategy emphasizes the importance of providing CVE-related training to these entities. While the FBI identified its National Academy as providing training that could be considered CVE-related, it did not identify any of its other programs as germane to our review.
However, complaint letters raised concerns about FBI training that was CVE-related according to our framework that was provided through two other FBI programs: the Citizens' Academy and the National Joint Terrorism Task Force. We assessed some of the training provided through these programs and determined the training to be CVE-related according to our framework. In addition, the FBI's internal review of counterterrorism training, which included the FBI programs within the scope of our review, assessed the training materials against criteria for CVE-related training, thereby suggesting that these programs may have provided training that was CVE-related. Accordingly, we requested course materials on these programs, as well as the Community Relations Executive Seminar Training Program, which is an abbreviated version of the Citizens' Academy. We received approximately 290 presentations, briefings, and course materials from two components within DHS and four within DOJ. In some cases, DHS and DOJ offices provided us only with course abstracts or agendas instead of the full presentations or course materials because (1) they contracted the training with an outside provider and did not retain all of the associated training materials or (2) the training materials were particularly voluminous and, on the basis of discussions with the offices, we agreed that the course abstracts or agendas would enable us to sufficiently determine the relevancy of the training to our review. In those cases, we determined CVE-relevancy based on the agenda or abstract alone. We reviewed these training materials to assess whether each of the individual courses, presentations, briefings, and other training-related activities undertaken or funded by DHS and DOJ agencies addressed one or more of the three content areas described above. If they addressed any of these content areas, we considered them CVE-related, even if the primary focus of the materials was not CVE-related.
To ensure consistency in our analysis, two analysts independently reviewed the materials for each training and recorded their assessment of whether the training addressed each content area. Any discrepancies in the initial determinations were then discussed and reconciled. To determine what concerns, if any, participants raised about CVE- related training, we reviewed course evaluations completed by participants of CVE-related training offered by DHS I&A, DHS Office for Civil Rights and Civil Liberties, DOJ BJA, and the FBI, and identified complaints or concerns about CVE-related training made formally in writing. We limited our analysis to training that was provided or funded by DHS or DOJ during fiscal years 2010 or 2011 and provided to a state or local entity (e.g., police department, community group, or fusion center). Two analysts independently reviewed 8,424 course evaluations from six training programs to consistently determine which ones included concerns or complaints. The analysts also assessed the nature of the concerns and complaints and assigned each complaint to one of three categories: (1) politically or culturally biased, (2) offensive, or (3) inaccurate. Where there were discrepancies between the analysts, they were resolved through supervisory review. To identify formally submitted or documented complaints or concerns participants expressed, we asked DHS and DOJ to identify those submitted in writing to DHS or DOJ, or articulated to DHS or DOJ through other means but subsequently documented by the agency, from fiscal years 2010 through 2011. We also conducted keyword searches using LexisNexis and Google to identify concerns that were raised by either individuals or advocacy groups that were submitted in writing to DHS or DOJ. 
In addition, we interviewed representatives, including leaders, of select advocacy groups that raised concerns about CVE-related training to identify what concerns and complaints, if any, they submitted in writing to DHS or DOJ on behalf of training participants. The advocacy and civil liberties organizations we interviewed included the American Civil Liberties Union, the American-Arab Anti-Discrimination Committee, the Council on American Islamic Relations, and the Muslim Public Affairs Council. We selected these organizations based on their leadership in raising concerns we identified (e.g., by virtue of being the primary signatories) and upon the recommendation of other advocacy groups. These interviews also enabled us to confirm or obtain additional views on the formally documented complaints DHS or DOJ provided. Through these approaches, we identified a total of six letters of complaint regarding 18 alleged incidents of biased CVE and counterterrorism training that DHS or DOJ provided or funded during fiscal years 2010 and 2011. Given that the scope of this review is limited to CVE-related training provided to state and local officials and community members, and not training that is exclusively provided to federal officials, we determined that 7 of the alleged incidents described in five of the letters were relevant to this review. We also interviewed relevant DHS and DOJ officials to obtain their perspectives on the concerns raised in the written complaints and information on any actions agencies took in response to these incidents. 
To address what actions, if any, DHS and DOJ have taken overall to improve the quality of CVE-related training, we interviewed DHS and DOJ officials responsible for providing or funding CVE-related training to inquire about any current or pending guidance, whether documented or undocumented, they adhere to when vetting training materials and instructors, as well as other actions they have taken to ensure the quality of CVE-related training. We reviewed relevant DHS and DOJ documents, including recently released guidance and best practices for training that DHS, DOJ, and the FBI developed. We also analyzed FBI and DOJ data from training reviews and information on how DHS and DOJ review and vet training curricula and instructors. Specifically, we analyzed the counterterrorism training materials that the FBI determined were inappropriate as a result of its internal review, which the FBI undertook to identify and purge potentially objectionable training materials. This analysis enabled us to better understand the review results with regard to training materials that were CVE-related under our framework, and provided context for the quality assurance steps the FBI has taken in response to the review. To focus our analysis on training materials included in the FBI’s review that were CVE-related, one analyst assessed which of these training materials were CVE-related according to our framework and, for those that were, entered the FBI’s observations and additional data about that training into a data collection form. A second analyst then reviewed these results. When there was disagreement, the two reviewers discussed the material, reached agreement, and modified the entries as necessary to ensure concurrence regarding which of the training materials included in the FBI’s review were germane to our review.
The FBI considers the methodology it used to conduct its internal review and our analysis of the training materials that the FBI considered objectionable to be For Official Use Only; therefore, we did not include that information in this report. In addition, we conducted a site visit in San Diego, California, in January 2012, where DHS hosted a pilot of a CVE-related course under development. During the site visit, we observed the pilot training and interviewed the DHS officials who were sponsoring the training and officials from the local agencies that had developed and delivered the course curriculum. On the basis of the information we collected, we evaluated DHS’s adherence to its own CVE-related training guidance. We also assessed DHS and DOJ guidance and actions related to guidance provided by departmental leadership, such as DOJ training guidance issued to its components.

We conducted this performance audit from October 2011 through October 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

DHS is currently working with its components and relevant state and local entities to develop and implement CVE-focused training for state and local law enforcement officers, state police academy recruits, correctional facility officers, and new federal law enforcement officers. DHS’s Principal Deputy Counterterrorism Coordinator, who heads the department’s CVE efforts, has testified that developing CVE-related training is a priority for the department because inappropriate or inaccurate training undermines community partnerships and negatively affects efforts of law enforcement to identify legitimate behaviors and indicators of violent extremism.
DHS has determined that CVE-related training should address: violent extremism (e.g., the threat it poses), cultural demystification (e.g., education on culture and religion), community partnerships (e.g., how to build them), and community policing efforts (e.g., how to apply community policing efforts to CVE). Accordingly, the DHS Counterterrorism Working Group, which is overseen by the Principal Deputy Counterterrorism Coordinator, is developing training that addresses these topics. These trainings include the following:

A continuing education CVE curriculum for frontline and executive state and local law enforcement that DHS is developing with the Los Angeles Police Department, the Major Cities Chiefs Association (MCC), and the National Consortium for Advanced Policing (NCAP). DHS hosted a first pilot for this course in San Diego, California, in January 2012 that 45 state and local law enforcement officials attended. The pilot consisted of 3 days of classroom instruction and student participation activities. According to Counterterrorism Working Group officials, DHS held a second pilot in the National Capital Region in July 2012, and a third pilot in Minneapolis, Minnesota, in August 2012. In July 2012, DHS also presented the curriculum at a CVE conference it hosted in Washington, D.C., and according to Counterterrorism Working Group officials, the department is working to enhance the curriculum based on feedback that conference attendees provided. MCC has passed a motion to adopt the curriculum, which DHS aims to implement in collaboration with state and local partners in 2013.

CVE-related training modules for state police academies, which DHS is developing in collaboration with the International Association of Chiefs of Police (IACP). These training modules will be 1 to 2 hours in length and are intended for police recruits.
DHS plans for police academies to introduce the modules into their training and to make them available online for police recruits by the end of 2012.

A CVE awareness training for correctional facility, probation, and patrol officers at the state and local levels that DHS is working to develop in collaboration with the Bureau of Prisons, the FBI National Joint Terrorism Task Force, and the Interagency Threat Assessment Coordination Group. Counterterrorism Working Group officials reported that DHS completed pilots for this training in Maryland in March 2012 and in California in July 2012. FEMA is also developing a curriculum for rural correctional facility management.

Further, according to DHS officials, the Federal Law Enforcement Training Center (FLETC) has finalized a CVE-related training course that it integrated into its existing training for recruits. In February 2012, DHS hosted a symposium on the curriculum, and as of July 2012, FLETC had taught the curriculum to about 190 students. In addition, according to DHS officials, FLETC is in the process of integrating aspects of the DHS Office for Civil Rights and Civil Liberties’ cultural competency training, which is described in detail in appendix III, into all new CVE curriculum and training efforts.

Within DOJ, the FBI is also developing CVE-related training. The CVE national strategy implementation plan tasks the FBI with establishing a CVE Coordination Office that will, as part of its activities, coordinate with the National Task Force on CVE-specific education and awareness modules. According to FBI officials, the FBI established a CVE office in January 2012, and as of August 2012, had assigned staff to the office and was in the process of developing CVE-related training modules.
In particular, the CVE Office developed and presented a CVE-related training module to FBI public affairs specialists and community outreach coordinators and specialists in FBI field offices from April through August 2012, according to FBI officials. FBI officials also reported that the CVE Office is collaborating with the FBI Counterterrorism Division to develop a CVE-related training module for FBI special agents and mid- and senior-level managers that it plans to complete in December 2012 and implement in early 2013.

DOJ and DHS components provided training that was CVE-related according to our framework to more than 28,000 state and local entities, including law enforcement officials, fusion center personnel, and community members, during fiscal years 2010 and 2011. That is, DOJ and DHS components provided training, including courses, briefings, presentations, and workshops, that addressed one or more of the three CVE-related training topical areas we identified: (1) the phenomenon of violent extremism and the threat posed by radicalization that leads to violence; (2) cultural competency and how to distinguish between criminal and constitutionally protected cultural and religious behaviors; and (3) how to build effective community partnerships to, among other things, mitigate threats posed by violent extremism. The majority of these trainings did not have the term “CVE” in their titles, a fact that DOJ and DHS officials attributed to CVE being a relatively new concept or to the trainings having been developed for purposes other than CVE. Nonetheless, they provided some instruction on at least one of the three CVE-related training topics we identified, and thus are considered CVE-related for the purpose of this review.
Although the CVE-related trainings that DOJ and DHS provided collectively addressed all three CVE-related training topics, the trainings more frequently addressed the phenomenon of violent extremism and cultural competency than community engagement. The specific topics addressed by each training DOJ and DHS components provided during fiscal years 2010 and 2011 are described in the tables that follow. In addition, the DOJ grant-funded State and Local Anti-Terrorism Training (SLATT) Program provided CVE-related training to approximately 11,000 state and local law enforcement officials.

Within DOJ, the FBI, CRS, and U.S. Attorneys’ Offices (USAO) provided CVE-related training directly to state and local entities during fiscal years 2010 and 2011. In total, these entities provided CVE-related training to more than 15,000 state and local law enforcement and community members. More specifically, the FBI National Academy, the FBI National Joint Terrorism Task Force (NJTTF) Program, select FBI field offices, CRS, and about half of USAOs (48 of 93 offices) provided CVE-related training to law enforcement. In addition, the FBI’s Citizens’ Academy and Community Relations Executive Seminar Training (CREST) outreach programs provided CVE-related training to community members. Tables 4, 5, and 6 provide more detailed information on these programs and trainings.

Although we determined that CRS provided CVE-related training according to our framework, CRS officials emphasized that the service’s mission does not include any national security, counterterrorism, or CVE-related training efforts. CRS works with communities to help address tension associated with allegations of discrimination on the basis of race, color, or national origin. CRS also works with communities to develop strategies to prevent and respond more effectively to alleged violent hate crimes on the basis of race, color, national origin, gender, gender identity, sexual orientation, religion, or disability.
According to CRS officials, through its work preventing hate crimes, CRS helps develop relationships among Arab, Muslim, and Sikh communities (who may be targeted for hate violence by violent extremists, including supremacists) and other community members, as well as local government and law enforcement officials. As a result, CRS does not conduct activities or programs with the express goal of CVE, but recognizes that its ability to help promote dialogue and develop strong relationships to create a sense of inclusion in communities may have ancillary CVE benefits in preventing violent extremism.

Within DHS, the Office for Civil Rights and Civil Liberties Institute and I&A provided CVE-related training to approximately 3,410 state and local intelligence and law enforcement officials during fiscal years 2010 and 2011. This training consisted of two classroom-based courses that the Office for Civil Rights and Civil Liberties Institute provided on about 40 occasions; one CVE-focused workshop that the I&A State and Local Program Office hosted; and 17 briefings that the I&A Homegrown Violent Extremism Branch (HVEB) provided, in coordination with the FBI and NCTC, at fusion centers and fusion center conferences. Table 7 provides more detailed information on each of these trainings.

DOJ and DHS also administered four grant programs during fiscal years 2010 and 2011 that provided funding for which CVE-related training was an eligible expense: (1) the DOJ Community Policing Development (CPD) Program, (2) the DOJ Edward Byrne Memorial Justice Assistance Grant (JAG) Program, (3) the DHS Homeland Security Grant Program (HSGP), and (4) the DOJ SLATT Program. We reviewed grant documentation for CPD grant projects that DOJ identified as potentially CVE-related and determined that they were not used to pay for training that was CVE-related according to our framework.
Information DHS and DOJ collect on grant projects funded through the HSGP and JAG programs suggests that minimal, if any, funds from these programs were used for CVE-related training purposes; however, the level of detail in the information the departments collect from HSGP and JAG grantees is not sufficient to reliably and conclusively make this determination. In fiscal years 2010 and 2011, SLATT provided CVE-related training to approximately 11,000 state and local officials. Additional details regarding this training are provided in table 8.

Table 9 presents a summary of the 77 state and local participant concerns that we identified during our review of course evaluation forms that DHS and DOJ provided to us.

In addition to the contact named above, Kristy N. Brown, Assistant Director, and Taylor Matheson, Analyst-in-Charge, managed this assignment. Melissa Bogar and Lerone Reid made significant contributions to this report. Gustavo Crosetto, Pamela Davidson, Richard Eiserman, Eric Hauswirth, Thomas Lombardi, Linda Miller, Jan Montgomery, and Anthony Pordes also provided valuable assistance.
|
DHS and DOJ have responsibility for training state and local law enforcement and community members on how to defend against violent extremism--ideologically motivated violence to further political goals. Community members and advocacy organizations have raised concerns about the quality of some CVE-related training that DOJ and DHS provide or fund. As requested, GAO examined (1) the extent to which DHS and DOJ have identified and communicated topics that CVE-related training should address to their components and state and local partners, (2) any concerns raised by state and local partners who have participated in CVE-related training provided or funded by DHS or DOJ, and (3) actions DHS and DOJ have taken to improve the quality of CVE-related training. GAO reviewed relevant documents, such as training participant feedback forms and DHS and DOJ guidance, and interviewed relevant officials from DHS and DOJ components. This is a public version of a sensitive report that GAO issued in September 2012. Information that the FBI deemed sensitive has been redacted.

The Department of Homeland Security (DHS) has identified and is communicating to its components and state and local partners topics that the training on countering violent extremism (CVE) it provides or funds should cover; in contrast, the Department of Justice (DOJ) has not identified what topics should be covered in its CVE-related training. According to a DHS official who leads DHS's CVE efforts, identifying topics has helped to provide a logical structure for DHS's CVE-related training efforts. According to DOJ officials, even though they have not specifically identified what topics should be covered in CVE-related training, they understand internally which of the department's training is CVE-related and contributes either directly or indirectly to the department's training responsibilities under the CVE national strategy.
However, over the course of this review, the department generally relied upon the framework GAO developed for potential CVE-related training topics to determine which of its existing training was CVE-related. Further, because DOJ has not identified CVE-related training topics, DOJ components have had challenges in determining the extent to which their training efforts contribute to DOJ's responsibilities under the CVE national strategy. In addition, officials who participated in an interagency working group focusing on ensuring CVE-related training quality stated that the group found it challenging to catalogue federal CVE-related training because agencies' views differed as to what CVE-related training includes. The majority of state and local participant feedback on training that DHS or DOJ provided or funded and that GAO identified as CVE-related was positive or neutral, but a minority of participants raised concerns about biased, inaccurate, or offensive material. DHS and DOJ collected feedback from 8,424 state and local participants in CVE-related training during fiscal years 2010 and 2011, and 77--less than 1 percent--provided comments that expressed such concerns. According to DHS and DOJ officials, agencies used the feedback to make changes where appropriate. DOJ's Federal Bureau of Investigation (FBI) and other components generally solicit feedback for more formal, curriculum-based training, but the FBI does not require this for activities such as presentations by guest speakers because the FBI does not consider this to be training. Similarly, DOJ's United States Attorneys' Offices (USAO) do not require feedback on presentations and similar efforts. Nevertheless, FBI field offices and USAOs covered about 39 percent (approximately 9,900) of all participants in DOJ CVE-related training during fiscal years 2010 and 2011 through these less formal methods, yet only 4 of 21 FBI field offices and 15 of 39 USAOs chose to solicit feedback on such methods. 
GAO has previously reported that agencies need to develop systematic evaluation processes in order to obtain accurate information about the benefits of their training. Soliciting feedback for less formal efforts on a more consistent basis could help these agencies ensure their quality. DOJ and DHS have undertaken reviews and developed guidance to help improve the quality of CVE-related training. For example, in September 2011, the DOJ Deputy Attorney General directed all DOJ components and USAOs to review all of their training materials, including those related to CVE, to ensure they are consistent with DOJ standards. In addition, in October 2011, DHS issued guidance that covers best practices for CVE-related training and informs recipients of DHS grants who use the funding for training involving CVE on how to ensure high-quality training. Since the departments' reviews and efforts to implement the guidance they have developed are relatively new, it is too soon to determine their effectiveness. GAO recommends that DOJ identify and communicate principal CVE-related training topics and that FBI field offices and USAOs consider soliciting feedback more consistently. DOJ agreed that it should more consistently solicit feedback, but disagreed that it should identify CVE training topics because DOJ does not have primary responsibility for CVE-related training, among other things. GAO believes this recommendation remains valid as discussed further in this report.
|
Local governments have primary responsibility for wastewater treatment, owning and operating more than 17,000 treatment plants and 24,000 collection systems nationwide. Local ratepayers have long been relied upon to fund both construction costs and operating and maintenance costs associated with facilities serving their communities. However, the federal government has provided financial assistance for these wastewater treatment facilities since the enactment of the Water Pollution Control Act Amendments of 1956, which established the federal Construction Grants program. Through this program, the federal government provided grants directly to local governments for constructing treatment facilities but limited the federal contribution to the lesser of 30 percent of eligible construction costs or $250,000. The Federal Water Pollution Control Act Amendments of 1972, commonly known as the Clean Water Act, increased the federal share of costs to 75 percent. According to the Congressional Budget Office, federal outlays for wastewater treatment grants rose tenfold during the 1970s, reaching a high of $8.4 billion in 1980. Subsequent amendments in 1981 and 1987 reduced and then phased out the construction grant program, replacing it with the CWSRF. Instead of providing grants directly to localities, the CWSRF provides federal grants to the states, which in turn provide loans to communities and other entities to finance wastewater treatment and other water quality projects. The 1987 law established a system in which the states would use the loan repayments to finance future CWSRF loans, thereby allowing the state revolving funds to operate without sustained federal support. Congress authorized appropriations through 1994 but has continued to appropriate funds to the CWSRF each year since. The transfer of federal funds to state-level CWSRFs begins when Congress appropriates funds annually to the EPA. EPA then allots capitalization grants to the individual states. 
The Clean Water Act also requires states to provide state funds to match 20 percent of the total federal CWSRF capitalization grants. To receive its allotment, a state must provide an Intended Use Plan that lists potential projects to solve water quality problems and solicit public comments on that list. After completing the plan and receiving its capitalization grant, a state has up to 1 year to enter binding commitments (later converted into loan agreements) with potential borrowers to fund specific water quality projects. The majority of CWSRF borrowers are municipalities and other local units of government, although in some states nonprofit organizations, businesses, farmers, homeowners, and watershed groups are eligible to seek nonpoint source funding through the CWSRF. According to an EPA headquarters official, a single CWSRF loan may support multiple clean water projects. State CWSRF administrators set loan terms, interest rates, and repayment periods. Loan repayments are cycled back into the state-level fund and used for additional water quality projects. States also have the option of using CWSRF funds as collateral to borrow in the public bond market to increase the pool of available funds, a process referred to as “leveraging.” Figure 1 illustrates the flow of funds through the CWSRF program. States can use their CWSRF resources to construct or upgrade wastewater infrastructure, address nonpoint sources of pollution, or develop or implement management plans in federally-designated estuaries. States use a state-developed, EPA-approved, ranking system to direct funds to the highest priority projects. The ranking system considers applicant communities’ current regulatory compliance status, imminent public and environmental health threats, and the relative importance of the affected bodies of water. States are not required to fund these projects in priority order; decisions on which projects to fund first are often based on a project’s readiness to proceed. 
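The revolving mechanism described above (a federal capitalization grant, the required 20 percent state match, and loan repayments that cycle back into the fund for new loans) can be illustrated with a simplified, hypothetical simulation. The dollar amounts, interest rate, and repayment term below are illustrative assumptions, not actual program data, and the sketch ignores leveraging, fees, and defaults.

```python
# Simplified, hypothetical sketch of a state revolving fund: the federal
# capitalization grant plus the 20 percent state match seed the fund, a loan
# is issued for a water quality project, and the amortized repayments revolve
# back into the fund to finance future loans. Figures are illustrative only.

def capitalize(federal_grant):
    """Seed the fund with the federal grant plus the required 20 percent state match."""
    state_match = 0.20 * federal_grant
    return federal_grant + state_match

def annual_payment(principal, rate, years):
    """Level annual payment on an amortizing loan (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

fund = capitalize(federal_grant=100.0)   # e.g., a $100 million grant yields a $120 million fund
loan = 60.0                              # hypothetical loan to a municipality
fund -= loan
payment = annual_payment(loan, rate=0.02, years=20)

# Each year's repayment (principal plus interest) returns to the fund,
# leaving it slightly larger than before the loan was made.
for _ in range(20):
    fund += payment

print(round(capitalize(100.0), 1))  # 120.0
print(round(fund, 1))
```

Because repayments include interest, the fund ends the 20-year cycle with more than its starting balance, which is how the program is designed to keep operating without sustained federal support.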
However, states must first use their CWSRFs to ensure that existing wastewater treatment facilities are in compliance with, or are making progress toward, deadlines, goals, and requirements of the Clean Water Act. After meeting this “first use” requirement, states may use their CWSRFs to construct other wastewater infrastructure or for nonpoint source pollution and estuary management projects. Taken together, states have loaned the majority of their CWSRF dollars—96 percent, or about $50 billion since 1987—to build, upgrade, or enlarge conventional wastewater treatment facilities and conveyances. Direct CWSRF support for nonpoint source activities represents only 4 percent of CWSRF dollars (about $2 billion), although it accounts for over a quarter of all CWSRF projects financed. Nationwide, 23 percent of CWSRF funds (64 percent of all CWSRF loan agreements) were devoted to water quality projects in communities with populations of less than 10,000 people. All 51 CWSRF programs use the large majority of their CWSRF resources for conventional wastewater infrastructure projects.

From fiscal year 1987 through June 2005, the Clean Water State Revolving Fund program provided over $52 billion in financial assistance to local governments and others for a variety of water quality improvement projects across the nation. States provided about 96 percent of this amount—or $50 billion—to municipalities to build, upgrade, or enlarge conventional wastewater treatment facilities and conveyances. EPA reports that conventional wastewater infrastructure projects account for about 73 percent of all CWSRF-funded projects. By their nature, wastewater infrastructure projects are typically much more expensive to complete than nonpoint source projects. Figure 2 illustrates the relative funding for the types of projects receiving CWSRF assistance: nonpoint source projects, $2 billion; wastewater treatment projects, $50 billion; total CWSRF funding, $52.7 billion.
According to EPA, $600 million of available CWSRF resources support short-term planning and design activities and, as such, have not yet been allocated by the states among the qualifying categories of expense. However, EPA expects that these funds will be allocated (most likely to wastewater infrastructure projects) when rolled into longer-term construction projects.

Within the conventional wastewater treatment category, states may allocate their CWSRF resources among the following seven major categories of projects:

Secondary Treatment includes infrastructure designed to ensure that wastewater treatment plant effluent meets EPA’s secondary treatment standards, a requirement of all new and existing wastewater treatment facilities.

Advanced Treatment includes infrastructure designed to further remove nutrients and other matter from wastewater treatment plant effluent beyond secondary treatment standards.

New Sewers includes the construction of new wastewater conveyances—such as collector and interceptor sewers—to carry household and industrial wastewater to treatment facilities.

Sanitary Sewer Overflow correction includes efforts to prevent the occasional or incidental discharge of untreated sewage from municipal sanitary sewer systems that can occur due to inclement weather and improper maintenance or operation of sewer systems.

Combined Sewer Overflow correction includes efforts to prevent or mitigate discharges of untreated wastewater from combined sewer systems, which are designed to collect rainwater runoff, domestic sewage, and industrial wastewater in the same pipe. Combined sewer systems were designed in many cities to occasionally discharge excess wastewater directly to nearby water bodies. However, such overflows often pose significant public health and pollution problems and have become a national enforcement priority for EPA.
Storm Water Sewers includes both storm water infrastructure and efforts to plan and implement municipal storm water management programs.

Recycled Water Distribution includes projects to convey recycled water (i.e., treated wastewater) from treatment facilities to end users such as golf courses and municipal gray water systems.

As shown in figure 3, nationwide, states have allocated about 60 percent of their CWSRF wastewater infrastructure dollars for secondary and advanced treatment projects at wastewater treatment facilities. The remainder supports sewers and other conveyances.

Since the CWSRF’s inception, the total dollar amounts that states annually provide for wastewater infrastructure and nonpoint source projects have increased. However, CWSRF support for wastewater infrastructure has increased at a greater pace than the amount for nonpoint source projects. Figure 4 shows that states have used their CWSRFs to finance wastewater infrastructure projects since 1987 but only began to use them to support nonpoint source projects in 1990. The annual percentage of the CWSRFs states allocated to nonpoint source projects peaked in 1996 at about 10 percent. Direct CWSRF support for nonpoint source pollution control activities represents only 4 percent (about $2 billion) of CWSRFs allocated by the states but accounts for over 25 percent of all CWSRF-supported projects because nonpoint source projects are typically less expensive than wastewater infrastructure projects.

The extent to which states have used their CWSRFs to support nonpoint source projects varies. To date, 37 states have reported using some portion of their CWSRF funds to directly support nonpoint source projects. Among them, Wyoming has allocated the greatest percentage of funds to nonpoint source projects (44 percent), while New York has allocated the greatest dollar amount (over $700 million).
Figure 5 illustrates the percentage of funding that all 51 programs have allocated to nonpoint source projects since the CWSRF’s inception. Detailed state-by-state figures are provided in appendix II.

To be eligible for CWSRF support, a nonpoint source pollution control project must help implement a state’s EPA-approved Nonpoint Source Pollution Management Plan. Each state determines which nonpoint source pollution control activities are eligible for funding. Nationally, there are 11 major categories of nonpoint source pollution control projects that have received CWSRF support:

Agricultural Best Management Practices include projects to reduce water pollution resulting from activities related to the production of animals and food crops. Projects can include nutrient management practices for the storage and disposal of animal waste; techniques to minimize pollution related to agricultural activities such as grazing, composting, pesticide spraying, planting, harvesting, fertilizing, and tillage; and irrigation water management.

Individual/Decentralized Sewage Treatment encompasses the rehabilitation or replacement of individual septic tanks or community sewage disposal systems. This category also includes the construction of collector sewers to transport waste from individual septic systems to a cluster septic tank or other decentralized facility.

Groundwater-Unknown Source relates to the protection of groundwater and includes projects to protect wellheads and prevent contamination in areas where groundwater is replenished.

Storage Tanks include tanks above or below ground designed to hold petroleum products or chemicals. Projects may include spill containment systems; the upgrade, rehabilitation, or removal of leaking tanks; and the treatment of contaminated soils and groundwater.

Sanitary Landfills includes activities to manage water pollution emanating from landfills, such as collection of leachate or on-site treatment, capping, and closure.
Silviculture includes best management practices related to forestry activities such as timber harvesting, removal of streamside vegetation, road construction, and mechanical preparation for the planting of trees. Eligible activities include preharvest planning, streamside buffers, road management, and re-vegetation of disturbed areas.

Marina includes water pollution control activities related to boating and freshwater marinas. Pump-out systems, oil containment booms, and efforts to minimize discharge of sewage from boats are included in this category.

Resource Extraction includes pollution control activities related to mining and quarrying. Projects supported can include the construction of detention berms and the revegetation of areas affected by mining activities.

Brownfields include abandoned, idle, and underused industrial sites. Eligible projects include groundwater monitoring wells, treatment of contaminated soils and groundwater, capping of contaminated areas to prevent storm water infiltration, and removal of storage tanks at brownfields.

Hydromodification relates to water channel modification, dam construction, stream bank and shoreline erosion, and wetland or riparian area protection or restoration. Examples of eligible activities include conservation easements; shore erosion control; wetland development and restoration; installation of open, vegetated drainage channels designed to detain and/or treat storm water; and bank and channel stabilization.

Urban includes activities related to erosion, sedimentation, and discharge of pollutants (e.g., oil, grease, road salt, toxic chemicals) from construction sites, roads, bridges, and parking lots.

As shown in figure 6, states have provided the greatest level of nonpoint source support—almost 40 percent of all CWSRF nonpoint source dollars—to mitigate contaminated runoff from sanitary landfills.
Although sanitary landfill projects received the largest share of CWSRF nonpoint source dollars, EPA reports that agricultural best management practices account for over 55 percent of all nonpoint source projects receiving CWSRF support. Agricultural best management practices—such as constructing a manure retention pond to control pollution created by contaminated storm water runoff—are typically less expensive than other types of nonpoint source projects. EPA also reports that the construction or repair of decentralized or individualized wastewater treatment systems (i.e., septic systems) accounted for about another one-third of all CWSRF-supported nonpoint source projects. Twelve states have reported to EPA that they have indirectly addressed nonpoint sources of pollution with projects categorized under wastewater treatment infrastructure. This may occur, for example, when a state provides a loan to build a centralized collection system or wastewater treatment plant to replace failing individual septic systems, which EPA and the states define as a nonpoint source of water pollution. Because the solution to the nonpoint source pollution problem is technically a wastewater treatment facility, EPA considers the expenditure to be in the wastewater infrastructure category. As detailed in table 1, these 12 states have devoted at least $650 million of their collective financing for wastewater infrastructure projects to address nonpoint sources of pollution. Figure 7 shows that since the inception of the CWSRF program, small communities—defined by EPA as having less than 10,000 inhabitants—have received about 23 percent of total CWSRF dollars. In contrast, over 60 percent of all CWSRF loan agreements supported projects within these smaller communities. Figure 8 shows the considerable degree to which the states vary in the extent to which their CWSRFs support small communities.
It illustrates, for example, that just over half of the CWSRF programs have provided 30 percent or more of their CWSRF funds for projects in small communities. Pennsylvania has provided the greatest dollar amount ($914 million), as well as a high percentage of loans (90 percent), to projects in small communities. At the other end of the spectrum, California has directed the lowest percentage of its CWSRF dollars (4 percent) and loans (15 percent) to projects in small communities. Our interviews with state and EPA officials suggest that the diversity states exhibit in their CWSRF spending reflects the variation in what they see as their most pressing water quality infrastructure needs, their most pressing water quality problems, and the degree to which they rely on CWSRF funds to protect smaller communities. EPA and state officials predict that, in future years, states are likely to alter their current CWSRF allocation strategies in response to growing demand and shifting clean water needs and priorities. Some states have focused their CWSRFs on supporting the construction of wastewater treatment plants and conveyance systems. According to EPA officials, these states consider wastewater infrastructure needs their highest CWSRF priority and seek other sources of funding to support nonpoint source pollution problems and estuary management activities. In some cases, state legislation restricts the use of CWSRFs for nonpoint source projects. For example, the legislation that created Alabama’s CWSRF limits the scope of the program by defining projects that receive CWSRF funds as traditional public wastewater facilities. Other states have passed legislation restricting the types of entities that can receive CWSRF loans. Nevada and Colorado, for example, have limited their CWSRF borrowers to local municipalities or similar government entities, thereby excluding private or nongovernmental entities from receiving CWSRF funds.
Even where state law allows CWSRF funds to be used for nonpoint source projects, some state CWSRF administrators have told EPA officials that they are not comfortable with using CWSRF funds for this purpose, especially when demand for funding for wastewater infrastructure projects in their states is high. For example, according to officials in EPA’s New York Regional Office, large parts of Puerto Rico lack basic sewers and wastewater treatment facilities. Consequently, Puerto Rico’s CWSRF has focused on these needs. Similarly, according to officials in EPA’s Kansas City Regional Office, Kansas has focused on wastewater treatment projects due to high levels of borrower demand for support for these types of projects. Some states that are willing and legally able to fund both wastewater infrastructure and nonpoint source projects have not done so because of low borrower demand for nonpoint source projects. Officials in EPA’s Dallas and Atlanta Regional Offices told us that Louisiana, Kentucky, and New Mexico are willing to fund nonpoint source projects but have not done so because of a lack of borrower demand. Similarly, North Carolina and Texas CWSRF officials explained that groups that typically implement nonpoint source projects often pursue grant money for their projects from federal, state, or private sources rather than CWSRF loans. CWSRF officials in states we visited indicated that nonpoint source borrowers are often reluctant to accept a CWSRF loan because they lack a dedicated source of revenue to repay it. While wastewater treatment plants can depend on user rates for loan repayments, nonpoint source borrowers may not have a readily available or dedicated source of revenue to repay a loan. As such, these officials suggest that the availability of grants through other federal- or state-funded programs may affect the level of demand for CWSRF loans for nonpoint source projects. 
As of June 2005, 37 states reported using some portion of their CWSRF funds to support nonpoint source projects, up from only 2 states in 1990. The considerable progress in restoring the nation’s waterways since the passage of the Clean Water Act is largely attributable to significant efforts to reduce pollutant levels from point sources of pollution, which are those that contribute pollutants directly to a body of water from a pipe or other conveyance. However, EPA reports that one-third of the nation’s assessed waters still do not meet water quality standards. Recognizing the considerable role of nonpoint source pollution in these standards violations, the majority of states have decided to focus at least some attention on addressing these problems with their CWSRF resources. EPA has encouraged all states to use a watershed management approach to solving water quality problems, which, according to state and EPA officials, has increased the number of states addressing nonpoint source pollution with their CWSRFs. While traditional water quality programs have focused on specific sources of pollution, such as sewage discharges, or on specific water resources, such as a river segment or a wetland, a watershed management approach addresses water quality problems at the watershed level. According to officials at EPA headquarters and several regional offices, this approach to water quality management often highlights the role of nonpoint source pollution in noncompliance issues. These officials suggested that states using a watershed management approach are more likely to fund nonpoint source projects with CWSRF resources. Additionally, CWSRF officials in Ohio and Minnesota told us that developments in water quality monitoring technologies and expansion of monitoring efforts have helped their states better identify nonpoint sources of pollution.
According to these officials, the role of nonpoint source pollution in noncompliance has been “uncovered” over the years as they have improved monitoring efforts and as point sources of pollution—such as wastewater treatment facilities—are brought into compliance. Some states have been highly proactive in encouraging use of CWSRF funds to support nonpoint source projects. In an effort to ensure that CWSRFs address nonpoint source problems, for example, some states have passed legislation setting aside a portion of their CWSRFs to be used exclusively for nonpoint source projects. Washington state regulations, for instance, require that CWSRF administrators reserve up to 20 percent of available funds for nonpoint source pollution control and comprehensive estuary conservation and management projects. Other states have developed innovative lending approaches to overcome some of the barriers to funding nonpoint source projects with CWSRF resources. To increase the number of nonpoint source borrowers while minimizing loan transaction costs, some states pass CWSRF loan risks and loan servicing responsibilities on to third parties. These states have established pass-through lending or linked-deposit programs, whereby loans are passed through state agencies, municipalities, or local banks before reaching the borrower. Minnesota’s CWSRF program, for example, works with the Minnesota Department of Agriculture to allocate a portion of its funds to counties, soil and water conservation districts, and others to help establish minirevolving loan accounts. These local units of government work with local financial institutions to provide low-interest loans for projects proposed by farmers, rural landowners, and agriculture supply businesses to implement, among other things, agricultural best management practices. The local units of government approve eligible projects and refer borrowers to the local financial institutions.
Using CWSRF funds from the minirevolving loan account, the bank provides low-interest loans to qualified borrowers. The lending institution assumes the risk and management responsibility for the loan. Other states—such as Massachusetts and Missouri—have set up similar pass-through loan programs to address nonpoint sources of pollution with CWSRF funds. To overcome the challenge of finding a dedicated source of repayment for nonpoint source projects, Ohio’s Water Resource Restoration Sponsor Program integrates CWSRF support for nonpoint source projects into loans for wastewater treatment plants. According to Ohio CWSRF officials, communities seeking a CWSRF loan for a wastewater treatment facility can receive a discount to the interest payments that would otherwise be due on their wastewater project loans. After the wastewater facility loan has been awarded, the amount of the interest discount is advanced to the community, which then assumes responsibility for financing the implementation of the associated nonpoint source project. In return, the community receives a reduction to its wastewater facility loan’s interest rate of up to 0.2 percent. A community that participates in this program does not typically implement the nonpoint source project itself. Rather, it enters into an agreement with an implementing partner, such as a land trust or a park district. Using the interest discount funds, this partner develops and implements a nonpoint source project (such as a plan to restore and permanently protect a waterbody’s aquatic habitat resources) but does not repay the CWSRF. Instead, the sponsoring community covers the cost as part of its repayment of its wastewater facility loan. According to Ohio officials, the benefit of the state’s program is that water restoration projects that may not normally receive CWSRF funding are completed with the help of the wastewater treatment plants.
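The sponsorship mechanics described above reduce to simple interest arithmetic: the funds available to the sponsored nonpoint source project are roughly the interest the community saves from the rate discount. The sketch below is a hypothetical illustration only (the loan amount, rates, term, and level-principal repayment schedule are assumptions, not Ohio's actual program terms):

```python
def sponsorship_discount(principal, base_rate, discount, years):
    """Estimate the interest saved over the life of a wastewater loan
    when its rate is reduced by `discount`, assuming level-principal
    amortization with annual payments. That savings approximates the
    amount freed up for a sponsored nonpoint source project."""
    def total_interest(rate):
        balance = principal
        paid = 0.0
        annual_principal = principal / years
        for _ in range(years):
            paid += balance * rate      # interest on remaining balance
            balance -= annual_principal
        return paid

    return total_interest(base_rate) - total_interest(base_rate - discount)

# Hypothetical example: a $10 million, 20-year loan at 3.0 percent,
# discounted by 0.2 percentage points.
savings = sponsorship_discount(10_000_000, 0.030, 0.002, 20)  # $210,000
```

Under these assumed terms, a 0.2 percentage-point discount frees up a few percent of the loan principal over its life, which gives a sense of the scale of nonpoint source work a single sponsorship can fund.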
Based in part on the program’s success, Ohio officials have decided to set aside $15 million of CWSRF resources each year for their Water Resource Restoration Sponsor Program. A few other states are in the process of establishing similar sponsorship programs. Just as states vary in the way they allocate CWSRF resources according to water quality needs, they also vary in the extent to which they target borrowers in small or economically disadvantaged communities. Smaller communities may struggle more to raise capital for water quality infrastructure than larger communities with broader tax and rate bases. In 1992, Congress directed EPA to establish a Small Town Environmental Planning Task Force to, among other things, advise EPA on how to work better with small communities. The task force found that technical and administrative capacity is often severely limited in small towns, which often lack full-time officials and professional staff. Moreover, the task force found that small communities tend to have severely limited tax bases and budgets and, therefore, may not have the necessary credit ratings to attract capital to finance their wastewater infrastructure. In addition, infrastructure costs fall disproportionately on small towns because entry-level costs must be distributed over a smaller base. Recognizing these challenges, some states—such as Montana, Pennsylvania, and West Virginia—use their CWSRFs to help rural, low-income communities meet required sewage and water quality standards. In Pennsylvania, almost 90 percent of all CWSRF loan agreements and 75 percent of total funding is directed to projects in small communities. Several states have set aside a portion of their funds for CWSRF-funded projects in small or economically disadvantaged communities. For example, Oregon reserves up to 15 percent of its CWSRF to support projects in communities with populations of 5,000 or less that are facing severe water quality problems.
According to EPA and state officials, some CWSRF programs have rules to protect the ability of small communities to access CWSRF funds. For example, some states such as New York and Minnesota have placed limits on the amount of CWSRF support any one borrower—such as a major metropolitan area—can receive in a given year. A number of states offer small or economically disadvantaged communities special assistance when applying for CWSRF loans. For example, Ohio offers CWSRF loans with (1) a zero percent interest rate to communities with populations of less than 2,500 and a median household income of less than $45,000 and (2) a 1 percent interest rate to those with populations between 2,500 and 10,000 and a median household income of less than $38,000. West Virginia CWSRF administrators are able to extend repayment terms up to 40 years to qualified disadvantaged communities to help make projects more affordable. Kentucky offers special state-funded, short-term loans to small communities to help them cover expenses related to obtaining a CWSRF loan. Montana has developed special outreach and technical assistance programs to help small communities take advantage of the CWSRF program. Montana officials explained that many small communities lack the necessary administrative structures to receive a CWSRF loan or lack the technical expertise to develop competitive applications for CWSRF loans. The state has contracted with the Rural Community Assistance Partnership, a nonprofit organization, to provide technical assistance to rural and small communities to guide them through the process of developing a competitive application and set up the necessary administrative structures to receive a CWSRF loan. Officials in several small Montana communities told us that, without this technical assistance, they would not have been able to receive the CWSRF loans that were critical to the financing of their wastewater infrastructure. 
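Several of the small-community provisions described above are, in effect, simple eligibility tiers keyed to population and income. The sketch below uses the Ohio thresholds cited in the text purely as illustrative inputs; the actual program rules are more detailed, and the function name and return convention are assumptions for this example:

```python
def small_community_rate(population, median_household_income):
    """Illustrative tiered-rate lookup based on the Ohio thresholds
    cited in the text. Returns an annual interest rate for qualifying
    small communities, or None when the standard CWSRF rate applies."""
    if population < 2_500 and median_household_income < 45_000:
        return 0.00  # zero percent interest
    if 2_500 <= population <= 10_000 and median_household_income < 38_000:
        return 0.01  # 1 percent interest
    return None      # standard CWSRF interest rate applies

rate = small_community_rate(1_800, 40_000)  # qualifies for the 0 percent tier
```

Encoding the tiers this way also makes plain why outreach matters: a community just above a threshold faces a step change in borrowing cost, which is one reason states pair these tiers with technical assistance.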
According to the EPA and state officials we interviewed, demand for CWSRF support for both point and nonpoint source projects will grow considerably in the future, and states will likely alter their CWSRF allocation strategies in response to shifting clean water needs and priorities. Among the factors these officials cite in predicting changes in states’ allocation strategies are (1) aging wastewater infrastructure needing rehabilitation or replacement; (2) population growth and redistribution; (3) changes in EPA enforcement priorities, particularly with regard to limiting sewage discharges during wet weather conditions; (4) pressure to implement EPA’s TMDL program; and (5) stricter EPA and state water quality standards for temperature, nutrients, and sediments. Officials in all 10 EPA regional offices and a number of state officials told us that the need to repair or replace aging wastewater infrastructure will be a major driver of future demand for CWSRF resources. These officials point out that many of the wastewater treatment plants and conveyances built with federal support in the early 1970s in response to the passage of the Clean Water Act are now reaching the end of their useful lives. EPA data indicate that wastewater treatment plants typically have an expected useful life of 20 to 50 years before they require expansion or rehabilitation. Wastewater conveyances such as pipes and sewers have life cycles that can range from 15 to over 100 years. In addition, some wastewater systems on the East Coast still rely on pipes that are almost 200 years old. Taking into account the need to repair or replace these aging systems, a 2002 Congressional Budget Office analysis estimated that between 2000 and 2019, $260 to $418 billion will be needed for wastewater infrastructure, while current spending is approximately $10 billion per year. CBO’s analysis suggests that the gap between current and needed spending could be as high as $11 billion per year. 
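The CBO figures cited above imply a straightforward annualized comparison. The arithmetic below is a back-of-the-envelope check only; CBO's actual analysis accounts for timing, financing costs, and other factors:

```python
# Annualize the CBO estimates cited above (2000 through 2019 = 20 years).
total_need_low, total_need_high = 260e9, 418e9  # total need, dollars
years = 20
current_annual_spending = 10e9                  # approximate current spending

annual_need_low = total_need_low / years        # $13.0 billion per year
annual_need_high = total_need_high / years      # about $20.9 billion per year

# Gap between needed and current annual spending at the high end:
gap_high = annual_need_high - current_annual_spending  # about $10.9 billion
```

The high-end gap of roughly $11 billion per year matches the figure reported in the text.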
In addition to repairing or replacing existing infrastructure, EPA officials predict that some states will face increased demand for new wastewater treatment systems in response to population growth. In addition to overall population growth, EPA also indicates that the existing U.S. population is shifting geographically, requiring rapid increases in wastewater treatment capacity in certain areas. EPA officials indicated that some states in the West—such as Utah and Nevada—and the South—such as Georgia and Florida—are already experiencing rapid population growth and considerable pressure to expand existing treatment capacity. In addition, EPA officials point out that in the near-term, some states along the Gulf Coast will have to balance the need for new growth with demand to replace or repair wastewater infrastructure that was damaged by recent hurricanes. In response to recent EPA wet weather policies and enforcement actions, some state and EPA officials predict that a number of states will experience increased demand for CWSRF assistance to address combined sewer overflows (CSO), which are discharges of untreated wastewater from a combined sewer system. Combined sewer systems collect and transport both sanitary sewage and storm water runoff in a single-pipe system to a wastewater treatment facility. Constructed prior to the 1950s, combined sewer systems exist in primarily older, urban communities in the Northeast, Middle Atlantic, Midwest, and Northwest. An overflow typically occurs when the total wastewater and storm water flow exceeds the capacity of the system and, by design, discharges directly into a receiving water body. Pollutants in CSOs have been shown to be a major contributor to nonattainment of water quality standards and may pose significant public health and pollution threats. As such, EPA has selected these problems as national enforcement priorities. 
Sixty percent of the more than 9,000 combined sewer systems nationwide serve communities of fewer than 10,000 people—the very communities that face some of the most difficulty in raising capital to address environmental infrastructure. States have already used almost $5 billion of CWSRF funds to correct CSOs, and EPA recently reported to Congress that an additional $50 billion is required nationwide. Officials in some Midwestern states—such as Michigan and Minnesota—predict that addressing CSOs will be one of the biggest drivers of demand and that funding these projects will become a higher priority in the future. According to officials in EPA’s Chicago and Atlanta Regional Offices, some states facing major CSO problems—such as Indiana and Kentucky—have indicated that the CWSRF will be a primary source of funding for their long-term CSO management plans. State and EPA officials also point out that demand for CWSRF support for nonpoint source pollution control projects is likely to grow as states begin projects to bring impaired waters into compliance with EPA’s TMDL program. A TMDL is a calculation of the total maximum amount of a pollutant that a body of water can receive each day and still meet water quality standards. Water quality standards are set by states, territories, and tribes and identify the uses for each body of water, such as drinking water supply, contact recreation (swimming), and aquatic life support (fishing). States generally determine if a body of water is meeting standards by comparing monitoring data with applicable state water quality criteria. If the body of water fails to meet applicable federal, state, or local water quality standards, then the state is required to list that water as impaired. EPA guidance provides that the state should then develop a TMDL implementation plan that specifies reductions necessary to achieve the standard and then eventually implement a cleanup plan.
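In EPA regulations and guidance, the TMDL calculation described above is expressed as the sum of wasteload allocations to point sources, load allocations to nonpoint sources and natural background, and a margin of safety. A minimal sketch (the pollutant and loading figures are hypothetical):

```python
def tmdl(wasteload_allocations, load_allocations, margin_of_safety):
    """Compute a TMDL as EPA expresses it:
        TMDL = sum(WLA) + sum(LA) + MOS
    where WLAs are allocations to point sources, LAs are allocations to
    nonpoint sources and natural background, and MOS is a margin of
    safety accounting for uncertainty."""
    return sum(wasteload_allocations) + sum(load_allocations) + margin_of_safety

# Hypothetical daily phosphorus budget (pounds/day) for an impaired stream:
# two permitted dischargers, two nonpoint categories, and a safety margin.
daily_cap = tmdl([120.0, 80.0], [300.0, 150.0], 65.0)  # 715 lb/day
```

Because the load allocations for nonpoint sources are often the largest terms in the budget, implementing a TMDL frequently means funding nonpoint source projects, which is why states anticipate growing CWSRF demand from this program.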
According to EPA guidance, the state implementation plan should specify which pollution sources will be restricted to meet water quality standards. State and EPA officials indicate that a majority of standards violations relate to nonpoint sources of pollution and, subsequently, a number of TMDL projects address nonpoint sources of water pollution. For example, Minnesota CWSRF officials told us that they believe 86 percent of the pollution in their impaired waters emanates from nonpoint sources of pollution. According to some state and EPA officials, many states are considering the CWSRF as a major source of funding, given the amount of resources required and the overall costs of implementing the plans. In a similar vein, EPA and state officials also pointed out that stricter federal, state, and local water quality standards will continue to drive up demand for CWSRF loans for both point and nonpoint source projects. For example, according to officials in EPA’s Philadelphia Regional Office, stricter biological and nutrient standards in the recent Chesapeake Bay Agreement will drive demand for CWSRF loans in Mid-Atlantic states. Officials in Minnesota told us they are experiencing a surge in demand for CWSRF loans to repair or replace individual failing septic systems due to greater attention and more stringent enforcement by state and county regulators. Officials in EPA’s Seattle Regional Office point out that efforts to protect the region’s endangered salmon and bull trout through the Endangered Species Act may force wastewater treatment plants to upgrade their treatment efforts and local municipalities to address nonpoint sources of pollution. These officials predict that tougher temperature and sediment standards in waters receiving effluent will drive demand, especially for nonpoint source projects, in states such as Washington, Idaho, and Oregon.
EPA and the states use a uniform set of financial and environmental measures to help determine efficient and effective use of CWSRF resources. EPA and state-level officials rely on three measures to assess financial performance: a set of national financial indicators, an annual Program Evaluation Report (PER) conducted by the cognizant EPA regional office for each state CWSRF program, and an annual independent financial audit of the state program. Efforts to measure the environmental benefits of states’ CWSRFs are relatively new and generally center on EPA’s recently developed electronic Environmental Benefits Reporting System. Since the CWSRF program’s inception, all states have used similar measures to evaluate CWSRF financial performance. The first measure, EPA’s National Financial Indicators, comprises five individual indicators. According to an EPA headquarters official responsible for these indicators, the agency developed them in conjunction with the states to provide a balanced approach to understanding the different objectives of CWSRF financial performance. According to a senior EPA headquarters official, CWSRF project-summary information, reported by the states in the National Information Management System, is used to calculate the indicators on a state-by-state and national level. The indicators include the following:

Return on Federal Investment estimates how many dollars in environmental investment have been generated for every federal dollar spent through the program.

Ratio of Executed Loans to Funds Available for Loans (often referred to as the “pace” at which loans are made) measures the cumulative dollar amount of executed loan agreements relative to the cumulative dollar amount of funds available for loans. It is one indicator of how quickly funds are made available to finance CWSRF-eligible projects.

Ratio of CWSRF Loan Disbursements to Executed Loans measures the speed at which projects are proceeding toward completion by comparing the cumulative dollar amount of CWSRF loan disbursements with the cumulative dollar amount of executed loan agreements and expressing this as a percentage.

Estimated Additional CWSRF Loans Made Due to Leveraging estimates the dollar amount of additional projects that have been funded, that otherwise might not have been, had leveraged bonds not been issued. This is done by comparing the cumulative amount of CWSRF executed loans with the cumulative amount of funds available after subtracting the net funds provided by issuing bonds.

Sustainability of the Fund gauges how well the CWSRFs are maintaining their invested or contributed capital, without making adjustments for loss of purchasing power due to inflation.

EPA’s second measure to evaluate effective and efficient use of CWSRF dollars is its annual review and accompanying written PERs conducted by EPA’s regional offices of each state program. According to EPA’s annual review guidance, the review is intended to, among other things, (1) evaluate the success of the state’s performance in achieving goals and objectives identified in its Intended Use Plan (which identifies the intended uses of the amounts available to its CWSRF) and the state’s Annual Report (which describes how the state has met the goals and objectives of the previous fiscal year as identified by the Intended Use Plan), (2) determine how the CWSRF is achieving the intent of the Clean Water Act, (3) assess the financial status and performance of the fund, and (4) evaluate progress in identifying the environmental and public health benefits of the program. The review, based on the information collection and evaluation process, ends with the issuance of the PER. EPA’s third measure is the annual financial audit.
The Clean Water Act requires the 51 state-level CWSRF programs to undergo these audits to determine whether the CWSRF financial statements are presented fairly in all material respects in conformity with Generally Accepted Accounting Principles (GAAP) and whether the state has complied with the laws, regulations, and the provisions of CWSRF capitalization grants. The audit, conducted under the Single Audit Act, focuses on the state’s overall CWSRF program, rather than individual capitalization grants awarded to states by EPA. In addition, independent audits are conducted in 43 states by auditors contracted by the state; EPA’s Office of Inspector General currently conducts audits for the remaining eight programs. Quantifying an environmental program’s financial transactions is an inherently more straightforward exercise than quantifying its environmental benefits. Nonetheless, the EPA Office of Water’s Environmental Indicator Task Force has been developing environmental indicators for the CWSRF since at least 1991. This task force, composed of federal and state officials, identified obstacles to measuring benefits and shared ideas for solutions. It attempted to develop key environmental indicators, such as the number of pounds of pollutants removed from wastewater treatment plant effluent. However, a number of obstacles prevented collection of comprehensive environmental benefits measurements—most notably (1) a lack of baseline environmental data and (2) technical difficulties in attributing benefits specifically to the CWSRF. EPA headquarters officials also explained that environmental monitoring activities are not an allowable use of CWSRF funds, even as an administrative expense. Despite these complications, the requirements of the Government Performance and Results Act and EPA’s own Strategic Plan have long recognized the need for outcome-based measures for the agency’s programs.
Moreover, according to EPA headquarters officials, recent reviews by the Office of Management and Budget (OMB) and EPA’s Office of Inspector General provided further impetus to quantify environmental outcomes of the CWSRF. In particular, a 2004 EPA Office of Inspector General report criticized the program for not developing a comprehensive plan for measuring results and recommended that such a plan be developed. In a similar vein, OMB’s Program Assessment Rating Tool (PART) review of the CWSRF cited its inability to link dollar expenditures with environmental improvements. In response, representatives of a state-EPA work group and of the Association of State and Interstate Water Pollution Control Administrators (assisted by an EPA contractor) developed the Environmental Benefits Reporting System in July 2005. This system strives to capture anticipated environmental benefits that are expected to result from CWSRF-funded projects. The system does not require any environmental monitoring, focusing instead on anticipated environmental benefits. According to EPA headquarters officials, all 51 programs have agreed to use the system to report the environmental benefits of their CWSRF-funded projects and must report on all loans made from capitalization grants received after January 1, 2005. By July 2005, states were able to enter data about anticipated environmental improvements to bodies of water resulting from CWSRF- funded projects. Unlike the National Information Management System data, which is submitted by the states each year in the aggregate, the environmental benefits data is submitted on a per-project basis, at the time of loan execution. As of February 2006, 42 states have begun using it to report CWSRF-supported projects, including nonpoint source projects. Some states are attempting to go beyond EPA’s requirements by gathering data on actual environmental benefits from their CWSRF-funded projects, including nonpoint source projects. 
Washington State, for example, recently began requiring applicants to monitor the environmental impact of all CWSRF projects 3 to 5 years after project completion. Between 2001 and 2003, Oklahoma conducted water quality monitoring on 19 receiving streams, both upstream and downstream of CWSRF-funded improvements to remove pollutants and increase dissolved oxygen in effluent entering the streams. However, the study could not determine the extent to which these particular projects improved overall water quality in the streams, largely because baseline environmental data were unavailable. Other states are going beyond the minimal requirements of the EPA system by estimating the degree to which pollution is prevented by specific CWSRF-funded projects. Delaware CWSRF officials, for example, explained that since 2000 they have used estimates of the amount of pollutants a proposed CWSRF project would remove from the waste stream to develop the state’s Project Priority List. As another example, according to EPA’s Seattle regional officials, Oregon has begun to award additional points to CWSRF project applicants (thus increasing the priority of the project) if they agree to conduct their own environmental monitoring and evaluation. As EPA and the states have long known, quantifying environmental programs’ benefits with any degree of precision is a challenging exercise. Nonetheless, their efforts to do so regarding the CWSRF are particularly important, given the sizable investment of both federal and state dollars in the program. EPA reviewed a draft of this report and provided technical comments, which have been incorporated. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date.
At that time, we will send copies of this report to appropriate congressional committees; interested Members of Congress; the Administrator, Environmental Protection Agency; and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff need further information, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. GAO’s review focused on the following questions: To what extent are states currently using their Clean Water State Revolving Funds (CWSRF) to support conventional wastewater treatment plant construction versus other qualifying expenses? What strategies do states use to allocate their CWSRF dollars among qualifying expenses? What measures do states use to ensure that their allocation strategies are resulting in the most efficient and effective use of their CWSRFs? To determine the extent to which states are currently using their CWSRFs to support conventional wastewater infrastructure versus other qualifying expenses, we summarized data from the Environmental Protection Agency’s (EPA) National Information Management System (NIMS), the database EPA uses to track expenditures for all 51 CWSRF programs. To assess the reliability of the NIMS data, we interviewed knowledgeable EPA officials regarding EPA’s procedures for collecting NIMS data from states and monitoring the quality of data submitted by states. We also reviewed EPA-issued guidance for states inputting data to the NIMS database. Based on these interviews and guidance we determined that the data about the usage of CWSRF dollars were sufficiently reliable for the purposes of this report. 
Moreover, CWSRF programs must comply with the Single Audit Act and Generally Accepted Accounting Principles (GAAP) and undergo independent financial audits. However, we determined that data about the number of CWSRF loan agreements were of less certain reliability for identifying the exact percentage of loan agreements devoted to each qualifying expense, given that states vary in the way they account for the number of loan agreements. For example, states do not use common standards to report the numbers of projects supported by a loan agreement, such as the number of projects that are point source versus nonpoint source in nature. Therefore, in figure 2, we reported data about the number of loan agreements with appropriate caveats. To examine the strategies states use to allocate their CWSRF dollars among qualifying expenses, we interviewed EPA and state-level agency officials and reviewed annual reports and other official EPA and state-level documents. These interviews included officials at EPA headquarters, in all 10 EPA regional offices, and selected state-level agency officials. We conducted field visits to Delaware, Minnesota, Montana, North Carolina, Ohio, Texas, and Washington to obtain detailed information about CWSRF allocation strategies. We selected the states using a number of factors, including the following: geographic diversity, to accommodate variation in water quality issues; diversity in the total amount of CWSRF support; diversity in CWSRF-supported projects, to include states that do and do not support nonpoint source projects with CWSRF dollars and states that support varying or unique types of wastewater or nonpoint source projects; and a balance of states with and without an Integrated Project Priority Setting System.
Balancing these criteria, our selected states allowed us to make the following field visits: seven states in 6 of the 10 EPA regions; the second largest program (Texas) and the second smallest (Delaware); five states that supported nonpoint source projects, to varying degrees; and four states with an Integrated Project Priority Setting System and three states with a traditional project prioritization system. These field visits and the documents provided by state-level officials allowed us to include information on a broad range of criteria states use to prioritize projects and determine funding. During these field visits, we conducted interviews with state-level CWSRF program officials and selected recipients of CWSRF loans. To gather information on additional states, we conducted semistructured phone interviews with EPA officials from all 10 regional offices, and we followed up with selected state-level CWSRF officials to discuss allocation strategies and other aspects of their programs. We used these interviews to identify the role the EPA regional offices may have in shaping the state-level CWSRF programs and to gather information on regional trends and EPA initiatives regarding the CWSRF. We also reviewed each state’s most recent EPA-conducted annual CWSRF Program Evaluation Review. To examine how states ensure that their allocation strategies result in the most efficient and effective use of their CWSRFs, we interviewed EPA and state officials about the financial and environmental measures they use to assess CWSRF performance. The examination of the most recent Program Evaluation Review also provided information on the financial and program performance of each state’s CWSRF. In addition, we reviewed EPA’s electronic CWSRF Environmental Benefits Reporting System by interviewing the contractor that designed it and other knowledgeable EPA and state-level officials regarding the process and mechanisms that states use to input data. 
We conducted our work between July 2005 and April 2006 in accordance with generally accepted government auditing standards. The following tables (tables 2-6) and figure (fig. 9) present selected Clean Water State Revolving Fund (CWSRF) financial data. In addition to the individual named above, Steven Elstein, Assistant Director; Mark Braza; Greg Marchand; Tim Minelli; Justin L. Monroe; Jonathan G. Nash; Alison O’Neill; and Amber Simco made key contributions to this report.
Communities will need hundreds of billions of dollars in coming years to construct and upgrade wastewater treatment facilities, sewer systems, and other water infrastructure. To finance these efforts, they will rely heavily on low-interest loans from the Environmental Protection Agency's (EPA) Clean Water State Revolving Fund (CWSRF) program to supplement their own funds. Through fiscal year 2005, states have used their CWSRFs to provide communities with over $52 billion for a variety of water quality projects. The Clean Water Act allows states to use their CWSRFs to (1) construct or improve conventional wastewater infrastructure, (2) control diffuse (nonpoint) sources of pollution such as agricultural runoff and leaking septic systems, and (3) protect federally designated estuaries. Given the states' flexibility in determining how to spend CWSRF dollars, GAO was asked to examine (1) the extent to which states use their CWSRF dollars to support conventional wastewater treatment infrastructure versus other qualifying expenses, (2) the strategies states use to allocate their CWSRF dollars among qualifying expenses, and (3) the measures states use to ensure that their allocation strategies result in the most efficient and effective use of CWSRF dollars. EPA reviewed a draft of this report and provided technical comments, which were incorporated. Since 1987, states have used 96 percent (about $50 billion) of their CWSRF dollars to build, upgrade, or enlarge conventional wastewater treatment facilities and conveyances. Projects to build or improve wastewater treatment plants alone account for over 60 percent of this amount, with the remainder supporting the construction or rehabilitation of sewer and storm water collection systems. CWSRF assistance for nonpoint source activities represents only 4 percent (about $2 billion) of CWSRF dollars, although it accounts for over a quarter of all CWSRF projects financed.
To date, 37 states report using some portion of their CWSRF funds to directly support nonpoint source activities. Nationwide, 23 percent of CWSRF funds (64 percent of all CWSRF loan agreements) were devoted to water quality projects in communities with populations of less than 10,000 people. The 50 states (and Puerto Rico) have used a variety of strategies to allocate CWSRF funds to meet their individual needs. For example, the state of Washington sets aside 20 percent of its CWSRF dollars to support nonpoint source projects, while Alabama state law defines only traditional public wastewater treatment facilities as appropriate projects under its CWSRF program. Other states have designed their programs to target selected types of borrowers. Pennsylvania, for example, has targeted borrowers in small or rural communities during the allocation process. According to EPA and state officials, states' allocation strategies may change as certain states' priorities and clean water needs shift. Among the reasons are (1) aging wastewater infrastructure in need of rehabilitation or replacement; (2) population growth and redistribution; (3) changes in EPA enforcement priorities; and (4) stricter EPA and state water quality standards for temperature, nutrients, and sediments. EPA and the states use a uniform set of financial and environmental measures to help determine efficient and effective use of CWSRF resources. Financial measures include, among others, return on federal investment, the pace at which available funds are loaned, and the sustainability of the fund. EPA regional officials conduct annual reviews of each state program to help ensure the fiscal integrity of the state programs. All programs are also subject annually to independent financial audits. To measure environmental outcomes of CWSRF-funded projects, in fiscal year 2005, EPA developed an electronic benefits reporting system that all 51 programs have agreed to use. 
Currently, the system collects data only on anticipated environmental benefits associated with CWSRF-funded projects. However, to varying degrees, some states such as Oklahoma and Washington are attempting to gather data on actual environmental benefits from their CWSRF-funded projects, including nonpoint source projects.
China’s December 2001 accession to the WTO resulted in commitments to open and liberalize its economy and offer a more predictable environment for trade and foreign investment in accordance with WTO rules. U.S. investment and trade with China have grown significantly over the past 10 years, and trade between China and the United States exceeded $180 billion in 2003, based on U.S. trade data. Consequently, China was the United States’ third-largest trading partner in 2003. U.S. goods exported to China increased by 29 percent, to $28.4 billion in 2003 from $22.1 billion in 2002. U.S. imports from China are also rising and reached $152.4 billion in 2003. According to 2003 U.S. trade data, the U.S. trade deficit with China ($124 billion) is larger than that with any other U.S. trading partner. The U.S. government’s efforts to ensure China’s compliance with its WTO commitments are part of an overall U.S. structure to monitor and enforce foreign governments’ compliance with existing trade agreements. USTR has primary responsibility for monitoring and enforcing trade agreements. Among other things, USTR is required by law to identify any foreign policies and practices that constitute significant barriers to U.S. goods and services, including those that are covered by international agreements to which the United States is a party. At least 16 other agencies are involved in these monitoring and enforcement activities, but USTR and the Departments of Commerce, State, and USDA have the primary responsibilities for trade agreement monitoring and enforcement. Each of these four key agencies we reviewed has within its organizational structure a main unit that focuses on China or the greater Asian region. These main units have primary responsibility for coordinating the agencies’ China-WTO compliance activities, although numerous other units within the agencies are also involved.
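The growth and deficit figures cited above follow directly from the reported trade values. As a quick arithmetic check, the short sketch below (in Python; the variable names are illustrative, not from the report) reproduces the cited 29 percent export increase and $124 billion deficit:

```python
# Arithmetic check of the trade figures cited above, in billions of
# U.S. dollars (per 2003 U.S. trade data). Variable names are illustrative.
exports_2002 = 22.1   # U.S. goods exported to China, 2002
exports_2003 = 28.4   # U.S. goods exported to China, 2003
imports_2003 = 152.4  # U.S. imports from China, 2003

# Percent increase in exports from 2002 to 2003
export_growth = round((exports_2003 - exports_2002) / exports_2002 * 100)
print(export_growth)  # 29, matching the reported 29 percent increase

# Bilateral goods trade deficit: imports minus exports
deficit_2003 = round(imports_2003 - exports_2003)
print(deficit_2003)  # 124, matching the reported $124 billion deficit
```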
The main units routinely draw on assistance from experts in these other units to obtain information and expertise as needed. Additionally, the key agencies have units in China or at the WTO, and staff in those overseas units are also involved in the agencies’ compliance activities. Table 1 lists the main units with China-WTO responsibilities, as well as examples of other offices with which the units coordinate on an intra-agency basis. In its 2002 and 2003 reports to Congress on China’s WTO compliance, USTR reported that China had successfully implemented many of its numerous WTO commitments, including rewriting hundreds of trade-related laws and regulations and making required tariff reductions. Nevertheless, USTR’s reports identified over 100 individual compliance problems concerning China’s implementation of its WTO commitments, according to our analysis. These problems spanned all commitment areas and ranged from very specific, relatively simple problems— such as late issuance of particular regulations—to broader concerns over transparency in China’s rule-making process, which are more difficult to implement and assess. Most compliance problems identified in 2002 persisted into 2003. U.S. officials noted this continuation was an indicator that China was able to address the more easily resolvable problems during 2002 but that the remaining issues had proven to be more difficult for China to address. USTR reported that China had mixed success in resolving compliance problems in 2002 and 2003. Additionally, new problems emerged in 2003, with many of them arising from phased-in commitments that China was due to implement in 2003. In 2004, USTR and the other key agencies continued to pursue resolution of problems and noted many positive developments in resolving a number of these outstanding compliance issues. USTR noted several areas in which China has successfully implemented its commitments since joining the WTO in December 2001. 
China’s WTO commitments, which are described in over 800 pages of legal text, are broad in scope and range from general pledges for how China will reform its trade regime to specific market access commitments for goods and services. In 2002, USTR reported that China reviewed more than 2,500 trade-related laws and regulations for WTO consistency, repealed or amended nearly half of these, and issued many new laws and regulations. China also restructured government ministries with a role in overseeing trade, embarked on an extensive educational campaign on the benefits of WTO membership, and made required tariff reductions. USTR reported that, in 2003, China also took steps to correct systematic problems in its tariff-rate quota regime for bulk agricultural commodities, reduced capitalization requirements in certain financial sectors, and opened up the motor vehicle financing sector (see table 2). During this period of reform in China, U.S. exports to China rose 48 percent between China’s WTO accession in 2001 and 2003. Although China made progress in realizing many of its WTO commitments, USTR reported over 100 compliance problems in 2002 and 2003, according to our analysis. These problems spanned all areas in which China had made commitments, and many problems identified in 2002 persisted through 2003. We found USTR’s annual reports to Congress to be the most complete official U.S. source of information with which to analyze China’s WTO compliance in 2002 and 2003. In conjunction with China’s 2001 accession to the WTO, USTR was mandated to identify compliance with commitments and annually report these findings to Congress. These annual reports incorporate a broad range of input from key federal agencies and the business community. We systematically cross-checked the reports with industry testimony, industry association reports, and other U.S. government documents and found the reports to be fair and complete representations of industry concerns. 
Further, USTR officials stated that these reports represented the most complete public summary of China- WTO compliance problems that the U.S. government is monitoring and the actions taken to resolve the issues. As a result, we relied extensively on the narrative descriptions of compliance problems set forth in the USTR reports to analyze the number, type, and disposition of the problems that the U.S. government was working to resolve. Our analysis of USTR’s 2002 and 2003 reports to Congress identified 106 individual compliance problems. China’s compliance problems can be the result of several factors, from political resistance to lack of technical capacity to problems of resources and coordination among Chinese ministries. These compliance problems fell within all nine broad areas of China’s trade regime and varied from import regulation to export regulation (see table 3 and app. II for descriptions of these nine areas). China’s WTO commitments are broad and complex, and compliance problems also ranged in scope from specific issues to more general concerns. For example, some commitments require a specific action from China, such as reporting information about China’s import-licensing requirements to the WTO. Other commitments are less specific in nature, such as those that confirm China’s general obligation to adhere to WTO principles of nondiscrimination in the treatment of foreign and domestic enterprises. Accordingly, compliance problems identified as of 2003 also ranged from specific, relatively simple issues, such as the late issuance of regulations, to broader and more crosscutting concerns, such as concerns about judicial independence, which are more difficult to implement and assess. It is important to note that not all problems equally affect U.S. exports to China and that some problems are more easily resolved than others. 
For example, weak intellectual property rights enforcement, which may entail industry losses of nearly $2 billion according to some industry estimates, could affect more trade than the late issuance of regulations. Thus, while USTR’s reports identify priority areas, the economic importance of many individual problems cannot be easily quantified or reported, nor did we attempt to calculate the importance of, or otherwise prioritize or rank, the problems in our analysis. In addition, our analysis of business views on China’s implementation shows that the business community expected intellectual property rights commitments to be the most difficult to put into practice, whereas it expected tariff reductions to be relatively easier to implement. We found that about two-thirds of the USTR-identified compliance problems persisted from 2002 through 2003. U.S. officials noted that this continuation might be attributed to the fact that China resolved the more easily implemented commitments during the first year and that the remaining “holdover” issues proved more difficult to address. Other U.S. officials and industry representatives cited both Severe Acute Respiratory Syndrome (SARS) and major political and bureaucratic transitions in China as contributing to the apparent slowdown in implementation. However, USTR stated that these factors did not excuse the apparent deceleration in China’s implementation in 2003. In addition to the problems that persisted from 2002, about a quarter of all compliance problems were new in 2003, with many of these problems arising from phased-in commitments that China was due to implement in 2003. China has had mixed success in resolving compliance problems. Based on our assessment of USTR’s 2002 and 2003 reports, we found that China had resolved or made some progress on just under half of the individual problems described by U.S. officials.
We also found that China’s progress in resolving compliance problems also varied widely by area. For example, among the key areas that USTR identified as priorities, China resolved or made progress on well over half of the various problems in agriculture and services, while progress on intellectual property rights was limited to less than a quarter of the individual problems reported as of December 2003. Since USTR’s December 2003 report, the U.S. government has continued to pursue resolution of China’s WTO compliance problems. Notably, the United States and China reached agreements in several key areas through the Joint Commission on Commerce and Trade (JCCT). As discussed in more detail later, the JCCT is a high-level government-to-government consultative forum for China and the United States to discuss key trade issues. The April 2004 JCCT meeting resulted in the formation of working groups, several memoranda of understanding and letters of intent, and several more specific agreements to improve China’s implementation. For example, China agreed to take steps to strengthen intellectual property rights enforcement and agreed to indefinitely suspend implementation of discriminatory computer standards and services rules, according to USTR officials. China also announced the publication of rules granting foreign companies trading rights in China ahead of schedule. (See table 4.) In July 2004, USTR announced that the United States and China reached an agreement to resolve a dispute over China’s discriminatory value-added tax refund policy for semiconductors. This agreement followed the United States’ March 2004 filing of the first WTO case by any member against China. USTR’s next report on China’s compliance is due in December 2004. Compared with 2002, U.S. actions in 2003 to resolve compliance problems reflected a strategy that emphasized high-level bilateral engagement with China. 
For example, the United States sent more cabinet and subcabinet level delegations to China in 2003 and elevated existing and initiated new trade dialogues with China. We found that formal and informal interagency coordination on these bilateral efforts was generally effective. Multilaterally, the United States continued to engage China in the WTO at regular council and committee meetings throughout the year. At the same time, the United States actively participated in the WTO’s second annual review of China’s implementation, referred to as the Transitional Review Mechanism (TRM). However, despite U.S. officials’ hopes to the contrary, overall WTO member participation in the review declined, and the review’s potential was impaired by less timely U.S. submission of questions. Furthermore, procedural and other types of problems that arose during the 2002 review continued to limit the effectiveness of the 2003 TRM. Nevertheless, both WTO and other WTO member trade officials indicated that the TRM process had gone more smoothly in 2003 than in 2002 and that future TRMs would probably not vary in form from that used in 2003. In general, U.S. officials noted that, despite some benefits, the TRM was a less effective tool for resolving compliance issues compared with bilateral engagement. U.S. government efforts to resolve WTO compliance issues with China in 2003 reflected an emphasis on high-level bilateral engagement. Several U.S. officials noted that bilateral engagement—particularly at the highest levels of government—had proven to be the most effective means of resolving WTO compliance issues with China. As one U.S.
official stated, “change in China starts at the top, so that’s where we focused much of our bilateral compliance activity in 2003.” Accordingly, the United States undertook a range of efforts reflecting this emphasis in 2003, including sending more cabinet and subcabinet level delegations to China, utilizing bilateral consultative mechanisms, and continuing to coordinate policy through the interagency process. Compared with 2002, the U.S. government sent more cabinet and subcabinet delegations from the key economic and trade agencies in 2003 to engage their Chinese counterparts on trade issues. For example, senior- level delegations to China from the various agencies increased from 13 in 2002 to 23 in 2003, according to information provided by U.S. embassy officials. U.S. officials also said that an increased number of high-level delegations from China, including a visit from China’s Premier, also came to the United States in 2003 and that trade issues were routinely a part of those visit agendas. Finally, embassy officials noted that, because the SARS outbreak interrupted travel to China for several months in 2003, most of the delegations’ visits were concentrated within an 8-month period. The U.S. government utilized two formal consultative mechanisms to address trade issues with China, both of which further demonstrated an emphasis on high-level, bilateral engagement. First, the United States agreed to China’s request to elevate and transform the JCCT, a forum for dialogue on bilateral trade issues and a mechanism to promote commercial relations, to include three cabinet-level U.S. officials for 2004. Consequently, in 2004, the Secretary of Commerce and the U.S. Trade Representative headed the JCCT meetings for the United States, while a vice premier headed China’s delegation. The U.S. Secretary of Agriculture also participated in the newly elevated JCCT. Moreover, U.S. 
officials noted that the JCCT was transformed from a trade promotion dialogue into a mechanism to resolve trade disputes. Second, the United States initiated the U.S.-China Trade Dialogue as a means for U.S. trade and economic agencies to address trade issues with various Chinese officials at the subcabinet level. The United States created the Trade Dialogue at the end of 2002, with meetings scheduled to take place quarterly. However, due in part to the SARS outbreak, only two such dialogues took place in 2003. We found that both formal and informal, day-to-day coordination within and among the key units on policy issues was generally effective. Formal interagency coordination was accomplished through three main structures: (1) the Trade Policy Review Group, (2) the Trade Policy Staff Committee, and (3) the Trade Policy Staff Committee’s Subcommittee on China-WTO Compliance. Additionally, several officials noted that the National Security Council played a greater role in coordinating interagency policy on China compliance issues in 2003 than 2002. Officials said that this greater role was to ensure a more unified U.S. position with regard to economic relations with China, beyond the Trade Policy Review Group. Our interviews with over 50 staff and managers in the main China units of the four agencies indicated that interagency coordination was generally effective at the working level and on specific issues. With respect to informal contact, managers and staff at both headquarters and overseas offices said that day-to-day coordination and information sharing among the agencies on China compliance greatly enhanced their ability to respond to compliance problems. Yet some staff believed that interagency coordination could be improved. For example, they suggested that coordination could be enhanced through better communication from the Washington, D.C., units to the China units about interagency meetings and China-related activities at the WTO. 
Also, within China, some staff noted that interagency meetings had been suspended in 2003 because of SARS and had not been resumed by the end of the year, which hindered their coordination efforts. Some China-based staff at State and Commerce complained that a lack of communication had led to misunderstanding about the respective units’ roles and responsibilities and that this had caused some confusion about engaging the Chinese in a few cases. Embassy officials told us that the interagency meetings had resumed in 2004 but were driven primarily by the need to coordinate on upcoming events and had not yet settled into a consistent schedule. During China’s membership negotiations, the United States successfully pushed for an annual review of China’s implementation to take place within the WTO’s General Council and 16 subsidiary bodies. This effort was based on concerns about China’s ability to implement its WTO commitments and the fact that China was allowed to join before making all of its trade-related laws and regulations WTO-consistent. Compared with the 2002 review, the 2003 TRM was less contentious but reflected less WTO member participation and less timely U.S. submission of questions. Despite the TRM’s continued limitations, and although procedures for the TRM are unlikely to change for future reviews, U.S. officials cited benefits from using this multilateral forum as part of their overall approach for monitoring and enforcing China’s compliance. U.S. officials said they put more emphasis on engaging China outside of the TRM in regular WTO meetings in 2003. Additional multilateral monitoring may occur when China undergoes a separate WTO review of its trade policies, expected possibly in early 2006. U.S., WTO Secretariat, and other WTO member government officials noted that there was less debate about how to conduct the TRM in 2003 than in 2002.
As we reported previously, the initial TRM did not result in the thorough and detailed review of China’s compliance that U.S. officials had envisioned. Chinese officials told us that while they will abide by their TRM commitments, they view the TRM as a discriminatory mechanism that was imposed on China during their WTO membership negotiations. With this as the prevailing sentiment from China, the 2002 review was marked by contention between China and some of the other WTO members regarding the form, timing, and specific procedures for the TRM. The United States and some other members were disappointed that China refused to provide written answers to members’ written questions in advance of TRM meetings. Additionally, some members were disappointed that the review did not result in any conclusions or recommendations regarding China’s implementation. Following the conclusion of the first TRM, U.S. and European Union officials stated that they would seek improvements for subsequent reviews. However, in 2003 officials concluded that there would have been little use in reopening the previous year’s debates about the procedures for the TRM given the lack of specificity in China’s WTO commitments regarding procedural aspects of the review. U.S. officials and some other members noted that, because any changes would require consensus from all members, China would most likely have blocked any attempt to clarify the TRM procedures regarding written responses and furthermore would not likely approve a WTO report with recommendations regarding China’s implementation. Thus, in 2003 there were no formal proposals from WTO members for changing the TRM, although there were informal discussions among some committee chairpersons regarding overall procedures. Because there was less debate regarding procedures, U.S., WTO, and foreign officials told us that the 2003 TRM went more smoothly, compared with the previous year. 
However, as in 2002, China did not respond in writing to member questions during the 2003 review, nor did the TRM result in a WTO report with conclusions or recommendations. U.S., WTO, and other foreign officials told us that they expected future TRM reviews to operate similarly to the 2003 review, with no substantive changes in procedures or outputs. USTR officials told us they expect fewer issues to be taken up in future TRM reviews, after China revises and issues various laws and regulations as its remaining commitments are phased in.

Although the United States continued to take a leading role in the TRM, participation by other WTO member governments decreased between the 2002 and 2003 reviews despite U.S. officials’ hopes for members’ increased involvement. For example, the number of WTO members that submitted written questions to China in advance of the TRM meetings declined from 11 in 2002 to 7 in 2003. Similarly, the number of WTO members that asked questions or made statements during the TRM meetings decreased from 23 to 11 over the same time period. WTO Secretariat and member government officials we interviewed cited several possible reasons for the decreased participation in the 2003 TRM. Some developing country WTO members stated that they viewed the TRM as mainly a political tool for developed country WTO members to put pressure on China and that the TRM was of little use to them in terms of raising and resolving trade issues with China. Additionally, other WTO member governments were less active in the 2003 TRM because those governments elected to focus on engaging China bilaterally on trade issues.

Compared with 2002, we found that the United States’ submission of questions to China was less timely in 2003. USTR officials told us that part of their overall strategy for the 2003 TRM was an internal deadline to submit questions to China 4 to 6 weeks in advance of the TRM meetings to ensure that China had enough time to prepare responses.
On average, the United States submitted questions to China 34 days in advance of the committee meetings in 2002. However, in 2003, the United States submitted questions only 9 days in advance of the meetings, on average. In a few of the committee meetings in 2003, the Chinese representative stated that he was unable to prepare and provide answers to questions that were received just prior to the meetings. The timeliness of China’s submissions to the various committees also affected TRM proceedings. China’s accession agreement describes various types of information that China is required to submit to the WTO subsidiary bodies in advance of the TRM, but the agreement does not set forth specific timelines for the submissions. In 2003, China’s submissions predated the TRM meetings by an average of 6 days, compared with an average of 16 days in 2002. During the meetings, some members commented that they were unable to prepare a complete set of questions since they had not had sufficient time to review China’s submissions. (See app. III for more details on the 2002 and 2003 TRMs.)

U.S. officials acknowledged the continuing limitations of the TRM in 2003 but cited three major benefits of the review: (1) the TRM increased China’s transparency on trade issues, (2) the TRM resulted in a useful exchange of information and fostered better coordination among key Chinese ministries, and, most importantly, (3) the TRM provided the United States with a formal multilateral forum for raising compliance problems. First, U.S. officials stated that the TRM was an effective way to urge China to disclose information about its implementation in a formal, public multilateral forum. Officials said it was important to demonstrate to China that the United States and other concerned members would be actively seeking information about China’s implementation on an annual basis.
Second, several U.S., WTO Secretariat, and other member government officials said that China sent more experts from the relevant ministries to attend the TRM in 2003, and many officials stated that this had resulted in a more effective exchange of information during the reviews. Further, U.S. and foreign officials, including China’s ambassador to the WTO, indicated that the TRM process was effective in helping China’s main trade ministry, the Ministry of Commerce, gain cooperation and coordination from other Chinese ministries that might not have understood the problems or might have been reluctant to cooperate otherwise. Third, U.S. officials said that the TRM provided the United States with an opportunity to highlight specific areas of concern about China’s implementation and obtain an official, public position from China on key issues. U.S. officials further noted that, although the TRM was never intended to supplant the dispute settlement process, the TRM could help lay the groundwork for any potential areas where the United States would initiate a WTO dispute settlement case with China. U.S. officials also said that part of the U.S. multilateral strategy for resolving compliance problems with China in 2003 was to raise issues with China during other WTO committee meetings outside of the TRM. Regular WTO business takes place in the subsidiary bodies mentioned above, which formally meet anywhere from one to four, or more, times a year, and the United States is a very active participant. Established WTO practice holds that members are to respond in writing to each other’s questions that are submitted through the normal (i.e., not TRM) WTO committee structure. Additionally, U.S., WTO, and other foreign officials noted that China is generally cooperative during regular, non-TRM WTO meetings. The degree to which members (including the United States and China) review and question each other’s laws, regulations, and trade practices varies by committee. 
WTO Secretariat officials told us that, compared to what they had observed in some of the TRM meetings, similar or even more technical information was routinely exchanged between members in a few of the committees—like the Committee on Antidumping Practices—whereas such exchanges were relatively rare in other committees, like the Committee on Trade-Related Investment Measures. U.S. and other officials also pointed out that the WTO’s Trade Policy Review Mechanism would provide an additional opportunity for a meaningful review of China’s trade policies. The Trade Policy Review Mechanism, which is unrelated to the TRM, provides for a broad review of all WTO members’ trade policies and practices, trade policy-making institutions, and macroeconomic conditions. Each member undergoes these reviews on a scheduled basis, and the frequency of an individual member’s review depends on its share of world trade. Based on its total volume of trade, China is expected to undergo the review every 2 years, although the exact timing of China’s initial review has yet to be determined, according to WTO Secretariat officials. While the Trade Policy Review is not a review of members’ implementation of WTO commitments, the review does provide an opportunity for members to submit questions to and receive written responses from the reviewee. The reviews also result in a summary report that describes the findings of the review.

Although the key agencies’ formal plans address trade monitoring and enforcement activities, it is difficult to assess the effectiveness of the agencies’ China-WTO compliance efforts based on their performance management reports. Planning and measuring results are important components of ensuring that government resources are used effectively to achieve the agencies’ goals. Good planning and management links overall agency goals to individual unit activities and priorities.
USTR, Commerce, State, and USDA’s plans reflect China-WTO compliance efforts, albeit to varying degrees and in different ways. However, in most cases, we found weaknesses in these key agencies’ performance management efforts that prevented the agencies from providing a clear or accurate assessment of their performance in this regard. Moreover, the specific units within the agencies that are most directly involved with China compliance activities lacked specific strategies for ensuring that they supported their agency’s goals, and they did not measure their units’ results.

The Government Performance and Results Act of 1993 (GPRA) requires federal agencies to engage in a results-oriented strategic planning process. GPRA requires agencies to set multiyear strategic goals in their strategic plans and corresponding annual goals in their performance plans, measure performance toward the achievement of those goals, and report on their progress in their annual performance reports. These reports are intended to provide important information to agency managers, policymakers, and the public on what each agency accomplished with the resources it was given. Moreover, GPRA calls for agencies to develop performance goals that are objective, quantifiable, and measurable and directs agencies to establish performance measures that adequately indicate progress toward achieving those goals. Thus, GPRA requires agencies to report on program performance for the previous fiscal year, based on their established goals and measures. Agencies are to compare performance with the established goals, summarize findings of program evaluations, and revise or describe the actions needed to address any unmet goals. Agencies have flexibility in establishing goals and in using performance measures, as long as they reflect the major activities carried out as part of their particular missions.
Furthermore, with Office of Management and Budget (OMB) concurrence, agencies can express their performance goals for particular programs in an alternative form when they are not able to define goals in an objective and quantifiable form, as long as it allows for actual performance to be compared to the goal. Our previous work has noted that a lack of clear measurable goals makes it difficult for program managers and staff to link their day-to-day efforts to achieving the agency’s intended mission. Lastly, good planning and performance measurement at both the overall agency and unit levels enhances program oversight and is a critical component to effective and informed decision making. Consistent with GPRA’s requirements, the four key agencies set long-term and annual goals that address China compliance efforts in their most recent strategic and performance plans; however, the degree of specificity can and does vary in these goals. USTR and State (China-mission level) plans include China-specific goals related to their WTO compliance efforts, whereas USDA and Commerce include their China-WTO compliance efforts within broader goals of monitoring and enforcing WTO agreements and ensuring market access for U.S. companies. More specifically, USTR’s most recent strategic plan, which spans fiscal years 2000 to 2005, includes a general goal related to monitoring and enforcing trade agreements, while the 2004 performance plan includes a specific annual performance goal for USTR to monitor and review China’s implementation of WTO commitments to ensure compliance. State’s agencywide strategic planning documents describe broad goals for creating open markets and supporting U.S. businesses, while the 2004 Mission Performance Plan for the overseas posts in China is linked to these broad goals and sets forth a related performance goal specific to China. 
Although the most recent Commerce and USDA planning documents do not include specific goals relating to China’s WTO compliance, the plans do include more general strategic and performance goals for ensuring fair trade and enforcing existing trade agreements, which, according to agency officials, broadly reflect their China-related activities. Agencies’ strategic and performance goals related to China-WTO compliance are summarized in table 5.

We found that it was not possible to clearly determine the outcome of the key agencies’ China-WTO compliance efforts based on the agencies’ performance reports. Agencies should, at a minimum, have objective, measurable (and, under GPRA, preferably quantifiable) measures that allow for accurate evaluation of key agency programs, which we believe could include those covering China trade compliance. Based on GPRA’s provisions, we found problems in USTR, State, and Commerce’s assessment of program performance relevant to China-WTO compliance activities. For USDA, we found it was difficult to determine the effectiveness of the agency’s efforts with regard to China-WTO compliance because that agency chose goals and measures that were not specific to China or monitoring and enforcement, but agency officials did demonstrate how their China activities contributed to their performance measurement. Table 6 summarizes the key agencies’ relevant performance measures and the results they have reported.

USTR reported its 2003 results in the President’s Annual Report on the Trade Agreements Program, but not in a measurable way that compares the agency’s performance against a predetermined annual target or objective. USTR’s performance plan identifies a quantifiable, measurable indicator of performance specific to China’s compliance, namely, the number of trade problems resolved and the number pending.
These measures, if used, would have allowed for numerical evaluation of USTR’s China compliance activities if a target outcome was chosen. However, the aforementioned report only provides a narrative description of the status of China’s compliance problems and the U.S. responses; and as such, the report does not address USTR’s performance measure and thus does not allow for a clear measurable assessment of whether USTR is achieving its intended China compliance goal. We believe sound performance management requires an agency to specifically address its performance measures when it reports results. Furthermore, USTR’s performance measures should have been accompanied by targets that would have allowed the agency to clearly report results. For example, USTR could set a target to resolve some percentage of high priority compliance problems, or more generally, to eliminate some particular outstanding problems. Instead, USTR’s FY 2004 Performance Plan and FY 2002 Annual Performance Report states, “It is difficult to predict with accuracy whether or not implementation/ negotiation will be completed in any one year.” Furthermore, despite setting forth quantifiable measures in its performance plan, USTR officials said that they did not believe it was appropriate to quantify their performance results because of the many intangible factors that affect the interpretation of results, especially the various weights of different problems in terms of trade importance. As noted earlier, while quantitative measures are preferred under GPRA, GPRA provides agencies the flexibility, when appropriate, to use alternative (that is, nonquantifiable) measures—such as descriptive statements—as long as they allow for an accurate and independent determination of whether the agency is meeting its intended goal. 
Nevertheless, since USTR did establish quantitative measures, sound performance management would have dictated that it establish targets and report the results related to those measures. Commerce has established reasonably objective, quantifiable measures for its China-WTO compliance related efforts, but there are potential weaknesses in the reliability of the data used to judge results, as noted by the Commerce Inspector General. Commerce’s three related measures are the numbers of market access and compliance cases (1) initiated, (2) concluded, and (3) dollar value of trade addressed. The measures apply generally to all market access and compliance cases, but they also include information specific to China. Commerce uses a centrally maintained database to track market access and compliance cases it is working to resolve. Commerce has taken several important steps to improve the quality of the database, including providing training to staff on how to use the database, creating a users’ manual, and overseeing the timeliness and completeness of staff entries. However, some staff we interviewed noted that the quality of information in the Trade Compliance Center database is dependent on how thorough staff members are in entering information. Staff said that certain types of crosscutting issues or company-specific problems are still not always entered into the database. Others noted that information in the database on some issues is often incomplete, which raises new concerns about the reliability of the data. Commerce collects information on China market access and compliance cases, and agency officials provided us data that demonstrated the extent to which China- related cases contribute to the agency’s overall performance measures. (See table 7 for the China-specific results.) We reviewed State’s 2004 Mission Performance Plan for China because, unlike the overall agencywide plan, it explicitly addressed China-WTO compliance activities. 
Although the plan sets forth broad baseline indicators and targets, the mission plan does not indicate progress toward achieving the mission’s goals in this regard. The mission plan includes two relevant strategies to accomplish its performance goal relating to integrating China into the world economic system: one specifically related to monitoring China’s WTO compliance and another related to promoting U.S. economic interests. Furthermore, although the mission plan provides a useful discussion of the tactics proposed to achieve each of these strategies, we found the mission’s two annual performance indicators do not allow for quantifiable, measurable results. In its agencywide plan, State uses performance indicators for trade-related goals that do not specifically target either China or monitoring and enforcement-related activities. These worldwide indicators focus on concluding various types of negotiations, the acceptance of biotechnology in the agricultural sector, and the adoption of favorable international telecommunication practices. State’s report does not include how these measures apply to specific countries. USDA’s plan includes several relevant quantifiable measures, as well as a useful discussion of the means and strategies that the agency employs to increase international marketing opportunities for U.S. agricultural exporters. Furthermore, in several instances USDA reports its efforts in China are part of its strategy for achieving its goals. Because the agency chose goals and measures that were neither specific to China nor to monitoring and compliance, it is not possible to determine the effectiveness of the agency’s efforts with China-WTO compliance from its performance reports. However, agency officials were able to demonstrate that their China activities contributed to their performance measurement. 
Nevertheless, we found that several of the measures USDA uses to assess performance against the broad goal of expanding international marketing opportunities can be significantly impacted by external factors that affect trade in general. USDA’s fiscal year 2004 plan includes a brief list of the factors that may impact the agency’s progress toward achieving the goals, but the discussion of those factors does not present the agency’s strategies for mitigating those potential effects. As a result, it is difficult to determine the extent to which performance results are attributable to agency efforts or to external factors. USDA officials said that they understood their measures were problematic and said the agency was in the process of developing more effective measures, including country-specific performance measures. Agency officials told us about the substantial high-level effort they make to establish and follow an aggressive strategy to ensure China’s continued implementation of its WTO commitments. Furthermore, officials engage in significant interagency planning and regularly adjust priorities at the most senior levels in order to achieve results. However, these strategies and priorities are not reflected in agencies’ performance management activities that help guide lower level unit activities. Although GPRA requirements do not apply specifically to the planning activities of individual units, unit-level planning and performance reporting is essential to an agency’s oversight of its key programs. Such performance management activities help managers focus their efforts and resources on long-term priorities in the face of ongoing short-term exigencies. Furthermore, we found that the lower level units most directly involved with China-WTO compliance activities do not establish longer term annual unit-level objectives or priorities for their unit’s activities. 
Managers in all four key agencies said they did not set specific measurable performance goals or objectives for their units in support of agency overall performance management goals, nor did they set their own priorities and align resources to those priorities in any unit-level plan. Instead, managers indicated that their units’ priorities were adjusted frequently to respond to compliance problems as they arose, depending on the level and number of companies complaining about a compliance problem, the amount of trade affected by a problem, the scope and magnitude of a problem, and which issues the Administration or Congress was focused on at the time, among other considerations. Units undertook various activities as needed to support these changing priorities. Many managers and staff believed that there was very little their units could do to predict which areas of China’s implementation would falter and that the units needed to remain flexible in order to respond to any compliance problems that might arise or to take advantage of any opportunity to solve a compliance problem. They believed this despite the fact that many of the compliance problems that have arisen to date have persisted since China’s accession. This approach was reflected in our interviews with staff. Most staff said prioritization of their unit’s activities was informal and ad hoc; no staff reported a formal prioritization scheme for addressing compliance issues. Furthermore, many staff and some managers were unable to articulate longer-term performance plans for their unit’s efforts. Among those familiar with the relevant performance goals, managers and staff at the four key agencies also said that they believe their agencies’ existing performance measures do not fully capture their units’ activities.
For example, Commerce staff noted that much of the work they do regarding trade capacity building programs with the Chinese government and outreach to the private sector is not included in the database used to measure their performance in monitoring and enforcement, although those efforts can have a positive effect on China’s compliance. Because the units’ activities are not clearly tied to agency performance management efforts, China unit managers are not able to assess the results of their unit’s activities and use this information to guide future work. Managers could not comment on whether they had achieved predetermined objectives for any one year or specifically how their unit contributed to their agencies’ overall performance goals. In 2003, the key agencies continued to add resources to meet the demands of monitoring China’s compliance with its WTO commitments, especially in headquarters units. However, we found that high rates of planned and unplanned staff turnover in the main China units presented challenges to the agencies’ compliance efforts. Despite anticipated staff turnover in the units we examined, staff in those units lacked the opportunity to receive specific training related to carrying out their assigned responsibilities. Instead, the units generally relied on on-the-job training (OJT) for new staff. Consequently, staff with relatively short rotations in units focused on China’s WTO compliance spent a significant portion of their total tenure in the office getting up to speed on complex China trade issues. In response to the increased responsibilities arising from China’s WTO membership, USTR, Commerce, State, and USDA increased staff resources at both headquarters offices and in China. The estimated number of FTE staff in the units most directly involved with China-WTO compliance efforts across the four key agencies increased from about 25 to 58 between fiscal years 2000 and 2003. 
Staff in the main China-related headquarters units in Washington, D.C., increased at a rate of about 3 to 1 over China-based units over the same period, and over 70 percent of the staff resources were located within the agencies’ headquarters units by 2003. Commerce added the largest number of staff, as estimated FTE staff increased from about 9 to 35 between fiscal years 2000 and 2003. (See table 8.)

The 2004 Appropriations Act for Commerce, Justice, and State provided for additional staff increases and funds for the U.S. government’s China compliance efforts. Specifically, the Congress called for Commerce to reorganize and dedicate more resources to China compliance efforts by, among other things, establishing an enforcement office within the Market Access and Compliance division to provide legal and investigative assistance to companies seeking to enforce their rights under existing trade agreements and reorganizing the Import Administration to include an office that deals specifically with antidumping cases involving China and other nonmarket economy countries. The Congress also called for USTR to dedicate more resources to China trade issues by adding three positions in the agency’s main China trade unit and six other positions in other offices that have a role in monitoring and enforcing China’s trade commitments. Notably, these intended changes would continue the trend of increasing staff at headquarters instead of at the agencies’ field units.

Although the key agencies (excluding USTR) have many staff located in China, a relatively small proportion of those staff have a direct role in the U.S. government’s China-WTO compliance activities. Officials in various overseas units assist the agencies’ China-WTO compliance efforts, but this assistance is not a primary component of their responsibilities.
For example, Commerce had 23 officers located in China in fiscal year 2003, but only four officers had explicit China-WTO compliance responsibilities; the other officers were Commercial Officers, and their primary duties involved trade promotion. Similarly, only about a quarter of the 26 Foreign Service officers and staff within the Beijing embassy’s economic section focused primarily on China-WTO compliance in 2003. About the same proportion of USDA’s Foreign Agricultural Service (FAS) officers in China had an explicit role in China-WTO compliance issues, while the other officers were primarily focused on promoting and facilitating U.S. agricultural exports.

Relatively high rates of planned and unplanned staff turnover in several of the main China units with primary responsibility for monitoring and enforcing China’s WTO commitments presented challenges for the agencies’ China compliance efforts. Managers and staff in the units we reviewed cited several negative effects of turnover on their units’ compliance efforts. Turnover across all executive branch agencies averaged 5.8 percent in fiscal year 2003, but turnover in several of the agencies’ main China units was significantly higher. For example, between fiscal years 2000 and 2003, the average annual turnover rate in the Office of China Economic Area at Commerce was about 25 percent, and the rate was about 32 percent in USTR’s 3- to 5-person China office over the same period. Additionally, according to State data, six of the eight staff (75 percent) in the section that oversees the embassy’s China compliance efforts turned over in 2002 alone. Lastly, although turnover has not been an issue in USDA’s Asia and the Americas Division, staff noted that because of the small size of the office, staff departures could create a substantial loss of institutional memory. In some instances, turnover in the units is part of a planned staffing process.
For example, a core principle of State’s staffing model is to create generalists who can serve in any overseas mission. Consistent with this objective, most entry- and mid-level Foreign Service officers are rotational and change posts every 2 to 4 years. USDA’s FAS officers are subject to minimum 3-year rotations. Additionally, in 2003, two of the five staff in USTR’s China office were temporary detailees from other agencies, and these staff typically rotate back to their home agency after 1 year. In other cases, staff turnover resulted from unplanned staff separations, such as when staff left to take positions in another agency or in the private sector. Lastly, we previously reported that the core officials that actively participated in China’s WTO accession negotiations had changed jobs or left the government by 2002. Managers and several staff in the key headquarters and field units said that turnover had generally negative effects on the units’ activities. First, several officials said that turnover in the units meant that new staff sometimes did not have sufficient time to develop expertise on complex China trade issues before they rotated to another position or left the agency. Consequently, staff with relatively short rotations (1 to 3 years) in China compliance-focused units spent a significant portion of their tenure learning the issues rather than focusing on actively resolving compliance problems. Second, other officials noted that this problem is compounded because outgoing and incoming staff sometimes only overlap for a brief period, if at all. For example, all officers and staff in two units at State that were involved in China-WTO issues were scheduled to rotate at the same time in 2004, so there would likely be little or no overlap with their successors. Lack of overlap between transitioning staff requires incoming staff to learn their assigned portfolio of issues without the benefit of guidance from their predecessors. 
Third, one embassy official pointed out that staff turnover makes it difficult for officers to effectively establish and cultivate contacts with their counterparts in the Chinese government.

The main China units at the four key agencies lacked specific training relevant to executing China-WTO compliance responsibilities, or, to the extent that the agencies offered specific training, staff generally lacked sufficient opportunities to receive it. Additionally, as noted in our previous reports, some agencies’ efforts continued to be hampered by shortfalls in Chinese language training. We found that agencies relied almost exclusively on OJT to give new staff the skills necessary to do their jobs. About half of the staff and managers we interviewed in the main headquarters and field units indicated that formal training opportunities for staff were limited or that additional training would enhance their units’ effectiveness. Our model of strategic human capital management emphasizes the importance of structured training as a means to develop and retain staff and describes the important linkages between training and effectively attaining an agency’s strategic and performance goals. However, none of the units we reviewed offered or required staff to take part in formal training curricula related to carrying out the mission of the unit. In some cases, the agencies offered trade-related training courses, but staff in each of those offices said that their opportunities to take those courses were limited by time and workload constraints. For example, State’s Foreign Service Institute offers several courses on trade issues, including trade agreement implementation, the WTO dispute settlement process, trade law, and trade and environment issues. While officers in the China embassy’s WTO Group and Commerce’s Trade Facilitation Office took part in the course on trade agreement implementation, the officers we interviewed noted that they had taken few, if any, other courses.
Furthermore, USTR officials told us that USTR only hires experienced personnel, who do not need training. A factor that illustrates the importance of training is that many vacancies in the main China units are filled by junior and mid-level staff who would benefit from more training. For example, in Commerce’s Office of China Economic area, 11 of the 17 new staff hired between fiscal years 2001 and 2003 were at the GS-9 level or below. Similarly, mid- and junior-level officers (FS-03 and lower) filled five of the eight positions at the WTO unit at Embassy Beijing in fiscal year 2003. Embassy officials, including the Deputy Chief of Mission, noted the need for greater expertise and experience among officers at the post to deal with complex China trade issues. Even staff with prior experience working in China or on trade issues who were hired or rotated into China-trade units indicated that they would benefit from training on issues other than China-WTO compliance, such as training on writing cables and briefing papers. We previously reported on the shortfalls of foreign language skills, including gaps in Mandarin Chinese at State’s overseas posts and within Commerce’s Foreign Commercial Service. Despite State’s recent improvement in addressing this shortfall, many staff we interviewed noted that Chinese language training opportunities were limited, even for State Foreign Service officers. At the same time, managers and staff in the key units said that, while Chinese language is not essential for all positions, it is difficult to effectively engage their Chinese counterparts on complex trade issues without having sufficient language skills. Embassy staff said that due to the heavy visitor schedule and workload, they found it difficult to consistently take advantage of the language instruction available at the post. 
Furthermore, they noted that 1-year rotational officers who have not had adequate language training filled many positions in the embassy’s WTO Group. All of the units we reviewed relied almost exclusively on OJT to acquaint staff with how to carry out their China-WTO compliance responsibilities. Without formal guidance regarding their responsibilities, many staff said they generally relied on colleagues and supervisors for further direction. Although OJT is essential to developing expertise on complex China-WTO trade issues, it cannot ensure that new staff have all the information they need to perform their duties. Our previous work notes that effective utilization of human capital is best achieved through a comprehensive mix of both formal training and OJT. Additionally, we identified other problems with the agencies’ reliance on OJT. For example, we found that inconsistencies in how the main units track and share information on China compliance can limit the effectiveness of OJT. Not only is tracking information an important aspect of the overall monitoring and enforcement process, but it can also help mitigate the effects of turnover and is an important OJT tool for acquainting new staff with their assigned portfolio of responsibilities. Staff across the four key agencies said that there was little or no internal guidance about the types of information that should be collected and how the information should be compiled and shared. Finally, as one high-level embassy official noted, relying on OJT can significantly add to the workload of more senior staff, who must be diverted from their own portfolios in order to provide informal guidance to new staff. Ensuring China’s compliance with its WTO commitments is a continuing priority for the U.S. government. The complexity, breadth, and ongoing nature of many of the problems that have arisen to date demonstrate the need for a cohesive and sustained effort from the key U.S. 
agencies to monitor and enforce China’s implementation of WTO policies. The key agencies have done much to enhance their capacity to carry out these efforts by coordinating on policy issues and increasing staff resources. However, there are three areas in which USTR, Commerce, State, and USDA should take steps to improve these efforts and maximize the effectiveness of the resources allocated to the task of securing the benefits of China’s membership in the WTO. First, while U.S. monitoring and enforcement activities in 2003 reflected increased high-level bilateral engagement by executive branch officials, some multilateral efforts did not achieve their full potential. Specifically, the WTO’s annual TRM was intended to be a thorough review of China’s implementation, but many U.S., WTO, and foreign officials agree that the mechanism has limitations. Nevertheless, the TRM and the benefits it provides could be enhanced by increased member participation and more timely U.S. preparation, which would improve the chances for full and informed responses from Chinese officials and maximize the potential exchange of information. Thus, even with a continued U.S. emphasis on bilateral and other multilateral engagement, the TRM can continue to provide an important avenue to pursue U.S. trade interests. Second, the U.S. government’s China-WTO compliance efforts would benefit from increased emphasis on planning and performance management within each of the key agencies. While we acknowledge that unit managers need to be flexible when reacting to compliance problems, setting clear unit priorities and measurable goals that support overall agency objectives need not reduce their flexibility. To the contrary, GPRA and our substantial body of work on planning emphasize the importance and usefulness of developing unit and program-level plans and measures that are connected to an agency’s overall mission. 
We acknowledge the challenges of developing measurable goals, given the extent to which external factors can influence agencies’ trade compliance efforts; however, we believe that it is possible to better measure results annually. Third, we found that these agencies have opportunities to better manage their human capital involved in the U.S. government’s China-compliance activities. Specifically, in an environment of high and regular staff turnover, new staff are called upon to take up monitoring and enforcement activities that involve complex, long-term issues. New staff’s effectiveness and efficiency are reduced when no formal training is available to help them with their day-to-day activities, and when staffing gaps mean they cannot learn from their more experienced predecessors. Increased management attention to providing an adequate mix of OJT and formal training can help ensure that new employees have the necessary tools for doing their jobs well. To improve multilateral engagement with China on WTO compliance issues, we recommend that the U.S. Trade Representative (USTR) take steps to maximize the potential benefits of the Transitional Review Mechanism (TRM). These steps could include establishing and meeting internal deadlines to submit written questions to the Chinese delegation 4 to 6 weeks or more before each TRM and coordinating with other WTO members to increase participation in the review. Additionally, we recommend that the USTR and the Secretaries of Commerce, State, and Agriculture (USDA) take steps to improve performance management pertinent to the agencies’ China-WTO compliance efforts. Specifically, USTR should set annual measurable predetermined targets related to its China compliance performance measures and assess the results in its annual performance reports. The Secretary of Commerce should take further steps to improve the accuracy of the data used to measure results for the agency’s trade compliance-related goals. 
The Secretary of State should require the China mission to assess results in meeting their goals and report this information as part of the annual mission performance plan. The Secretary of USDA should further examine the external factors that may affect the agency’s progress toward achieving its trade-related goals and present the agency’s strategies for mitigating those potential effects. Furthermore, the head of each agency should direct its main China compliance units to set forth unit plans that are clearly linked to agency performance goals and measures, establish unit priorities for their activities, and annually assess unit results to better manage their resources. Further, we recommend that USTR and the Secretaries of Commerce, State, and USDA undertake actions to mitigate the effects of both anticipated and unplanned staff turnover within the agencies’ main China-WTO compliance units by identifying China compliance-related training needs and taking steps to ensure that staff have adequate opportunity to acquire the necessary training. These actions could include determining which of the agencies’ existing courses would be appropriate for staff, determining what types of external training are available, developing training courses on relevant issues, and establishing a plan and timelines for existing and new staff to receive training. We provided draft copies of this report to the Office of the U.S. Trade Representative and the Departments of Commerce, State, and Agriculture, and we received written comments from all four agencies (the agencies’ comments and our specific responses are reproduced in appendixes IV through VII). USTR and Commerce also provided technical comments, which we incorporated as appropriate. In general, the agencies noted they would consider our recommendations, but they raised various concerns and provided additional information for our consideration. 
USTR, Commerce, and State expressed similar concerns about our analysis of the scope and disposition of the compliance problems presented in USTR’s 2002 and 2003 reports on China’s WTO compliance. The agencies emphasized the importance of developments in resolving compliance problems that occurred in 2004 and believed that our characterization of the disposition of compliance problems was potentially misleading. We generally agreed with these comments and updated the report to provide more equal treatment of 2004 developments and modified the presentation of our analysis of the disposition of China’s compliance problems. Our responses to the agencies’ specific concerns on these issues are presented in appendixes IV through VI. USTR, Commerce, and USDA also made specific comments that our report did not adequately reflect extensive high-level strategic coordination efforts among the key agencies, and they provided additional information regarding these efforts. We modified the report to include further discussion of high-level strategic planning efforts and clarified that our assessment and recommendations focus on the agencies’ performance management efforts. The agencies expressed related concerns about the challenges associated with quantitatively measuring performance of their monitoring and enforcement efforts. State and USDA indicated that efforts are under way in those agencies to improve aspects of their performance planning and measurement, while USTR responded that its performance measurements were adequate and complied with GPRA and OMB guidance. We maintain that our assessments of the agencies’ performance management are accurate, especially in light of OMB guidance (set forth in OMB Circular No. A-11), and, moreover, that our recommendations, if implemented, would aid in better management of the U.S. government’s efforts to ensure China’s compliance. USTR commented that our discussion of the TRM overlooked the U.S. 
government’s efforts to engage China outside of the TRM through the regular WTO committee structure, and disagreed with our conclusion that greater lead time for U.S. TRM submissions to China would increase the potential for fuller oral responses from China. We amplified our discussion of U.S. multilateral efforts in the WTO, but we continue to believe that USTR should take steps to maximize the potential of the TRM, which would include providing greater lead time in submitting TRM questions to China. Commerce, State, and USDA indicated that training is a priority and that training opportunities exist for staff in the agencies’ China units. Furthermore, State believed our criticisms concerning training were overstated and did not take into account various structural constraints faced by that department. USTR said that USTR staff did not require training since the agency only hires experienced staff. We acknowledge that training opportunities, including OJT, do exist in the agencies and that many staff have extensive backgrounds on China trade issues. Nevertheless, we continue to believe that a more cohesive approach to training can help alleviate the effects of turnover and maximize staff effectiveness. Lastly, some of the agencies provided additional information regarding various activities and other contextual information associated with ensuring China’s compliance. To the extent that this information was within the scope of our review, we have modified the report as appropriate. We are sending copies of this report to the U.S. Trade Representative, the Secretaries of Commerce, State, and Agriculture, and interested congressional committees. We will make copies available to other interested parties upon request. In addition, this report will be available at our Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VIII. 
As part of a long-term body of work that the Chairman and the Ranking Minority Member of the Senate Committee on Finance, as well as the Chairman and the Ranking Minority Member of the House Committee on Ways and Means, requested, we examined how the U.S. Trade Representative (USTR) and the Departments of Commerce, State, and Agriculture (USDA) are positioned to monitor and enforce China’s compliance with its World Trade Organization (WTO) commitments. Specifically, in this report, we (1) examined the scope and disposition of China-WTO compliance problems that the U.S. government is working to resolve; (2) reviewed the U.S. government’s bilateral and multilateral approaches for resolving compliance problems; (3) assessed the agencies’ strategies, plans, and measures for ensuring China’s compliance; and (4) assessed how the U.S. government has adapted its staff resources to monitor and resolve China compliance problems. To examine the scope and disposition of compliance problems, we reviewed the USTR’s Report to Congress on China’s WTO Compliance from 2002 and 2003. These annual reports, mandated in conjunction with China’s 2001 accession to the WTO, incorporate a broad range of input from key federal agencies as well as the business community. We systematically cross-checked the reports with testimony and reports submitted to the Trade Policy Staff Committee, Subcommittee on China-WTO Compliance as part of its 2002 and 2003 hearings on China-WTO compliance and other relevant reports. Other reports included those issued by the U.S.-China Business Council and the U.S. Chamber of Commerce, which represent a broad cross-section of U.S. industries and companies doing business in China. We found the USTR reports to be a generally fair and complete representation of U.S. industry concerns. 
After verifying the content of USTR’s reports to the extent possible, we quantified the number of compliance problems in each area of China’s WTO commitment based on the report’s narrative descriptions of China’s compliance problems. To analyze the disposition of the compliance problems, we again relied extensively on the narrative descriptions provided in the reports to make a determination and assigned three broad categories to describe the disposition of the problems: No progress noted, some progress noted, and resolved. The determination that there was “no progress noted” on a particular problem was based on the fact that the report did not indicate that China took any action to resolve the issue after a range of enforcement strategies carried out by USTR and the other key agencies. If the reports indicated that China had undertaken actions to resolve a compliance problem, we coded this as “some progress noted.” The assessment of “some progress noted” included a range of steps that China took to address U.S. concerns, from delaying the implementation of problematic measures to exempting certain industries from WTO-inconsistent restrictions. We coded problems as “resolved” only if the report language clearly indicated that the compliance problem was resolved and the U.S. government was no longer pursuing a resolution of that particular problem. Although we note in the report that the U.S. government has indicated positive developments in resolving some of the problems in 2004, our analysis focused only on the issues raised in the 2002 and 2003 reports. Several of our staff reviewed these analyses to ensure consistency and consensus. See table 9 for our assessment criteria and examples. To assess U.S. 
bilateral and multilateral engagement strategies, we reviewed agency and WTO documents and interviewed agency officials both in Washington, D.C., and Beijing, China, as well as WTO Secretariat officials and officials from other member governments in Geneva, Switzerland, and Brussels, Belgium. To assess overall U.S. government strategy, we interviewed USTR officials in Geneva and Washington, D.C., and reviewed official testimony. Our review of the Transitional Review Mechanism (TRM) is based on analysis of official WTO documents, which include minutes and questions or comments submitted by member countries, as well as interviews with U.S. officials and with member countries’ officials in Geneva. To assess USTR, Commerce, State, and USDA strategies and plans, we examined planning documents such as annual performance reports, budget documents, and annual reviews. We reviewed each agency’s most recent performance and strategic plans to determine how China-WTO monitoring and enforcement is incorporated into the agencies’ planning process. Our evaluation of agency planning efforts was informed by our previous studies on the Government Performance and Results Act of 1993 (GPRA). We enhanced our review of this information by interviewing agency officials regarding agencywide and unit-level planning and evaluation efforts. To gather staff and management perspectives on planning and performance measures, we conducted standardized interviews within all four agencies. To assess agency resources and other activities related to China’s compliance, we reviewed the four key agencies’ planning documents, budget and staffing data, and information on training. Our evaluation was informed by past GAO studies on human capital management issues. We asked each agency to provide us with the actual number of FTE staff and staff attrition rates in key units involved in China-WTO compliance efforts for fiscal years 2002 and 2003. 
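The three-category disposition coding described earlier can be illustrated with a small sketch. Note that the keyword triggers below are hypothetical simplifications for illustration only; the actual determinations were narrative judgments made by GAO staff against the criteria in table 9, not a mechanical keyword match.

```python
# Illustrative sketch of the three-category disposition coding scheme.
# Category names mirror the report ("no progress noted", "some progress
# noted", "resolved"); the keyword triggers are assumptions, not GAO's rules.

def code_disposition(narrative: str) -> str:
    """Assign a disposition category to a compliance-problem narrative."""
    text = narrative.lower()
    # "Resolved" only when the narrative clearly indicates the problem is
    # closed and the U.S. government is no longer pursuing it.
    if "resolved" in text or "no longer pursuing" in text:
        return "resolved"
    # Any concrete action by China to address U.S. concerns counts as
    # partial progress (e.g., delaying measures, exempting industries).
    partial_indicators = ("delayed implementation", "exempted", "took steps")
    if any(indicator in text for indicator in partial_indicators):
        return "some progress noted"
    # Otherwise, no action by China was reported.
    return "no progress noted"
```

For example, under these illustrative rules a narrative stating that China exempted certain industries from a restriction would be coded "some progress noted," while one with no reported action would be coded "no progress noted."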
In some cases, agencies were unable to provide us with actual staffing numbers because some staff did not work on China issues full-time. In those instances, we asked agency officials to estimate FTE staff working solely on China compliance. To ensure that the data were reliable to the extent possible and necessary for our review, we discussed criteria for making estimates with the agencies to ensure that the estimates were consistent between agencies. We compared agency information on staffing with information we received for previous reviews and discussed changes in staffing levels with cognizant agency officials. We determined that the staffing information was sufficiently reliable for our review. We asked each agency to supply us with training documents or manuals and minutes or records from coordinating meetings. We supplemented our review of this information by conducting individual interviews with over 50 staff and unit managers from the four key agencies that had China compliance as a main portion of their work portfolio. These were standardized interviews conducted individually, with the exception of USTR, which required a group interview. We were able to interview over two-thirds of U.S. government staff in the main units at the four key agencies who work primarily on China-WTO compliance. We conducted our work in Washington, D.C., Beijing, China, Geneva, Switzerland, and Brussels, Belgium. We performed our work from July 2003 to June 2004 in accordance with generally accepted government auditing standards. USTR’s 2002 and 2003 Report to Congress on China’s WTO Compliance identified compliance problems within nine broad areas related to China’s trade regime where China had made commitments to other WTO members. Our previous work used similar categories to analyze China’s commitments. Table 10 describes the commitment categories used in USTR’s reports. 
China’s commitments to the WTO provide for an annual review, referred to as the Transitional Review Mechanism (TRM), of China’s implementation to take place within the WTO’s General Council and 16 subsidiary bodies. Under the TRM, WTO members can submit written questions to China in advance of the meetings, and address China directly during the meetings. Additionally, China’s accession agreement describes various types of information that China is required to submit to the WTO subsidiary bodies in advance of the TRM. Tables 12 and 13 list the dates of the meetings where the TRM took place in 2002 and 2003 and summarize specific information regarding WTO members’ participation in the meetings for each year. The following are GAO’s comments on USTR’s letter dated September 21, 2004. 1. We updated our draft report to include additional information on developments in 2004, including the resolution of the dispute over China’s discriminatory value-added tax refund policy for semiconductors. Since our analysis of the 2002 and 2003 USTR reports cannot be updated in the absence of the forthcoming 2004 USTR report in December, we modified our presentation about the disposition of China’s compliance problems to provide a more balanced treatment of those issues. However, we still demonstrate the scope of compliance problems, how China’s compliance problems can persist for 2 years or more, and the extent to which China’s progress in resolving these issues has been mixed. 2. We amplified our discussion of other multilateral engagement through regular WTO committees in order to put the TRM in better context. Nevertheless, after reviewing the minutes of the various TRM meetings, we continue to believe that earlier U.S. submission of questions to China in advance of the TRM meetings could increase the chances for more thorough oral responses from the Chinese delegation. We also continue to believe that U.S. 
efforts to increase other WTO members’ participation in the TRM could improve its effectiveness, despite its ongoing limitations, and support the other key agencies’ outreach efforts in this regard. 3. While USTR submits long narrative reports to Congress on China’s WTO compliance and the steps the U.S. government takes to address problems, these reports are not a substitute for a bottom-line performance management assessment of the degree to which the agency has achieved predetermined measurable annual objectives. We noted that USTR has described numerical indicators in its performance plan, yet has not set targets or measured the agency’s performance against these indicators. Although we agree that USTR has some flexibility under GPRA to establish performance measures that are not strictly quantitative, the agency specifically sets forth numerical measures in its performance plan. Accordingly, USTR should have specifically addressed these measures in its results report. Additionally, while USTR states that its approach was approved by OMB and was in compliance with GPRA, the OMB guidance requires all agencies to report a comparison of actual performance with projected, target levels of performance; without establishing targets, we believe USTR is not able to make this required comparison. We revised our draft report to clarify and further emphasize our assessment. 4. We added some discussion to better recognize these senior-level policy coordination initiatives and also clarified that our findings about planning and prioritizing were in the context of performance management. Also, we refined our observations about how the China units in the key agencies would benefit from improved performance management that institutionalized high-level priorities and planning to better guide unit-level China compliance activities, which support these initiatives. The following are GAO’s comments on Commerce’s letter dated September 21, 2004. 1. 
We updated our draft report to better reflect high-level policy coordination on China compliance efforts and to clarify our focus on unit-level performance management activities. We also included additional information on developments in 2004, including the resolution of the dispute over China’s discriminatory value-added tax refund policy for semiconductors. 2. We did not assess the organizational changes that Commerce has recently implemented to enhance its monitoring and enforcement activities, but noted that these changes were under way. Nevertheless, these changes create an opportunity for the International Trade Administration to improve performance management and human capital management along the lines we recommend in order to maximize the effectiveness of these China compliance-related offices and the staff they have hired or reassigned. The following are GAO’s comments on State’s letter dated September 2, 2004. 1. We believe that measurable indicators provide policymakers with meaningful summary information; however, the figure in our draft report was outdated and we removed it. As we indicate in our report, priorities can be established according to a number of factors, including trade volume/market value. We agree that such information might provide a more accurate measure of results. Our report points out that, while USTR’s annual China compliance reports identify “priority” areas, the economic importance of many individual problems cannot be easily quantified (as noted in State’s letter) and was not reported. As a result, we did not attempt to calculate the importance or otherwise prioritize or rank the problems in our analysis. With regard to our discussion of the disposition of problems in 2002 and 2003, we explain our methodology for categorizing what USTR has reported to Congress in detail in appendix I. 2. We added information about other multilateral activities to put the TRM in better context. 
We agree that the TRM remains an important component of China compliance activities and appreciate that continued efforts to coordinate compliance issues multilaterally, as we recommend, are sometimes difficult and are not always effective. 3. We appreciate State’s performance management challenges and welcome the intention to refine its annual planning process and to develop measurable performance goals for its China-WTO compliance activities. We reiterate that assessing and reporting annual results at the mission (or bureau) level can help ensure that unit-level activities reflect agency priorities. 4. Previous GAO reports have discussed human capital management challenges at State more thoroughly. In this report, we discuss some particular issues as they relate to China compliance efforts, including planned and unplanned turnover of staff and how the press of daily business makes staff development difficult. We believe that mitigating the effects of turnover through greater attention to training, either in or outside of the classroom, can nevertheless help while solutions to longer-term human capital challenges are being pursued. The following are GAO’s comments on USDA’s letter dated September 8, 2004. 1. Our report summarizes the various components of USDA and the other key agencies’ strategic and performance plans. We did not review the forthcoming country-specific performance measures to which USDA refers; we concur that the development and implementation of effective measures that allow for more meaningful assessments of results would be a positive step in improving USDA’s performance management. 2. Our report acknowledges that many units within USDA play an important role in the U.S. government’s China-WTO compliance efforts, and our earlier work on China-WTO compliance issues provides an overview of the various intra-agency structures. 
We did not review staff turnover in each of the units USDA listed, but we continue to assert that relatively high planned and unplanned turnover in the units we reviewed underscores the need for greater attention to staff training. Lastly, we acknowledge that USDA has existing training programs and makes opportunities available to its staff, but we maintain that the agencies should undertake a more systematic approach to ensure that staff further develop necessary job skills. In addition to those named above, Jennifer Costello, Jane-yu Li, Jamie McDonald, Valérie Nowak, Richard Seldin, and Kimberly Siegal made key contributions to this report. World Trade Organization: U.S. Companies’ Views on China’s Implementation of Its Commitments. GAO-04-508. Washington, D.C.: March 24, 2004. World Trade Organization: Ensuring China’s Compliance Requires a Sustained and Multifaceted Approach. GAO-04-172T. Washington, D.C.: October 30, 2003. GAO's Electronic Database of China’s World Trade Organization Commitments. GAO-03-797R. Washington, D.C.: June 13, 2003. World Trade Organization: First-Year U.S. Efforts to Monitor China’s Compliance. GAO-03-461. Washington, D.C.: March 31, 2003. World Trade Organization: Analysis of China’s Commitments to Other Members. GAO-03-4. Washington, D.C.: October 3, 2002. World Trade Organization: Selected U.S. Company Views about China’s Membership. GAO-02-1056. Washington, D.C.: September 23, 2002. World Trade Organization: Observations on China’s Rule of Law Reforms. GAO-02-812T. Washington, D.C.: June 6, 2002. The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
While federal IT investments can improve operational performance and increase public interaction with government, too often they have become risky, costly, and unproductive mistakes. Congress has expressed interest in monitoring and improving IT investments through hearings and other reviews over the past two decades. In response, we have testified and reported on lengthy federal IT projects that too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. Similarly, in 2010, OMB expressed concern about expansive federal IT projects that have taken years and have failed at alarming rates. OMB also noted that many projects follow “grand designs” to deliver functionality in years, rather than breaking projects into more manageable chunks and delivering functionality every few quarters. We recently reported on OMB’s progress on these reforms in GAO, Information Technology Reform: Progress Made; More Needs to Be Done to Complete Actions and Measure Results, GAO-12-461 (Washington, D.C.: Apr. 26, 2012).

Agile software development supports the practice of shorter software delivery. Specifically, Agile calls for the delivery of software in small, short increments rather than in the typically long, sequential phases of a traditional waterfall approach. More a philosophy than a methodology, Agile emphasizes this early and continuous software delivery, as well as using collaborative teams and measuring progress with working software. The Agile approach was first articulated in a 2001 document called the Agile Manifesto, which is still used today.
The manifesto has four values: (1) individuals and interactions over processes and tools, (2) working software over comprehensive documentation, (3) customer collaboration over contract negotiation, and (4) responding to change over following a plan. Appendix II provides additional information on the Agile Manifesto and its related principles.

The Agile approach differs in several ways from traditional waterfall software development, which produces a full software product at the end of a sequence of phases. For example, the two approaches differ in (1) the timing and scope of software development and delivery, (2) the timing and scope of project planning, (3) project status evaluation, and (4) collaboration.

Timing and scope of software development and delivery. In an Agile project, working software is produced in iterations of typically one to eight weeks in duration, each of which provides a segment of functionality. To allow completion within the short time frame, each iteration is relatively small in scope. For example, an iteration could encompass a single function within a multistep process for documenting and reporting insurance claims, such as a data entry screen or a link to a database. Iterations combine into releases, with the number of iterations dependent on the scope of the multistep process. To meet the goal of delivering working software, teams perform each of the steps of traditional software development for each iteration. Specifically, for each iteration, the teams identify requirements, design and develop software to meet those requirements, and test the resulting software to determine if it meets the stated requirements. In contrast, waterfall development proceeds in sequential phases of no consistent, fixed duration to produce a complete system, such as one that addresses a comprehensive set of steps to manage insurance claims. Such full system development efforts can take several years.
Waterfall phases typically address a single step in the development cycle. For example, in one phase, customer requirements for the complete product are documented, reviewed, and handed to technical staff. One or more phases follow, in which the technical staff develop software to meet those requirements. In the final phase, the software is tested and reviewed for compliance with the identified requirements.

Timing and scope of project planning. In Agile, initial planning regarding cost, scope, and timing is conducted at a high level. However, these initial plans are supplemented by more specific plans for each iteration, and the overall plans can be revised to reflect experience from completed iterations. For example, desired project outcomes might initially be captured in a broad vision statement that provides the basis for developing specific outcomes for an iteration. Once an iteration has been completed, the overall plans can be revised to reflect the completed work and any knowledge gained during the iteration. For example, initial cost and schedule estimates can be revised to reflect the actual cost and timing of the completed work. In contrast, in traditional waterfall project management, this analysis is documented in detail at the beginning of the project for the entire scope of work. For example, significant effort may be devoted to documenting strategies, project plans, cost and schedule estimates, and requirements for a full system.

Project status evaluation. In Agile, project status is primarily evaluated based on software demonstrations. For example, iterations typically end with a demonstration for customers and stakeholders of the working software produced during that iteration. The demonstration can reveal requirements that were not fully addressed during the iteration or the discovery of new requirements. These incomplete or newly identified requirements are queued for possible inclusion in later iterations.
In contrast, in traditional project management, progress is assessed based on a review of data and documents at predetermined milestones and checkpoints. Milestones and checkpoints can occur at the end of a phase, such as the end of requirements definition, or at scheduled intervals, such as monthly. The reviews typically include status reports on work done to date and a comparison of the project’s actual cost and schedule to baseline projections. Federal IT evaluation guidance, such as our IT Investment Management guidance and OMB IT reporting requirements, specifies evaluations at key milestones and annually, which more closely aligns with traditional development methods. For example, for major projects, OMB requires a monthly comparison of actual and planned cost and schedule, risk status, and annual performance measures using, for example, earned value management (EVM).

Collaboration. Agile development emphasizes collaboration more than traditional approaches do. For example, to coordinate the many disciplines of an iteration, such as design and testing, customers work frequently and closely with technical staff. Furthermore, teams are often self-directed, meaning tasks and due dates are set within the team and coordinated with project sponsors and stakeholders as needed to complete the tasks. In contrast, with traditional project management, customer and technical staff typically work separately, and project tasks are prescribed and monitored by a project manager, who reports to entities such as a program management office. See figure 1 for a depiction of Agile development compared to waterfall development.

There are numerous frameworks available to Agile practitioners. One framework, called eXtreme Programming (XP), includes development techniques. Another framework, called Scrum, defines management processes and roles. The Scrum framework is widely used in the public and private sectors, and its terminology is often used in Agile discussions.
For example, Scrum iterations are called sprints, which are bundled into releases. Sprint teams collaborate with minimal management direction, often co-located in work rooms. They meet daily and post their task status visibly, such as on wall charts. Other concepts commonly used by sprint teams are user stories, story points, and backlog.

User stories convey the customers’ requirements. A user story typically follows the construct of “As a <type of user> I want <some goal> so that <some reason>.” For example, “As a claims processor, I want to check a claim payment status so that I can promptly reply to a customer’s request for payment status.” Each user story is assigned a level of effort, called story points, which are a relative unit of measure used to communicate complexity and progress between the business and development sides of the project. To ensure that the product is usable at the end of every iteration, teams adhere to an agreed-upon definition of done. This includes stakeholders defining how completed work conforms to an organization’s standards, conventions, and guidelines.

The backlog is a list of user stories to be addressed by working software. If new requirements or defects are discovered, these can be stored in the backlog to be addressed in future iterations. Progress in automating user stories is tracked daily using metrics and tools. An example of a metric is velocity, which tracks the rate of work using the number of story points completed or expected to be completed in an iteration. For example, if a team completed 100 story points during a four-week iteration, the velocity for the team would be 100 story points every four weeks. An example of a tool is a burn-down chart, which tracks progress and the amount of work remaining for an iteration or for a release, which is made up of multiple iterations.

Agile use is reported in the private sector for small to medium-sized projects and is starting to be used for larger projects as well.
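The user story, story point, velocity, and burn-down concepts described above can be sketched in code. This is a minimal illustration only; the story text, point values, and daily completion totals are hypothetical, not drawn from any agency project:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One backlog item: 'As a <user> I want <goal> so that <reason>.'"""
    user: str
    goal: str
    reason: str
    points: int         # story points: relative effort, not hours
    done: bool = False  # meets the team's agreed definition of done

    def text(self):
        return f"As a {self.user} I want {self.goal} so that {self.reason}."

# Backlog: user stories waiting to be addressed by working software.
backlog = [
    UserStory("claims processor", "to check a claim payment status",
              "I can promptly reply to a customer's request", points=5),
    UserStory("claims processor", "to enter a new claim",
              "claims are captured at first contact", points=8),
]

def velocity(completed_points, weeks):
    """Rate of work, e.g. 100 points completed in 4 weeks -> 25.0 per week."""
    return completed_points / weeks

def burn_down(total_points, points_done_per_day):
    """Burn-down chart data: story points remaining after each day."""
    remaining, chart = total_points, []
    for done_today in points_done_per_day:
        remaining -= done_today
        chart.append(remaining)
    return chart

print(velocity(100, 4))             # 25.0
print(burn_down(13, [2, 3, 0, 5]))  # [11, 8, 8, 3]
```

Posted visibly in a team room, the burn-down series makes daily progress (and days with no progress) immediately apparent, which is the purpose the report ascribes to these metrics.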
Also, widely accepted industry guidance on software development has recently been revised to include more Agile approaches. Specifically, the Software Engineering Institute’s Capability Maturity Model® Integration updated some process areas to help those using Agile to interpret its practices. Furthermore, the federal government has begun to use Agile. For example, we have reported on several federal software development efforts that have used Agile techniques. Specifically, in December 2010 we reported that the Department of Veterans Affairs was using Agile to develop software to support a new benefit for veterans. We also reported that the Department of Defense was developing the Global Combat Support System-Joint system using Agile. In addition, the department sponsored studies that examined the possibility of more widespread use of Agile in its development projects.

We identified 32 practices and approaches as effective for applying Agile to software development projects, based on an analysis of practices identified by experienced Agile users. Our analysis also found that the identified practices generally align with five key project management activities outlined in widely accepted software development guidance: strategic planning, organizational commitment and collaboration, preparation, execution, and evaluation.

Strategic planning describes an organization’s overall plans in an Agile environment. Six practices align with strategic planning. They are:

Strive to be more Agile, rather than simply following Agile methods and steps. This approach encourages adoption of the philosophy, or mindset, rather than specific steps. This is also referred to as being Agile, or having agility versus using it.

Allow for a gradual migration to Agile appropriate to your readiness. Migration steps might include combining Agile and existing methods, conducting pilots, and preparing technical infrastructure.

Observe and communicate with other organizations implementing Agile.
For example, those starting to use Agile can consult with others who have more experience, including academic, private sector, and federal practitioners.

Follow organizational change disciplines, such as establishing a sense of urgency and developing a change vision. A clear vision of change helps staff understand what the organization is trying to achieve. Another organizational change discipline is developing communication strategies.

Be prepared for difficulties, regression, and negative attitudes. This approach reinforces that Agile is not painless and users may backslide to entrenched software methods.

Start with Agile guidance and an Agile adoption strategy. This practice advocates having these elements in place at the start, even if they must be copied from external sources.

Organizational commitment describes the management actions that are necessary to ensure that a process is established and will endure. Collaboration in Agile typically refers to the close and frequent interaction of teams. Four practices align with organizational commitment and collaboration:

Ensure all components involved in Agile projects are committed to the organization’s Agile approach. This practice encourages organizations to ensure that everyone contributing to a project understands and commits to the organization’s approach. This includes those working directly on the project and those with less direct involvement, such as those providing oversight.

Identify an Agile champion within senior management. This practice calls for someone with formal authority within the organization to advocate the approach and resolve impediments at this level.

Ensure all teams include coaches or staff with Agile experience. This practice stresses the importance of including on each team those with direct experience in applying Agile. While training is helpful, hands-on experience helps the team members learn and adjust.

Empower small, cross-functional teams.
Empowered teams of 7 to 18 people decide what to deliver and how to produce it. The teams should not over-rely on one member’s skills.

Taking certain preparatory steps prior to the start of an iteration can facilitate a rapid development pace. The following eight practices generally align with the preparation of people and processes:

Train the entire organization in your Agile approach and mindset, and train Agile practitioners in your Agile methods. For example, managers must understand the approach so that they know how it will affect them, and teams need to know the specific steps of an iteration to conduct it properly.

Ensure that subject matter experts and business team members have the required knowledge. This practice stresses that staff involved in fast-paced iterations must truly be experts in the processes being automated in that iteration in order to reduce delays. For example, a team member representing financial customers must be fully familiar with the needs of those customers.

Enhance migration to Agile concepts using Agile terms and examples. For example, use terms like user stories instead of requirements, and Agile Center of Excellence instead of Project Management Office. Provide examples, such as one illustrating the small scope of a user story to teams writing these stories.

Create a physical environment conducive to collaboration. A common practice is to co-locate the team in a single room where they can continually interact. Other ways to enhance collaboration are to reorganize office space and use tools to connect remote staff.

Identify measurable outcomes, not outputs, of what you want to achieve using Agile. An example of this practice is creating a vision statement of project outcomes (such as a decrease in processing time by a specific percent in a set time), rather than outputs (such as the amount of code produced).

Negotiate to adjust oversight requirements to a more Agile approach.
This practice notes that teams may be able to adjust oversight requirements by using frequent, tangible demonstrations to gain the trust of reviewers and investors, potentially reducing the need for more formal oversight documents.

Ensure that the definition of how a story will be determined to be done is comprehensive and objective. Comprehensiveness includes defining what constitutes a finished product (i.e., packaged, documented, tested, and independently verified). Objective means measurable or verifiable versus subjective judgment.

Make contracts flexible to accommodate your Agile approach. Contracts requiring waterfall-based artifacts and milestone reviews may not support the frequent changes and product demonstrations in iterations, and may inhibit adoption.

Execution entails the concrete steps necessary to conduct the iteration following the designated approach. The seven identified practices that align with execution are:

Use the same duration for each iteration. An example would be establishing that iterations will be four weeks each within a release to establish a uniform pace.

Combine Agile frameworks such as Scrum and XP if appropriate. Disciplines from different frameworks can be combined. For example, use project management disciplines from Scrum and technical practices from XP.

Enhance early customer involvement and design using test-driven development. Test-driven development refers to writing software code to pass a test. This practice maintains that involving customers in these tests helps to engage them in the software development process.

Include requirements related to security and progress monitoring in your queue of unfinished work (backlog). Including activities such as security reviews and status briefings in the backlog ensures their time and cost are reflected and that they are addressed concurrent with, and not after, iteration delivery.

Capture iteration defects in a tool such as a backlog.
This practice calls for queuing issues so that they are resolved in later iterations. For example, lists of unmet requirements generated at end-of-iteration demonstrations should be queued in the backlog for correction in a future iteration.

Expedite delivery using automated tools. For example, tools can track software modifications, and compliant development sites or “sandboxes” help customers conceptualize the software in an environment that meets architectural and security standards.

Test early and often throughout the life cycle. The theme of this practice is that testing during software code delivery instead of after delivery reduces risk and remediation costs.

Evaluations can occur at the project and organizational level. For example, at the project level, the iteration is reviewed at its completion in a retrospective. At the organizational level, processes are reviewed for opportunities to improve the approach. The following seven practices align with evaluation:

Obtain stakeholder/customer feedback frequently and closely. For example, feedback is obtained during the iteration and at its completion at an iteration retrospective. This practice was linked to reducing risk, improving customer commitment, and improving technical staff motivation.

Continuously improve Agile adoption at both the project level and organization level. This practice invokes the discipline of continuous improvement, meaning always looking for ways to improve. For example, improvements can be made by adding automated test and version control tools, and enhancing team rooms. These issues can be tracked in project and organizational-level backlogs.

Seek to identify and address impediments at the organization and project levels. This practice encourages organizations to be frank about identifying impediments so that they can be addressed.

Determine project value based on customer perception and return on investment.
This practice recognizes that tracking progress only against cost or schedule criteria set before the project began could lead to inaccurate measurement of progress if, for example, major changes in scope occur. Instead, Agile encourages customer feedback as one measure of progress. Comparing solution value to the cost of the solution is also a gauge of success.

Gain trust by demonstrating value at the end of each iteration. This practice includes demonstrating key requirements in early iterations, and showing customers that requirements in the backlog are delivered and not forgotten.

Track progress using tools and metrics. Progress can be tracked using tools and metrics such as burn-down charts and velocity, which can be automated, and by success indicators such as “customer delight” and reduced staff stress and overtime.

Track progress daily and visibly. This practice stresses that status is checked daily and publicly. For example, a progress chart is posted openly in the team’s workspace, with timely revisions to reflect ongoing feedback.

Officials who have used Agile on federal projects at five agencies generally agreed that the practices identified by the experienced users are effective in a federal setting. Specifically, each practice was used and found effective by officials from at least one agency. Ten of the 32 practices were used and found effective by officials at all five agencies (see table 1). Also, in most cases, a practice was still believed to be effective even if it was not used. For example, officials explained that they did not use a practice they indicated was effective because it was not appropriate for their project or that they used an alternate practice. Although the identified practices were generally described as effective, officials from three agencies each reported one practice they had used but found to be not effective.
According to the agency officials, two practices were identified as ineffective because they were difficult to implement. These practices were: (1) ensuring commitment from components and (2) negotiating oversight requirements. The third practice, striving to be Agile rather than simply following Agile methods, was described by an agency official as not effective because he believed that strict adherence was necessary for a successful project.

We identified 14 challenges with adapting to and applying Agile in the federal environment based on an analysis of experiences collected from five federal agencies that had applied Agile to a development effort. These challenges relate to significant differences in not only how software is developed but also how projects are managed in an Agile development environment versus a waterfall development environment. We aligned the challenges with four of the project management activities used to organize effective practices: (1) ensuring organizational commitment and collaboration, (2) preparing for Agile, (3) executing development in an Agile environment, and (4) evaluating the product and project. In addition to identifying challenges, federal officials described efforts underway at their agencies to address these challenges.

As described in the effective practices, Agile projects require the ongoing collaboration and commitment of a wide array of stakeholders, including business owners, developers, and security specialists. One way Agile promotes commitment and collaboration is by having teams work closely together, in one location, with constant team communication. Officials at the selected agencies identified challenges in achieving and maintaining such commitment and collaboration from their stakeholders as follows.

Teams had difficulty collaborating closely: Officials from three agencies reported that teams were challenged in collaborating because staff were used to working independently.
For example, one official reported that staff were challenged when asked to relocate to a team room because the technical staff preferred to work alone. The official added that some staff viewed open communication, such as posting project status on team room wall charts, as intrusive. A second official said that technical staff did not like constantly showing their work to customers. The third official said that customers initially did not want to see such development, preferring to wait for a polished product.

Teams had difficulty transitioning to self-directed work: Officials at two agencies reported that staff had challenges in transitioning to self-directed teams. In Agile, teams made up of customers and technical staff are encouraged to create and manage their tasks without project manager direction and to elevate issues to stakeholders who have the authority to resolve them. Cross-functionality is also encouraged to allow teams to share tasks. One official reported that teams used to direction from a project manager were challenged in taking responsibility for their work and in elevating issues they could not resolve within the team to senior officials. A second official noted that it was a challenge to create cross-functional teams because federal staff tend to be specialists in one functional area. For example, a team could include someone to represent system users, but that person may not be familiar with the needs of all users. Specifically, a team developing an insurance system might include someone with a background in claims processing; however, that person may not be experienced with payment procedures.

Staff had difficulty committing to more timely and frequent input: While Agile advocates frequent input and feedback from all stakeholders, four agency officials noted challenges in committing to such input expectations.
One agency official noted that individuals were challenged to commit to keeping work products, such as schedules, updated to reflect the status of every iteration because they were not used to this rapid pace. A second official stated that teams initially had difficulty maintaining the pace of an iteration because they were used to stopping their work to address issues rather than making a decision and moving on. A third official said that it was challenging to incorporate security requirements at the rapid pace of the sprint. A fourth official said customer availability was a challenge because customers initially did not understand the amount and pace of the time commitment for Agile and needed to develop a mindset to attend meetings as well as frequently review deliverables.

Agencies had trouble committing staff: Three agency officials reported being challenged in assigning and maintaining staff commitments to projects. The frequent input expected of staff involved in projects requires a more significant time commitment than that required for waterfall development projects, which allow more sporadic participation. For example, two officials said their agencies were challenged in dedicating staff with multiple, concurrent duties to teams because staff could not be spared from their other duties while participating in the Agile teams. The third official said stakeholder commitment is challenging to maintain when stakeholders rotate frequently and new staff need to learn the roles and responsibilities of those being replaced.

When an organization following waterfall software development migrates to Agile, new tools and technical environments may be required to support that approach, as well as updates to guidance and procurement strategies. Officials described challenges in preparing for Agile as follows.

Timely adoption of new tools was difficult: As identified in the effective practices, automated tools may be used to support project planning and reporting.
One official noted that implementing Agile tools that aid in planning and reporting progress was initially a challenge because there was a delay in buying, installing, and learning to use these tools.

Technical environments were difficult to establish and maintain: Two agency officials noted that establishing and maintaining technical environments posed challenges because Agile calls for development, test, and operational activities to be performed concurrently. According to one agency’s officials, preparing and maintaining synchronized hardware and software environments for these three activities in time to support the releases was expensive and logistically challenging. Furthermore, one of these officials noted that his agency experienced a challenge running multiple concurrent iterations because this required more complex coordination of staff and resources.

Agile guidance was not clear: Officials from three agencies identified a challenge related to the lack of clear guidance for Agile software development, particularly when agency software development guidance reflected a waterfall approach. For example, one official said that it was challenging to develop policy and procedure guidance for iterative projects because they were new and the agency strategy aligned with the waterfall approach. As a result, it was difficult to ensure that iterative projects could follow a standard approach. A second official reported that deviating from waterfall-based procedural guidance to follow Agile methods made people nervous. For example, staff were nervous following team-directed rather than project-manager-directed tasks because this approach was not in their IT guidance. A third official said that their guidance mixed iterative and waterfall life cycle approaches, which staff found confusing.
Procurement practices may not support Agile projects: Agile projects call for flexibility in adding the staff and resources needed for each iteration and in adapting to changes from one iteration to the next. One official stated that working with federal procurement practices presents a challenge where they do not support the flexibility required. For example, he said that federal contracts that require onerous, waterfall-based artifacts to constantly evaluate contractor performance are not needed in an Agile approach, where the contractor is part of the team whose performance is based on the delivery of an iteration. Furthermore, the official said that they are challenged changing contractor staff in time to meet iteration time frames, and that accommodating task changes from one iteration to the next is challenging because contracting officers require cumbersome, traditional structured tasks and performance checks.

As described in the effective practices, Agile projects develop software iteratively, incorporating requirements and product development within an iteration. Such requirements may include compliance with agency legal and policy requirements. Officials reported challenges executing steps related to iterative development and compliance reviews as follows.

Customers did not trust iterative solutions: Agile software products are presented to customers incrementally, for approval at the end of each iteration, instead of presenting complete products for approval at waterfall milestones. Officials at two agencies reported a challenge related to customer mistrust of iterative solutions. Specifically, one agency official said customers expecting a total solution feared that the initial demonstrations of functionality provided in the current iteration would be considered good enough, and they would not receive further software deliveries implementing the remainder of their requirements.
At another agency, an official said this fear contributed to customers finding it difficult to define done. Specifically, customers were challenged in defining when each requirement would be considered done because they were afraid that this would be viewed as meaning all related functions were being met, and that unmet requirements would be dropped and never implemented. Teams had difficulty managing iterative requirements: Teams provide input on prioritizing requirements, and deciding what to do with new requirements discovered during iterations. Two agencies’ officials reported challenges managing requirements. Specifically, one official reported that customers were initially challenged to validate and prioritize which requirements would be assigned to a release. Using the waterfall development model, they were used to identifying all requirements up front and not revisiting them as they were developed. The second official said they were challenged to accommodate new requirements within the fixed schedule for a product release. Compliance reviews were difficult to execute within an iteration time frame: Iterations may incorporate compliance reviews to ensure, for example, that agency legal and policy requirements are being met within the iteration. One agency official reported a challenge obtaining compliance reviews within the short, fixed time frame of an iteration because reviewers followed a slower waterfall schedule. Specifically, the official said that compliance reviewers queued requests as they arose and that the reviews could take months to perform. This caused delays for iterations that needed such reviews within the few weeks of the iteration. Agile advocates evaluation of working software over the documentation and milestone reporting typical in traditional project management. Officials described challenges in evaluating projects related to the lack of alignment between Agile and traditional evaluation practices. 
Specifically, officials explained that: Federal reporting practices do not align with Agile: Two agency officials noted that several federal reporting practices do not align with Agile, creating challenges. For example, one official said federal oversight bodies want status reports at waterfall-based milestones rather than timely statements regarding the current state of the project. The second official said OMB’s IT investment business case (known as the exhibit 300) and IT Dashboard, a publicly available website that displays detailed information on federal agencies’ major IT investments, are waterfall-based. For example, the IT Dashboard calls for monthly statistics instead of demonstrations of working software. He also noted that it is frustrating when dashboard statistics are flagged in red to note deviations, even when the deviation is positive, such as being ahead of schedule and under cost. Traditional artifact reviews do not align with Agile: Traditional oversight requires detailed artifacts in the beginning of a project, such as cost estimates and strategic plans, while Agile advocates incremental analysis. One agency official noted that requiring these artifacts so early was challenging because it was more worthwhile to start with a high-level cost estimate and vision to be updated as the solution was refined through iterations, rather than spending time estimating costs and strategies that may change. Traditional status tracking does not align with Agile: Officials from three agencies noted that project status tracking in Agile does not align with traditional status tracking methods, creating challenges. For example, one official said that tracking the level of effort using story points instead of the traditional estimating technique based on hours was a challenge because team members were not used to that estimation method, although eventually this method was embraced. 
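Story-point tracking, mentioned above, estimates effort in relative units and forecasts completion from observed team velocity rather than from hours. A minimal sketch of that arithmetic, using hypothetical figures rather than data from any of the projects reviewed:

```python
import math

# Illustrative only: the figures below are hypothetical, not taken from
# any project discussed in this report.

# Story points completed in each of the last four iterations.
completed_points = [18, 22, 20, 24]

# Velocity: the average number of points the team finishes per iteration.
velocity = sum(completed_points) / len(completed_points)   # 21.0

# Forecast: iterations needed to burn down the remaining backlog,
# rounded up because a partial iteration still occupies a full cycle.
remaining_backlog_points = 105
iterations_needed = math.ceil(remaining_backlog_points / velocity)   # 5
```

The point of the method is that velocity is measured from the team's own history, which is why, as the official noted, it can take time for members accustomed to hour-based estimates to embrace it.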
Two other agency officials said earned value management (EVM) was challenging to apply in an Agile environment. Specifically, one official said that the required use of EVM was challenging because there was no guidance on how to adapt it to iterations. The second official found EVM challenging because the agency was required to use it to track changes in cost, schedule, and product scope through monthly reports, and changes were viewed as control problems rather than as revisions to be expected during an iteration. For example, the project’s scope was prioritized within every iteration based on the cost and schedule limits of the iteration and release. He also noted that risk tracking in Agile does not align with traditional risk tracking methods because issues are addressed within an iteration rather than queued, such as in a traditional monthly risk log. In addition to identifying challenges, federal officials described their efforts to address these challenges. For example, officials said they clarify policies to address the challenge of Agile guidance lacking clarity. To mitigate the challenge related to customers not trusting iterative solutions, an official said they call the iteration review a mini-critical design review. This helps customers understand that they must declare the iteration complete or not, known as committing to done. Another official said one way that they addressed the challenge related to teams having difficulty managing iterative requirements was to add an empty iteration to the end of the release schedule to accommodate requirements discovered during the iterations. In addition to the efforts at individual agencies to mitigate Agile challenges, the Federal CIO Council has begun an effort on a related topic. According to an official working with the Council, it is currently drafting a document on modular development.
Consistent with OMB’s IT reform efforts, the document is expected to provide guidance for agencies seeking to use more modular development approaches, such as Agile. However, according to the official, the draft does not specifically address Agile effective practices. Also, in June 2012 OMB released contracting guidance to support modular development. This guidance includes factors for contracting officers to consider for modular development efforts regarding, for example, statements of work, pricing arrangements, and small business opportunities. As Agile methods begin to be more broadly used in federal development projects, agencies in the initial stages of adopting Agile can benefit from the knowledge of those with more experience. The ongoing effort by the Federal CIO Council to develop guidance on modular development provides an excellent opportunity to share these experiences. The effective practices and approaches identified in this report, as well as input from others with broad Agile experience, can inform this effort. To ensure that the experiences of those who have used Agile development are shared broadly, we recommend that the Federal CIO Council, working with its chair, the Office of Management and Budget’s Deputy Director for Management, include practices such as those discussed in this report in the Council’s ongoing effort to promote modular development in the federal government. We provided a draft of our report to OMB and to the five federal agencies included in our review. In oral comments on the draft, OMB’s E-Government program manager said that the draft recommendation was better addressed to the Federal CIO Council than to the OMB official who is the chair of the Council. Accordingly, we revised the recommendation to address it to the Council, working with its chair, the OMB Deputy Director for Management. Two of the five agencies provided written comments on the draft, which are reprinted in appendixes V and VI.
Specifically, the Department of Veterans Affairs Chief of Staff stated that the department generally agreed with the draft’s findings, and the Acting Secretary of the Department of Commerce stated that the Patent and Trademark Office concurred with our assessment. Two other agencies, the Internal Revenue Service and the Department of Defense, provided technical comments via e-mail, which we incorporated as appropriate. In an e-mail, a manager in the National Aeronautics and Space Administration (NASA) center included in our review said that NASA had no comments. As agreed with your offices, we will send copies of this report to interested congressional committees; the Secretaries of Defense, Commerce, and Veterans Affairs; the Administrator of NASA and the Commissioner of Internal Revenue; the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact David A. Powner at (202) 512-9286 or Dr. Nabajyoti Barkakati at (202) 512-4499 or by e-mail at pownerd@gao.gov or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Our objectives were to identify (1) effective practices in applying Agile for software development solutions and (2) federal challenges in implementing Agile development techniques. To identify effective practices, we interviewed a nongeneralizable sample of nine experienced users; a tenth experienced user helped us pretest our data collection process. To identify these users, we researched publications, attended forums, and obtained recommendations from federal and private officials knowledgeable about Agile.
We selected individuals with Agile software development experience with public, private sector, and non-profit organizations. Using a structured interview, we asked them to identify effective practices when applying Agile methods to software development projects. We then compiled the reported practices and aligned and combined some with a broader practice. For example, practices related to preparation, such as mock and pilot iterations, were aligned and then combined into the final practice, “Allow for a gradual migration to Agile appropriate to your readiness.” If a practice did not align with other or broader practices, it was listed individually. We then sent the resulting list of practices in a questionnaire to our experienced users. This list was not organized into categories to ensure that each practice would be viewed individually. We asked our users to rate each practice as either (1) highly effective, (2) moderately effective, (3) somewhat effective, or (4) not applicable/do not know. We compiled the ratings and included in our list the practices that received at least six ratings of highly effective or moderately effective from the 8 experienced users who provided the requested ratings. This resulted in 32 practices, which we aligned to key project management activities in Software Engineering Institute guidance: strategic planning, organizational commitment and collaboration, preparation, execution, and evaluation. This alignment was based on our best judgment. The ninth experienced user was asked for input on the list of practices with the others, but did not respond in time to meet our reporting deadline. To identify federal challenges, we interviewed officials responsible for five federal software development projects that reported using Agile practices. To identify the projects, we researched our previous work, federal websites, and publications, and attended federal forums.
We selected a nongeneralizable sample of projects designed to reflect a range of agencies, system descriptions, and cost (see app. IV for details about the projects and the responsible officials). We then asked officials from each project to identify federal challenges in implementing an Agile approach using a structured interview. We summarized the challenges and categorized them as aligning with either organizational commitment and collaboration, preparation, execution, or evaluation. Separately, we sent the federal officials a questionnaire listing the effective practices we compiled based on input from our experienced users. The questionnaire asked whether these practices were used and found effective. Although our results are not generalizable to the population of software development projects reporting the use of Agile practices, they provided valuable insight into both the effective use and challenges in applying Agile in the federal sector. We conducted our work from October 2011 through July 2012 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Agile development encompasses concepts that were previously used in software development. These concepts were documented as Agile themes and principles by 17 practitioners, who called themselves the Agile Alliance. In February 2001 the Alliance released “The Agile Manifesto,” in which they declared: “We are uncovering better ways of developing software by doing it and helping others do it. 
Through this work we have come to value: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan.” The Alliance added that while they recognized the value in the second part of each statement (i.e., “processes and tools”), they saw more value in the first part (“individuals and interactions”). The Alliance further delineated their vision with twelve principles. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale. Business people and developers must work together daily throughout the project. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. Working software is the primary measure of progress. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely. Continuous attention to technical excellence and good design enhances agility. Simplicity—the art of maximizing the amount of work not done—is essential. The best architectures, requirements, and designs emerge from self-organizing teams. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. The five federal software development projects that reported challenges in applying Agile practices are profiled as follows. In addition to the contact names above, individuals making contributions to this report included James R.
Sweetman, Jr. (assistant director), Jenny Chanley, Neil Doherty, Rebecca Eyler, Claudia Fletcher, Nancy Glover, and Karl Seifert.
Federal agencies depend on IT to support their missions and spent at least $76 billion on IT in fiscal year 2011. However, long-standing congressional interest has contributed to the identification of numerous examples of lengthy IT projects that incurred cost overruns and schedule delays while contributing little to mission-related outcomes. To reduce the risk of such problems, the Office of Management and Budget (OMB) recommends modular software delivery consistent with an approach known as Agile, which calls for producing software in small, short increments. Recently, several agencies have applied Agile practices to their software projects. Accordingly, GAO was asked to identify (1) effective practices in applying Agile for software development solutions and (2) federal challenges in implementing Agile development techniques. To do so, GAO identified and interviewed ten experienced users and officials from five federal projects that used Agile methods and analyzed and categorized their responses. GAO identified 32 practices and approaches as effective for applying Agile software development methods to IT projects. The practices generally align with five key software development project management activities: strategic planning, organizational commitment and collaboration, preparation, execution, and evaluation. Officials who have used Agile methods on federal projects generally agreed that these practices are effective. Specifically, each practice was used and found effective by officials from at least one agency, and ten practices were used and found effective by officials from all five agencies. The ten practices are: Start with Agile guidance and an Agile adoption strategy. Enhance migration to Agile concepts using Agile terms, such as user stories (used to convey requirements), and Agile examples, such as demonstrating how to write a user story. Continuously improve Agile adoption at both the project level and organization level. 
Seek to identify and address impediments at the organization and project levels. Obtain stakeholder/customer feedback frequently. Empower small, cross-functional teams. Include requirements related to security and progress monitoring in your queue of unfinished work (the backlog). Gain trust by demonstrating value at the end of each iteration. Track progress using tools and metrics. Track progress daily and visibly. GAO identified 14 challenges with adapting and applying Agile in the federal environment: Teams had difficulty collaborating closely. Teams had difficulty transitioning to self-directed work. Staff had difficulty committing to more timely and frequent input. Agencies had trouble committing staff. Timely adoption of new tools was difficult. Technical environments were difficult to establish and maintain. Agile guidance was not clear. Procurement practices may not support Agile projects. Customers did not trust iterative solutions. Teams had difficulty managing iterative requirements. Compliance reviews were difficult to execute within an iteration time frame. Federal reporting practices do not align with Agile. Traditional artifact reviews do not align with Agile. Traditional status tracking does not align with Agile. Finally, officials described efforts to address challenges by clarifying previously unclear guidance on using Agile. In a related effort, the Federal Chief Information Officers (CIO) Council is developing guidance on modular development in the federal government, but it does not specifically address effective practices for Agile. GAO is recommending that the Federal CIO Council, working with its chair, OMB’s Deputy Director for Management, include practices such as those discussed in this report in the Council’s ongoing effort to promote modular development. After reviewing a draft of this report, OMB commented that the recommendation was better addressed to the Council than to its chair.
GAO revised the recommendation to address it to the Council working with its chair.
BOP’s mission is to protect society by confining offenders in the controlled environments of prisons and community-based facilities that are safe, humane, cost-efficient, and appropriately secure, and that provide work and other self-improvement opportunities to assist offenders in becoming law-abiding citizens. BOP is organized into six regions of the country—Mid-Atlantic, North Central, Northeast, South Central, Southeast, and Western. BOP manages the construction of and operates institutions at five security levels—minimum, low, medium, high, or administrative security—to confine offenders in an appropriate manner. Institutions constructed for a given security level generally have the same design and features. For example, FCIs, which are medium-security institutions, generally have strengthened perimeter fencing, cell-type housing, and a wide variety of work and treatment programs. As such, FCI construction projects typically include a UNICOR facility that employs and provides job skills training to inmates. UNICOR is a government corporation administered by DOJ, with the Director of the Bureau of Prisons as its Chief Executive Officer. FCI construction projects also generally include an adjacent work- and program-oriented minimum-security Federal Prison Camp, where inmates help serve the labor needs of the larger, higher-security FCI. We have previously reported that BOP follows a centralized, long-term capacity planning process, with the aim of ensuring sufficient institutional capacity while maintaining prison populations at safe-and-secure targeted levels. BOP has two planning committees that are involved in the capital decision-making process to identify new facility prison construction projects: the Capacity Planning Committee (CPC) and the Long-Range Planning Committee (LRPC).
According to BOP headquarters officials, CPC proposes new projects by BOP region using the Capacity Plan, which provides projections of inmate population and rates of prison overcrowding. BOP develops initial budget estimates for the projects that CPC proposes, and LRPC ranks the proposed new prison facility construction projects and makes specific funding recommendations to the Director of BOP. The new construction projects are ranked on the basis of agency need, funding, and the speed with which the projects can be constructed. BOP includes its proposed new construction projects in its annual Federal Prison System budget request made to DOJ. As part of the DOJ annual congressional budget submission, BOP also provides its Federal Prison System Status of Construction report (status report), which provides information on the status of construction for major projects that have received funding. The specific information provided is as follows: each project’s descriptive title, with name, type, and location; the amounts funded, by fiscal year; the total project cost estimates; the funds obligated to date; the estimated year of use; and a brief status of the project. However, detailed project information is not provided in this status report. Although BOP provides information to Congress about the specific projects that it plans to support with the funds it requests, funding for BOP construction is provided as a lump sum into its “Buildings and Facilities Account,” rather than by the specific project. As a result, BOP can shift funds within this account to fund cost increases on different projects. In the last 10 years, BOP has completed 30 prison projects at a cost totaling over $3.6 billion. BOP has received about $710 million for the 3 prison projects currently under construction—FCI Mendota, FCI Berlin, and FCI McDowell. 
BOP has plans for 10 additional prison projects that have received about $363 million in funding to date, as listed in its fiscal year 2009 congressional budget submission. To request funding for construction projects such as a prison, an agency must develop an initial project cost estimate several years before it plans to begin construction. Cost estimating requires both science and judgment. Since answers are seldom—if ever—precise, the goal is to find a reasonable “answer.” Cost estimates are based on many assumptions, including the rate of inflation and when construction will begin. Generally, the more information that is known about a project and is used in the development of the estimate, the more accurate the estimate is expected to be. OMB’s guidance for preparing budget documents identifies many types and methods of estimating project costs. The expected accuracy of the resulting project cost estimates varies, depending on the estimating method used. As part of the project planning and budgeting process, BOP officials develop an initial cost estimate when the need is identified for a prison in a particular region of the country. Given that its prisons for a specific security level generally have the same design features, BOP uses cost and pricing information from a previous project to create a national average cost for construction as the basis for its initial estimate of a new project. To develop an initial cost estimate, BOP adjusts its national average cost by assumptions for various factors, such as the difference in construction costs for different regions of the country, the difficulty of construction, and the expected inflation until construction is planned to begin. For example, in 1999, BOP created a national average construction cost for an FCI on the basis of the average of the 1998 bids for FCI Petersburg, Virginia, the most recently available FCI construction cost information. 
BOP adjusted FCI Petersburg’s pricing information to take into account inflation between 1998 and 1999 and the relative construction costs in Petersburg. To establish the initial estimate for FCI Mid-Atlantic—which became FCI McDowell, located in McDowell County, West Virginia—BOP adjusted the national average to take into account the relative construction costs and difficulty of construction for the Mid-Atlantic region of the country and an inflation adjustment to 2001, which is when BOP expected to begin construction. BOP used this estimate as the basis for requesting funding. BOP’s process of using cost information from an earlier project to estimate the cost of a similar proposed project is one of the types of estimates discussed in OMB’s guidance. Because this type of cost estimate is based on a single overall project cost, guidance indicates that actual project costs may vary from such an estimate by as much as ± 40 percent. Actual costs may vary by this percentage even if the project begins as assumed in the estimate because detailed project information, such as quantities of particular construction components, was not used in developing the estimate. A BOP official stated that he believes BOP’s estimates are more accurate than ± 40 percent because BOP uses its own historical project information and because of the similarities shared by BOP projects. Delays in starting project construction or disruptions in available funding, which interrupted construction, contributed to increases in cost estimates due to inflation and to unexpected increases in construction material costs. According to BOP officials, problems associated with selecting sites for FCI Mendota, FCI Berlin, and FCI McDowell and with receiving the funding later than planned in the initial estimates contributed to the increase in the cost estimates. During the time that the projects were delayed, construction costs rose at a rate higher than inflation.
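The adjustment process described above amounts to multiplying a base cost from a prior project by a series of factors. A hedged sketch of the mechanics, using hypothetical factor values rather than BOP’s actual figures, together with the ± 40 percent range that OMB’s guidance associates with this type of estimate:

```python
# Hypothetical illustration of the analogous estimating method described
# above: a prior project's cost adjusted by location, difficulty, and
# inflation factors. None of these numbers come from BOP.

base_cost = 100_000_000      # bid cost of a comparable earlier prison, dollars
location_factor = 1.05       # regional construction-cost adjustment (assumed)
difficulty_factor = 1.10     # construction-difficulty adjustment (assumed)
annual_inflation = 0.03      # assumed inflation rate
years_to_start = 2           # years until construction is expected to begin

estimate = (base_cost * location_factor * difficulty_factor
            * (1 + annual_inflation) ** years_to_start)   # about $122.5 million

# OMB guidance indicates a single-number analogous estimate of this type
# may vary from actual costs by as much as +/- 40 percent.
low, high = estimate * 0.6, estimate * 1.4
```

Note how sensitive the result is to the inflation assumption: if construction slips by even a few years, the exponent grows and the estimate can fall well below eventual costs.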
Also, cost estimates are imprecise and actual costs should be expected to vary from them, but Congress and other stakeholders were not informed about the extent to which costs might vary from the initial estimates. According to BOP officials, all three projects experienced delays in beginning construction because of problems associated with selecting and approving the sites for the prisons as well as with the availability of funding. FCI Mendota also experienced disruptions in available funding that led to an interruption in construction. See appendix II for project estimates, budget requests, and funding for the three projects. In fiscal years 2001 and 2002, about $150 million was appropriated for a high-security United States Penitentiary in California as requested in the President’s budget. Funding was reduced in fiscal year 2002 when BOP applied a rescission of about $5.7 million to the project. When BOP initially estimated the cost for this project, it expected the contract to be awarded in fiscal year 2001 and the construction to begin in fiscal year 2002. However, BOP did not award the contract to design and construct the prison until fiscal year 2004. Mendota was selected as the prison site in fiscal year 2002, at which time BOP changed the project to a medium-security FCI. Subsequent environmental impact studies and approvals, which included review and approval by the Environmental Protection Agency (EPA), were completed in fiscal year 2004. In addition, the continued availability of funding for this project came into question in fiscal year 2004, when Congress rescinded almost $52 million of funding. Furthermore, in fiscal year 2005 an amendment to the President’s Budget proposed canceling $55 million from the unobligated balances in the Buildings and Facilities Account previously provided for the FCI Mendota project.
Despite this disruption in the available funding, BOP continued with the FCI Mendota project because it expected that the rescinded funds would be restored the following year. Partly as a result of the rescission, BOP officials separated the work for this project into several pieces. This decision enabled BOP to award a single contract for the project’s design and construction in September 2004. The contract was structured for the contractor to begin with design and allowed BOP to decide when and what pieces of the construction would be done on the basis of the availability of funding. In December 2004, with the funding it had, BOP directed the contractor to construct the central utility plant, water tower, and general housing units. The contract required BOP to award the remaining pieces necessary to complete the facility—such as the support structures, UNICOR factory, and Federal Prison Camp—no later than 2006 or the option to do this work under the contract would expire. BOP did not exercise the contract option because it had not received additional funds. As a result, when the contractor completed its work, BOP could not house prisoners at FCI Mendota. Figure 1 shows a comparison of the uncompleted FCI Mendota in California to the completed FCI Forest City in Arkansas. Before BOP could solicit for construction bidders to complete the required work at FCI Mendota, it had to contract for additional engineering services to prepare construction documents. This was necessary to inform bidders about what work had been done and what work remained to be completed. In September 2007 after it received additional funding, BOP awarded the contract to complete FCI Mendota. We have previously raised concerns about this type of construction management. 
For example, we have reported that nonconcurrent construction—that is, where different phases of a project are constructed at different times—increases the overall cost to the government because it requires additional and expensive mobilization of contractor staff and equipment, security, work to procure building materials, and construction management oversight. We also have raised concerns in prior work about starting capital projects without all of the funding necessary to complete the project or, if the project is divisible into stages, to complete a stand-alone stage that would result in a usable asset. While BOP had funding for the pieces for which it awarded a contract, the pieces did not result in a usable asset because it did not have enough of the pieces of the project completed to house prisoners safely and securely. Although BOP shifted some funds to help pay for the Mendota project, according to BOP officials, sufficient funds were not available to fully fund the Mendota project without delaying or canceling other projects that BOP had told Congress it planned to begin. To date, the total funding for FCI Mendota has exceeded the initial fiscal year 2001 estimate by about $72 million, or almost 45 percent. However, the latest project estimate is over $6 million, or 2.8 percent, more than the current funding. BOP officials have told us that they do not plan to request any more funding for this project, and that BOP will shift funds within its Buildings and Facilities Account as necessary to complete FCI Mendota. In fiscal year 2004, about $154 million was appropriated for FCI Berlin, and about $40 million was appropriated for an FCI in the Mid-Atlantic region. When BOP initially estimated the costs for these projects, it expected to receive funding for design and construction of these facilities in fiscal years 2004 and 2002, respectively.
BOP did not award contracts for the design and construction of these projects until fiscal years 2007 and 2006, respectively. According to BOP, both projects experienced delays in selecting the prison locations and in completing the environmental impact studies. For FCI Berlin, the property was acquired and EPA completed its approval process in fiscal year 2007. For FCI McDowell, these events occurred in fiscal years 2006 and 2005. In addition, BOP officials stated that they were reluctant to proceed with construction because of OMB’s moratorium on new construction for fiscal years 2005 through 2007. Also, the President’s Budget included proposed cancellations of unobligated funds from BOP’s Buildings and Facilities Account for fiscal years 2004, 2006, and 2007. To date, the total funding for FCI Berlin and FCI McDowell has exceeded the initial estimates by about $93.5 million and $112.3 million, or 56 percent and 89 percent, respectively. However, the latest project estimates are more than $11 million and $9 million, or 4.2 percent and 3.7 percent, more than the current funding for FCI Berlin and FCI McDowell, respectively. BOP officials told us that they do not plan to request any more funding for these projects, and that BOP will shift funds within its Buildings and Facilities Account as necessary to complete them.

BOP factors into its estimates the project’s expected start date and duration, on the basis of when BOP expects to receive funding. Generally, if a project does not start as assumed in the cost estimate, the estimated cost of the project should be expected to change at least by the rate of inflation that occurs during the time that elapses between the expected start date and the actual start date. BOP officials stated that during the time that these projects were delayed, construction industry costs increased at a rate greater than inflation. Costs for materials used in construction, such as concrete, steel, copper, and oil, rose substantially. 
For example, steel prices rose by about 60 percent and oil prices rose by almost 150 percent between 2003 and 2007—a time between when the initial cost estimates were prepared and when the projects were ready to proceed with construction. We analyzed national data on construction material costs from 2003 through 2007 to provide some context on increases to construction prices. Specifically, to identify nationwide trends in the costs of many of the materials used in construction—from concrete to electrical equipment— we analyzed the Department of Labor, Bureau of Labor Statistics’ Inputs to Construction Industries Producer Price Index (ICIPPI). As shown in table 1, from 2003 through 2007, the ICIPPI increased more than the consumer price index, indicating that construction costs increased at a higher rate than other costs. Because BOP estimates its initial project costs and requests funding early in the planning process, generally before the specific location for the prison has been selected, actual project costs can be expected to vary from the initial estimates to some extent. This variance would be in addition to any cost implications of a change in the project, such as a delay in beginning construction. As we have previously noted in this report, the extent to which one might expect actual costs to vary from estimates typically depends on the type of estimating process used. In developing its initial estimate prior to the selection of a site for the prison, BOP relies on the cost of a previous prison as the foundation of its estimate. However, BOP has more detailed information available than just the cost of a previous prison, which, if used, would likely result in a more accurate estimate. For example, BOP could analyze the design documents or itemized costs that contractors on previous projects included in their bills. In addition, when the project sites have been selected, actual local market pricing for labor and material costs could be used. 
More BOP resources would be needed to develop such an analysis. According to government guidance, BOP’s method of using total cost information from a prior project as the basis for its estimate may result in actual project costs varying from the estimate by as much as ±40 percent. A BOP official stated that BOP’s estimating method results in more precise estimates than ±40 percent, but an analysis of the accuracy of BOP’s estimating method has not been done.

Regardless of the estimating method used, Congress relies on information provided by agencies when making funding decisions. Although BOP, like other agencies, is not required to communicate the extent to which actual costs may be expected to vary from its estimates in budget documents or reports on project status, we have recently identified providing such information as a best practice. BOP has not provided this information to Congress and other stakeholders. Thus, BOP has not alerted them to the risk that BOP might require additional funding to complete the projects as originally planned. OMB guidance points out that estimating inaccuracy—both overestimating and underestimating—can adversely affect other projects. With overestimating, an agency may request and be provided with more resources than it will actually need for the project, thereby leaving fewer resources available for other projects or programs. Underestimating can lead an agency to request fewer resources than it will actually need to complete the project, potentially leading to a significant reduction in the project scope, termination of the project, or the shifting of funds from other projects. Inaccurate estimates also reduce confidence in the accuracy of future estimates provided by an agency. 
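Both effects discussed above (cost escalation over a delay, and the roughly ±40 percent accuracy band of an estimate built from a prior project's total cost) reduce to simple arithmetic. A minimal sketch, using hypothetical dollar figures and rates chosen only for illustration; they are not BOP's actual numbers:

```python
def escalate(estimate, annual_rate, years):
    """Compound a cost estimate forward over a delay period."""
    return estimate * (1 + annual_rate) ** years

def estimate_range(estimate, accuracy):
    """(low, high) band implied by a symmetric accuracy such as 0.40."""
    return estimate * (1 - accuracy), estimate * (1 + accuracy)

# Hypothetical figures for illustration only.
initial = 100_000_000           # $100 million initial estimate
years_delayed = 3
general_inflation = 0.03        # assumed general inflation rate
construction_escalation = 0.08  # assumed construction-cost growth rate

# Delay priced at general inflation vs. construction-market escalation.
at_inflation = escalate(initial, general_inflation, years_delayed)
at_market = escalate(initial, construction_escalation, years_delayed)
print(round(at_market - at_inflation))  # gap the inflation-only adjustment misses

# An estimate based on the total cost of a prior project may be
# accurate only to about +/-40 percent, per the guidance cited above.
low, high = estimate_range(initial, 0.40)
print(f"${low:,.0f} to ${high:,.0f}")
```

With these assumed rates, a three-year delay priced at general inflation alone understates market-driven costs by tens of millions of dollars, which is the kind of gap the projects above experienced.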
Consequently, BOP’s ability to inform Congress and other stakeholders about the extent to which costs may vary from its initial project cost is important as it plans additional prison projects and submits subsequent funding requests. BOP eliminated or reduced portions of two projects, but did not clearly communicate these changes to Congress and other stakeholders. BOP also plans to use its construction management policies and procedures to control cost increases and schedule delays during construction. Congress appropriated funds for fiscal year 2007 that BOP indicated in its status report were required to complete the three prison projects. However, BOP eliminated portions of the FCI Berlin and FCI Mendota projects when it awarded contracts in May and September 2007, respectively, to keep the projects within the estimated costs provided to Congress. According to BOP officials, the contractors’ bids for FCI Berlin and FCI Mendota were higher than expected. In response, at FCI Berlin, BOP chose to eliminate the UNICOR facility where inmates were to be employed and provided with job skills training. Subsequently, UNICOR has agreed to pay for the cost of constructing a smaller than originally planned facility, which has now been added back to the project. At FCI Mendota, BOP eliminated both the UNICOR facility and the minimum security Federal Prison Camp. Eliminating or reducing the UNICOR facilities affects BOP’s mission to provide work and other self-improvement opportunities for inmates. As a result, these two projects are no longer the same as those for which BOP initially sought and received appropriated funds. While eliminating or reducing portions of two projects enabled BOP to award contracts, the resulting facilities will not provide the same range of services as originally planned. As part of its annual congressional budget submission, BOP reports on the status of projects that have received funding in the past. 
This status report includes the following information: each project’s descriptive title, with name, type, and location; the amounts funded, by fiscal year; the total project cost estimates; the funds obligated to date; the estimated year of use; and a brief status of the project. However, detailed project information is not provided in this status report. In reviewing the BOP Status of Construction report in DOJ fiscal year 2009 budget documents, we found that the report does not discuss the elimination of the UNICOR facility at FCI Mendota. Furthermore, BOP did not mention that the Federal Prison Camp had been eliminated from FCI Mendota. The only indication of this change is that the project title no longer includes the words “with camp.” While BOP receives a lump-sum appropriation for prison construction, Congress makes its appropriation on the basis of, among other things, the project information provided by BOP in its annual congressional budget submission. For FCI Berlin and FCI Mendota, BOP did not clearly communicate to Congress or other stakeholders that the facilities being constructed differed from those for which funds were requested and appropriated. In addition, if BOP should decide to construct these omitted facilities in the future and fulfill these projects’ initial designs, it would likely cost more than if the facilities had been constructed as one project. We have previously reported that nonconcurrent construction of a project increases the overall cost to the government because such construction requires additional and expensive (1) mobilization of contractor staff and equipment, (2) security, (3) work to procure building materials, and (4) construction management oversight. BOP officials told us that they will have more ability to control project costs because they have awarded design and construction contracts for the three projects. 
These officials believe that using their construction management policies and procedures will allow them to control cost increases and schedule delays. Controlling schedule delays is critical because such delays can lead to cost increases. BOP officials said they will use their Design and Construction Procedures and Construction Management Guidelines to manage the construction of prison projects. Within BOP, the Design and Construction Branch is responsible for the oversight and management of prison construction, and each project has a BOP project manager. BOP’s Design and Construction Procedures outlines the specific tasks that the Design and Construction Branch must complete to manage the coordination, execution, oversight, and monitoring of the activities required to construct the projects. For example, this guidance states that the project manager must monitor and report on the contractor’s performance during construction and review changes or a modification to the contract to evaluate the extent to which BOP can hold the contractor reasonably liable and, therefore, responsible for the resulting costs. To accomplish these tasks, BOP’s Construction Management Guidelines provides additional guidance, which identifies the processes that BOP staff must follow to monitor the requirements and implementation of BOP construction projects. For example, this guidance requires BOP officials and the contractor to hold numerous design and construction meetings throughout the duration of the project’s schedule to ensure good communication and effective management of the project. In addition, the Construction Management Guidelines requires specific reports—including weekly status reports, monthly project progress reports, performance evaluations of the contractor’s team members, and other reports—to facilitate oversight and monitoring of the projects within BOP. 
For example, this guidance requires the use of critical path method scheduling that breaks a project down into a sequence of necessary activities, which are placed into a project schedule that the project manager can closely monitor. With this management tool, BOP has the ability to monitor and track a project’s current progression of work in relation to its initial schedule. This tool also gives the project manager the ability to evaluate proposed construction changes or modifications to the project and to understand their resulting impacts on the project’s schedule. This evaluation step remains crucial because once BOP awards the contract, any construction changes that impact the project schedule may also lead to cost changes. BOP’s guidance establishes clear lines of responsibility and documentation requirements. The bureau’s Construction Management Guidelines also outlines a detailed process that BOP must follow to manage and approve any changes or contract modifications to the project. For example, this guidance states that BOP’s project manager should review any proposed changes or modifications to the project and determine if the changes or modifications need further review by BOP’s Design and Construction Branch chief. If further review is warranted, sufficient background information supporting the changes or modifications should be provided to the branch chief, along with the proposal. In addition, this guidance states that all change or modification proposals should be discussed at regularly scheduled—or specially scheduled—progress meetings with the contractor. If the changes or modifications will affect the work, more detailed information should be provided to justify them, and the project management team must also evaluate them to ensure they are in compliance with BOP’s contract requirements. Furthermore, to document the process, the guidance requires that detailed files be created and maintained for all changes or modifications. 
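The critical path method mentioned above can be sketched as a forward pass over activities in dependency order: each activity's earliest start is the latest earliest-finish among its predecessors, and the longest chain of dependent activities sets the project duration. The activities and durations below are hypothetical examples, not BOP's actual schedule:

```python
# Hypothetical activities: name -> (duration in weeks, predecessors).
activities = {
    "site_prep":     (8,  []),
    "utility_plant": (20, ["site_prep"]),
    "housing_units": (30, ["site_prep"]),
    "support_bldgs": (16, ["utility_plant"]),
    "final_fitout":  (10, ["housing_units", "support_bldgs"]),
}

# Forward pass; relies on the dict listing each activity after its
# predecessors (a topological order).
earliest_finish = {}
for name, (duration, preds) in activities.items():
    start = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[name] = start + duration

# The project duration is the largest earliest-finish time; a change
# to any activity on the critical path pushes this date out.
print(max(earliest_finish.values()))
```

The slack this reveals is what the project manager monitors: here, for instance, the housing units could finish as late as week 44 without delaying the final fit-out, while any slip along the site_prep, utility_plant, support_bldgs chain delays the whole project.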
We found that BOP’s construction guidance—specifically, its Design and Construction Procedures and Construction Management Guidelines—generally conforms to government and industry management practices. We compared BOP’s existing construction management policies and procedures with existing guidance from OMB, the General Services Administration’s Construction Excellence Features, the Department of Energy’s Project Management for the Acquisition of Capital Assets, and the Construction Industry Institute’s (CII) Guidelines for Implementation of CII Concepts: Best Practices for the Construction Industry. Our review showed that BOP’s construction management policies and procedures required systems for monitoring, tracking, analyzing, forecasting, and reporting the status of a project. For example, we found that BOP has procedures for reporting important project information—such as cost and schedule, and their deviations from trends—to the appropriate personnel, including management. Furthermore, BOP has procedures for corrective actions when deviations in cost and schedule occur as well as procedures for controlling project changes or modifications.

In addition to the guidance, BOP officials said that the use of the Design-Build delivery system for two of these projects will help to reduce the risk of additional costs being incurred during construction. This type of project delivery system places the project design and construction under one contract. This can reduce the risk of design errors being identified during construction and leading to project delays or cost increases. To provide some context on the extent to which Design-Build contracting is effective in managing construction, we reviewed the National Institute of Standards and Technology and the CII study of the performance of the Design-Build delivery method versus the traditional Design-Bid-Build delivery system. 
The study found that, when the project was managed by its owner, the Design-Build system performed better than the Design-Bid-Build system both in maintaining the project’s schedule and in managing any changes or rework needed during construction. BOP officials stated that they have more ability to control costs while the project is under construction, and that for the FCI Mendota, FCI Berlin, and FCI McDowell prison projects, they plan to continue to carefully consider and approve changes after construction has begun in order to stay within budget. Since construction of the projects has just begun, it is too early to evaluate the effectiveness of BOP’s construction policies and procedures in controlling cost increases and schedule delays.

When BOP asks Congress or other stakeholders to fund or support projects, it is important for them to be aware of the extent to which actual project costs may vary from the initial estimate. Given the continual competition for limited funds, understanding that a proposed project may need an additional 30 percent in funding as opposed to an additional 10 percent may influence their approval and funding decisions. In addition, BOP develops its estimates and requests funding on the basis of the various facilities it intends to include in each project. If elements of a proposed and funded project that can affect its functionality are eliminated, the project may not fulfill decision makers’ expectations. In addition, later construction of the omitted facilities would likely cost more than if they had been constructed as one project. As the need for prison space continues to grow, BOP’s ability to complete projects within budget and with the elements initially anticipated will be important to demonstrating BOP’s ability to manage its construction program. 
By providing information on the accuracy of its cost estimates and clearly communicating changes that could impact the projects’ functionality, BOP would establish more accountability and transparency to its stakeholders.

To improve accountability and transparency, we are making two recommendations to the Attorney General of the United States to instruct the Director of BOP to clearly communicate to Congress and other stakeholders in DOJ’s annual congressional budget submission, in which BOP provides its requests for funding and reports on the status of construction projects: (1) the extent to which project costs may vary from initial estimates and (2) changes that may impact the functionality of projects.

We provided a draft of this report to the Department of Justice for its review and comment. The Director of the Federal Bureau of Prisons provided written comments on this draft. BOP concurred with our recommendations and stated it would incorporate information on the extent to which project costs may vary from initial estimates and changes that may impact the functionality of projects in DOJ’s annual congressional budget submission. BOP also provided technical corrections, which we incorporated in this report where appropriate. BOP’s comments are reproduced in appendix III.

We will send copies of this report to the appropriate congressional committees and the Attorney General of the United States. Additional copies will be sent to interested congressional committees and the Director of the Office of Management and Budget. We will also make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6923 or at dornt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
To assess the extent to which costs changed for the three prisons currently under construction and the reasons for those changes, we obtained and analyzed the President’s budgets, the Department of Justice’s budget justifications for the federal Bureau of Prisons (BOP) for fiscal years 2001 through 2009, and appropriation laws for fiscal years 2001 through 2008. We obtained and analyzed BOP’s project files for three Federal Correctional Institution (FCI) construction projects in Mendota, California; Berlin, New Hampshire; and McDowell, West Virginia. To determine the rate of inflation and other price indicators for the duration of the three projects’ schedules, we obtained and analyzed the following Department of Labor, Bureau of Labor Statistics’ data: (1) Producer Price Index-Commodities, Metals and metal products, Steel mill products; (2) Producer Price Index-Commodities, Fuels and related products and power, Crude petroleum (domestic production); (3) Producer Price Index Industry Data, Inputs to construction industries; and (4) Consumer Price Index-All Urban Consumers. We obtained and analyzed (1) BOP’s initial cost estimates for FCI Mendota, FCI Berlin, and FCI McDowell and (2) information on prison projects completed from 1998 to 2007—10 years prior to our review—to learn about BOP historical costs. We reviewed BOP’s capital planning guidance. We obtained and analyzed the Office of Management and Budget’s (OMB) guidance for Capital Planning and Budget Submission. We interviewed BOP construction, budget, and financial officials in Washington, D.C. To assess the actions BOP has taken—or plans to take—to control cost increases and schedule delays on the three current construction projects, we obtained and analyzed BOP’s construction guidance and BOP’s project files for FCI Mendota, FCI Berlin, and FCI McDowell. 
We obtained and analyzed government and construction industry data concerning project cost management and guidance from OMB’s Circular A-11, the General Services Administration’s Construction Excellence Features, the Department of Energy’s Project Management for the Acquisition of Capital Assets, and the Construction Industry Institute’s (CII) Best Practices for the Construction Industry. We obtained and analyzed government and construction industry data concerning the performance of the Design-Build delivery method versus the traditional Design-Bid-Build delivery system from the National Institute of Standards and Technology and the CII study. We interviewed BOP construction, budget, and financial officials in Washington, D.C. We did not evaluate the effectiveness of BOP’s construction policies and procedures in controlling cost increases and schedule delays on these projects because although design and construction contracts were awarded, little construction had been done.

We conducted this performance audit from June 2007 through May 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: FCI Mendota, FCI Berlin, and FCI McDowell: Project Estimates, Budget Requests, and Funding for Fiscal Years 2001–2009. (Table not reproduced here; fiscal year 2009 funds have not been determined.)

In addition to the individual listed above, Maria Edelstein, Assistant Director; George Depaoli; Anne Dice; Carlos Diz; Colin Fallon; and Susan Michal-Smith made significant contributions to this report.
The federal Bureau of Prisons (BOP) is responsible for the custody and care of more than 201,000 federal offenders. To provide housing for the federal prison population, BOP manages the construction and maintenance of its prison facilities and oversees contract facilities. GAO was asked to look into recent increases in estimated costs for Federal Correctional Institution (FCI) construction projects located in Mendota, CA; Berlin, NH; and McDowell, WV, which have led to almost $278 million or 62 percent more being provided in funding than initially estimated. This report addresses (1) the reasons for the changes to the estimated costs and (2) the actions BOP has taken--or plans to take--to control future cost increases and delays. GAO reviewed and analyzed BOP's fiscal years 2001 to 2009 budget documents, files for these three projects, and project management guidance. GAO also reviewed government and industry guidance on project management and met with BOP officials. For these three projects, delays in starting construction or disruptions in available funding that interrupted construction contributed to increases in cost estimates due to inflation and unexpected increases in construction material costs. According to BOP officials, delays resulted from problems with selecting and approving the sites for the prisons and with the availability of funding. BOP officials stated that they expected costs to increase by the inflation rate during the delay period, but did not anticipate that market forces would cause the construction costs to increase above the inflation rate, as they did. For example, steel prices rose about 60 percent and oil prices rose by almost 170 percent between the time that BOP prepared the initial cost estimates for these projects and when construction was ready to begin. 
In addition, because BOP estimates initial project costs early in the planning process, generally before an actual prison location is selected, variance from the initial estimates would be expected to some extent, even if the projects are not delayed. BOP, like other agencies, is not required to communicate how much it expects costs may vary from its estimates in its budget documents. Without such information, Congress and other stakeholders do not know the extent to which additional funding may be required to complete the project, even absent any project delays. BOP eliminated or reduced portions of two projects to remain within the amount that was funded and plans to use its construction management policies and procedures to control further cost increases and schedule delays. When awarding the contract for FCI Mendota in 2007, BOP eliminated a UNICOR facility, which would have provided additional employment and job skills training opportunities for inmates, and the minimum-security prison camp. At FCI Berlin, BOP eliminated the UNICOR facility when it awarded the contract in 2007, but subsequently added a smaller UNICOR facility to the project, which will be paid for by UNICOR. Intended to reduce costs, these changes also reduced the functionality of the two prisons, deviating from what BOP planned and requested funding for. In the subsequent budget submission to Congress and other stakeholders, BOP did not clearly communicate these changes, since BOP does not provide such detailed project information. Now that BOP has awarded the construction contracts for the three projects, BOP officials believe that their construction management policies and procedures will allow them to control cost increases and schedule delays. These policies and procedures reflect current government and industry project management practices to monitor and track projects, and to report on their status. 
Furthermore, BOP officials said that they plan to continue to avoid making changes that would increase construction costs after construction begins. GAO did not evaluate the effectiveness of BOP's construction policies and procedures in controlling cost increases and schedule delays on these projects because while construction contracts were awarded, little construction had been done.
In response to financial stresses in the railroad industry, the Congress passed legislation in 1976 and 1980 that dramatically reduced federal regulation over the industry. As a result of the 1976 and 1980 legislation, most rail traffic in the United States is not subject to the Surface Transportation Board’s (the Board) rate regulation, and fewer large railroads account for most of the industry’s revenue and mileage operated. The Board, established pursuant to the ICC Termination Act of 1995, is a bipartisan, independent, adjudicatory body that is organizationally housed within the Department of Transportation (DOT). The Board is responsible for the economic and rate regulation of freight railroads and certain pipelines, as well as some aspects of motor and water carrier transportation.

The Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980 facilitated changes in the freight railroad industry. These acts provided the railroads with greater flexibility to negotiate freight rates and respond to market conditions. The Staggers Act in particular made it federal policy that freight railroads would rely, where possible, on competition and the demand for services rather than on regulation to establish reasonable rates. As a result of mergers and acquisitions fostered by these statutes, as well as bankruptcies and changes in the definition of a class I railroad, the number of large railroads in the United States declined substantially, from the 63 class I railroads operating in 1976 to nine by 1997. Despite the reduction in their number, in 1997 these class I freight railroads accounted for 91 percent of the industry’s freight revenue and 71 percent of the industry’s mileage operated. In 1997, class I freight railroads originated almost 1.6 billion tons of freight, of which coal, farm products, and chemicals accounted for about 61 percent. 
The nine class I freight railroads in 1997 were the Burlington Northern and Santa Fe Railway Co.; CSX Transportation; Consolidated Rail Corporation; Grand Trunk Western Railroad, Inc.; Illinois Central Railroad Co.; Kansas City Southern Railway Co.; Norfolk Southern Corp.; Soo Line Railroad Co.; and Union Pacific Railroad Co. Since 1997, additional railroad consolidations have occurred. In July 1998, the Board approved the division of the Consolidated Rail Corporation’s (Conrail) assets between CSX Transportation and Norfolk Southern Railway. This will reduce the number of class I freight railroads to eight in 1999. Also, in July 1998, Canadian National Railway, the Canadian parent of Grand Trunk Western Railroad, Inc., requested the Board’s authorization to acquire Illinois Central Railroad Company. The Board’s proposed schedule provides for a final decision on the proposed acquisition no later than May 25, 1999. Officials from the Federal Railroad Administration (FRA) believe that within the next 5 to 10 years, the remaining class I railroads could be merged into two transcontinental railroads. The ICC Termination Act of 1995 eliminated the ICC and transferred its core rail adjudicative functions and certain non-rail functions to the Board. Among other things, the Board has economic regulatory authority over freight railroads, addressing such matters as the reasonableness of rates, mergers and line acquisitions, line constructions, and line abandonments. Under the statute, the Board is responsible for balancing shipper and railroad interests by assisting railroads in their efforts to earn adequate revenue to cover their costs and provide a reasonable return on capital while ensuring that shippers that depend on one railroad are protected from unreasonably high rates. The 1976 and 1980 acts provided railroads with significant flexibility to negotiate freight rates and respond to market conditions. 
The 1976 act retained federal rate regulation only for traffic where the railroad dominates the market, that is, it provides service for which there is no effective competition to otherwise control rates. In such cases, the ICC had jurisdiction to determine whether a challenged rate was reasonable and, if unreasonable, to award reparations and prescribe a maximum rate. The Staggers Rail Act built on the reforms of the 1976 act by establishing a threshold under which railroads would not be considered market dominant. The ICC Termination Act transferred this regulatory function to the Board. The Staggers Rail Act permitted railroads to negotiate transportation contracts containing confidential terms and conditions that are beyond the Board’s authority while in effect. As figure 1.1 shows, most rail tonnage in 1997—70 percent—moved under contracts between the railroads and shippers involved and therefore was not subject to the Board’s rate regulation. Shipments exempted from rate regulation accounted for an additional 12 percent of all rail tonnage moved in 1997. The Board is required to exempt any person or class of persons, or a transaction or service, from regulation, where regulation is not needed to carry out congressionally set rail transportation policy and either the transaction or service is of limited scope or regulation is not needed to protect shippers from an abuse of market power. For example, in April 1998, the Board exempted 29 nonferrous recyclable commodity groups from the Board’s regulation. The Board found that trucks play a significant role in the transportation of these commodity groups. Therefore, the Board found that railroads do not possess sufficient market power to abuse shippers. Other exemptions issued for the same reason include those for boxcar traffic, certain agricultural products, and intermodal transportation. The remaining traffic, potentially subject to rate regulation, accounted for 18 percent of rail tonnage in 1997. 
However, the Board’s jurisdiction over this traffic is further limited because it may only provide rate relief where the revenue-to-variable cost percentage exceeds 180 percent and where there is no effective competition. During the 1970s, the railroad industry was in weak financial condition, with a rate of return on net investment of 1.2 percent in 1975 and a return on shareholders’ equity of about 1.9 percent. By contrast, manufacturing companies and utilities earned rates of return in 1975 of about 15 and 12 percent, respectively. The financial community was concerned about the railroads’ long-term viability, since the industry faced cash flow difficulties and marginal credit ratings. The 1976 act required the ICC to develop standards for determining whether railroads were earning adequate revenues to cover their operating costs and provide a reasonable return on capital. The act provided that railroads’ revenue should (1) provide a flow of net income plus depreciation adequate to support prudent capital outlays, ensure the repayment of a reasonable level of debt, permit the raising of needed equity capital, and cover the effects of inflation, and (2) attract and retain capital in amounts adequate to provide a sound transportation system. Despite the reforms of the 1976 act, in 1980, the Congress found that railroads’ earnings were still insufficient to generate the funds they needed to make improvements to their rail facilities. While the 1976 act required the ICC to develop standards for the adequacy of railroad revenue, the Staggers Rail Act of 1980 required the ICC to determine annually which railroads were earning adequate revenues and to consider revenue adequacy goals when it reviewed the reasonableness of rates. According to Board officials, even today, the profitability of class I railroads is among the lowest of major industries. 
When two or more railroads seek to consolidate through a merger or common control arrangement, they must obtain the Board’s approval; transactions requiring the Board’s approval are not subject to the antitrust laws or other federal, state, and municipal laws. During a merger proceeding involving two or more class I railroads, the Board is required to consider, among other things, how the merger will affect competition among railroads (either in the affected region or in the national transportation system), railroad employees, the environment, and the adequacy of transportation provided to the public. As part of its regulatory responsibilities, the Board also addresses informal and formal complaints that railroads have failed to provide reasonable rail service to shippers. The Board oversees other rail matters, such as line constructions and abandonments. Railroads that want to either construct a new rail line or abandon an existing one must generally obtain the Board’s approval. Concerned about the potential barriers that shippers face in seeking relief from allegedly unreasonable rail rates, Senators Byron L. Dorgan, Conrad R. Burns, John D. Rockefeller IV, and Pat Roberts asked us to describe (1) the Board’s rate relief complaint process and how it has changed since the ICC Termination Act of 1995 became law, (2) the number and outcome of rate relief cases pending or filed since 1990, and (3) the opinions of shippers as to the barriers they face when bringing rate complaints to the Board and potential changes to the process to reduce these barriers. At their request, we are also providing information on McCarty Farms, Inc. et al. v. Burlington Northern, Inc. In addition, in the spring of 1999, GAO will issue a companion report that will address how freight railroad rates and service have changed since 1990. 
To describe the rate complaint process and how it has changed since the ICC Termination Act, we reviewed prior GAO reports and the Board’s documents, applicable statutes and regulations, and decisions. We met with Board officials, shippers’ organizations, and the AAR to gain a thorough understanding of the process. We then summarized the rate relief process and obtained comments on our summary from the Board. Board officials provided clarification where necessary, and their comments are included in our report. Our description of the rate relief process is contained in chapter 2. To determine the number and outcome of rate relief cases filed and/or pending since 1990, we obtained the rate complaints either filed with or pending before the Board or its predecessor, the ICC, from January 1, 1990, through December 31, 1998. We compared the number of complaints filed between January 1, 1990, and December 31, 1998, with the number of complaints that shippers filed from 1980 through 1990. We reviewed the complaints to determine, among other things, their nature, the complaint process used in filing them, and their outcome. We also examined the complaints to determine the types of commodities involved, whether the railroad was found to be market dominant, how much time the agency required to make a rate-reasonableness determination, and the rationale for the determination. The Board’s requirement that railroads demonstrate product and geographic competition for traffic subject to a challenged rate was in effect during the course of our review. We did not independently verify the information provided by the Board regarding the number of complaints filed or pending since January 1, 1990. In addition, we did not review the merits of the ICC’s or the Board’s decisions or the appropriateness of the outcome. 
To determine shippers’ views on the rate relief process and suggestions for improvement, we mailed a questionnaire to members of 11 commodity associations that ship using rail in the United States. We selected three commodity classifications representing four commodities that constitute the largest volume of rail shipments—bulk grain, coal, chemicals, and plastics. To identify the individual shippers of each commodity, we obtained the membership lists from each association we contacted. In order to identify a sufficient cross-section of wheat shippers, we selected wheat associations in states with the largest wheat production (by volume) for 1997, using data on wheat production from the U.S. Department of Agriculture’s Economic Research Service. We selected the top three wheat-producing states—Kansas, North Dakota, and Montana—and contacted the grain shipper associations in these states. For the remaining commodity classifications, we contacted the national associations representing the shippers of each commodity. For the nine associations that provided a relatively small number of members, we surveyed all of the members contained on the lists provided. For the two associations that provided a relatively large number of members, we selected a random sample of members for our survey. In instances where a random sample was conducted, the sample can be generalized to the association’s membership. Table 1.1 lists the associations we contacted, the number of members in each association, the number of shippers we selected from each list to represent a statistically valid sample of each association, and the response rate. The selection of 2,149 shippers was reduced by 92 to account for shippers who were members of more than one association, leaving 2,057. In analyzing the questionnaires that were returned, we discovered that a very small percentage of National Corn Growers Association (NCGA) members were rail shippers. 
Of the questionnaires returned by NCGA members, only six indicated they were rail shippers. This does not yield a statistically valid result, and therefore we dropped the NCGA membership from our statistical analysis. As a result, we reduced our sample of 2,057 by the 400 NCGA members we sampled. This results in an adjusted sample size of 1,657. Of the 1,657 grain, coal, chemicals, and plastics shippers we surveyed, 996 (or 60.1 percent) returned our survey. The response rates for grain, coal, and chemicals and plastics shippers were 61 percent, 62 percent, and 55 percent, respectively. Because the National Grain and Feed Association’s (NGFA) membership was large, we sent our survey to a randomly selected sample of NGFA members. Our sample was statistically drawn and weighted so that we could generalize the responses of the NGFA members we surveyed to the entire membership for each question in the survey. The weights apply only to NGFA member responses and not the responses of the other shippers because we surveyed the entire membership of the other associations. We statistically combined the sample with the responses from the other 10 groups and reported weighted estimates for each question in the survey. As a result, the views and opinions of the shippers we surveyed are generalizable to the views and opinions of the 11 groups we surveyed. Not all shippers who responded to our survey were rail shippers, however. Therefore, our analysis considers only the responses of shippers that indicated that they were rail shippers and had used rail in at least one year since 1990. Based on our sampling and analysis techniques, we estimate that 709 shippers shipped grain, coal, chemicals, or plastics by rail in at least one year since 1990. The responses of the 709 rail shippers are used as the core of our statistical analysis. Some of our estimates do not represent the entire population because some shippers did not answer all questions. 
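The weighting approach described above can be sketched briefly. Each respondent from a sampled association (such as NGFA) carries a weight equal to the association’s membership divided by the sample size, while respondents from associations surveyed in their entirety carry a weight of 1. The figures below are illustrative only, not the actual survey counts:

```python
def sampling_weight(population, sample):
    """Number of association members each sampled respondent represents."""
    return population / sample

def weighted_estimate(groups):
    """Estimate the total members with a given attribute across groups.

    `groups` is a list of (population, sample_size, yes_responses)
    tuples; fully surveyed groups have population == sample_size,
    so their respondents carry a weight of 1.
    """
    return sum(sampling_weight(pop, n) * yes for pop, n, yes in groups)

# Illustrative figures: a sampled association of 1,000 members with 250
# surveyed (weight 4.0) plus a fully surveyed group of 300 (weight 1.0).
groups = [(1000, 250, 60), (300, 300, 45)]
print(weighted_estimate(groups))  # 60 * 4.0 + 45 * 1.0 = 285.0
```

This sketch omits nonresponse adjustments and variance estimation, which the full methodology in appendix III would address.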
We have indicated the number of missing responses for each question in appendix III. In all instances where we discuss our survey results, we are referring to the rail shippers belonging to the groups we surveyed. Our statistical analyses of data collected are presented in chapter 4. A detailed technical appendix and our questionnaire results are presented in appendix III. To determine the railroad industry’s views on shippers’ suggestions to improve the rate relief process and competition in the railroad industry and to collect additional data on rate complaint cases, we mailed a questionnaire to each of the nine class I railroads with operations in the United States. The questionnaires asked the railroads to indicate the significance of barriers caused by the standard rate complaint process and their opinions regarding shippers’ suggestions to improve the process and increase competition in the railroad industry. In addition, we asked them for information regarding any rate complaint cases in which they were involved, including the number of complaints and the outcome of each complaint. AAR officials answered questions pertaining to the rate relief process and competition issues on behalf of the railroads. Each individual railroad was asked to answer questions regarding rate complaints pertaining to its company. We did not receive a sufficient number of responses from the railroads regarding the rate complaints to provide any additional data on the cases filed and/or pending since 1990. We therefore relied on our independent analysis of the Board’s case files and did not include the small number of responses from the class I railroads. We summarized the data collected through the railroad questionnaire, and summaries of that analysis are presented in chapter 4 and appendix IV. We performed our work from February 1998 through February 1999 in accordance with generally accepted government auditing standards. 
In commenting on a draft of this report, the Board disagreed with our statistic that 88 class I railroads operated in 1976 and contended that 63 class I railroads, representing 30 independent rail systems, operated in that year. Furthermore, the Board noted that of these 30 rail systems, 9 were subsequently reclassified as smaller (class II or III) railroad systems as the revenue thresholds for class I status were raised, and 2 systems ceased operations as a result of bankruptcy. Thus, officials stated, the actual reduction in the number of class I railroad systems from 1976 to 1998 that resulted from mergers and consolidations was from 19 to 9. Our count of 88 class I railroads was based on information from FRA’s annual safety bulletins. To be consistent with the Board, we changed the number of class I railroads in 1976 to 63 and provided additional information on the 30 systems these class I railroads represented. However, we disagree with the Board’s assertion that the number of 1976 railroad systems should be further reduced to 19. While the number of systems may have declined after 1976, Board statistics show that 30 railroad systems operated in 1976. While there have been some changes to the rate complaint process since the ICC Termination Act, the process continues to be relatively complex and time-consuming. However, within the limits of the law, the Board has taken steps to reduce the complexity of the process, such as adopting simplified guidelines for determining the reasonableness of challenged rates and addressing some of the barriers to filing a complaint. The rate complaint process in larger cases is a complex administrative proceeding involving difficult issues. When a shipper files a rate complaint, the Board must assess many factors related to competition and the disputed rate. The Board first determines whether the railroad dominates the shipper’s transportation market. 
If the Board finds that a railroad is market dominant, the Board then conducts an economic analysis designed to determine the lowest rate that an optimally efficient railroad would need to charge to cover its costs. If the hypothetical railroad’s rate is less than the rate that the dominant railroad charges, the Board may order reparations for past shipments or prescribe rates for future shipments. The Board addresses a shipper’s complaint in an administrative proceeding during which the shipper and the railroad have the opportunity to develop and present evidence supporting their positions. Under the ICC Termination Act, a case may only be initiated upon a shipper’s complaint. A complaint must indicate whether the Board should examine the challenged rate under the Board’s more complex standard guidelines or its simplified guidelines and provide information to enable the Board to decide which guidelines to apply. The Board charges a fee to process the complaint. In February 1999, the Board raised the filing fee for a case brought under the standard guidelines to $54,500—20 percent of the 1999 cost to the agency of adjudicating a rate complaint. The Board also raised the fee for cases brought under the simplified guidelines issued after the ICC Termination Act to $5,400. After the case is initiated, the parties use a variety of tools to obtain information from each other and present evidence supporting their positions under a schedule established by regulation or by the Board. The Board must decide cases under the standard guidelines within 9 months after the close of the administrative record and cases under the simplified guidelines within 6 months. For cases under the standard guidelines, the Board’s goal is to complete the entire process in 16 months. The railroad or shipper may make an administrative appeal to the Board or request judicial review of the Board’s decision after exhausting all administrative options. 
According to shipper representatives, a complaint can cost a shipper from about $500,000 to $3 million. Figure 2.1 illustrates the dates that govern key parts of the process, including discovery, filing of evidence, and the date by which the Board has to make a final decision. The Board’s regulations require the parties to discuss discovery matters within 7 days after a complaint is filed. However, either side may be reluctant to share information, particularly information that may damage its case. Disputes have also arisen when a shipper contended that a railroad’s discovery requests were unfairly burdensome. For example, in a 1998 case, FMC Wyoming Corporation and FMC Corporation v. Union Pacific Railroad Company, the Board limited the railroad’s broad requests for information on possible product- and geographic-based competition. The Board found that through its broad discovery requests, the railroad had improperly attempted to shift the burdens of identifying product and geographic competition to the complaining shipper. As a result, the Board imposed restrictions limiting discovery requests. The Board later removed product and geographic competition from consideration in all cases. Shipping groups told us that obtaining information during the discovery process can be difficult and that railroads make it burdensome and time-consuming for them. Furthermore, shippers are reluctant to challenge railroads during discovery, fearing that an extended schedule will lead to added costs and the continued disruption of daily operations. On the basis of survey responses, we estimate that about 67 percent of the rail shippers indicated that difficulty in getting necessary data from the railroads would preclude them from filing a rate complaint. 
While railroads told us that procedural barriers should not be an obstacle in a rate complaint process, they believe that product and geographic competition tests are important aspects of proving that shippers have alternatives to the dominant railroad. By statute, the Board may assess whether a challenged rate is reasonable only if the railroad dominates the shipper’s transportation market. The requirement to determine market dominance originated with the 1976 act, which broadly defined market dominance as the absence of effective competition from other railroads or other modes of transportation. Underlying this statutory directive was the theory that if the railroad did not dominate the market, competitive pressures would keep rail rates at a reasonable level. The Staggers Rail Act retained this requirement and tied the definition of market dominance to rail rates exceeding a certain revenue-to-variable cost percentage. An analysis of market dominance contains both quantitative and qualitative components. Quantitatively, the Board first determines if the revenue produced by the traffic transported is less than 180 percent of the railroad’s variable cost of providing the service. By statute, a railroad is not considered to dominate the market for traffic that is priced below the 180-percent revenue-to-variable cost level. If the revenue produced by the traffic exceeds the statutory threshold, the Board conducts a qualitative analysis using data the shipper and railroad provide on competition. The shipper must prove that it does not have (1) access to more than one competing railroad or combination of railroads that can transport the same commodity between the same origin and destination points (intramodal competition) or (2) access to other competing modes of transportation, such as trucks or barges, that could transport the same commodity between the same origin and destination points (intermodal competition). 
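The quantitative component of the market-dominance analysis reduces to a simple threshold test. The sketch below uses hypothetical revenue and cost figures; only the 180-percent statutory threshold comes from the text:

```python
THRESHOLD_PCT = 180.0  # statutory revenue-to-variable cost threshold

def revenue_to_variable_cost_pct(revenue, variable_cost):
    """Revenue produced by the traffic as a percentage of the
    railroad's variable cost of providing the service."""
    return 100.0 * revenue / variable_cost

def passes_quantitative_screen(revenue, variable_cost):
    """True if the traffic clears the 180-percent threshold, so the
    Board would proceed to the qualitative competition analysis;
    False means the railroad cannot be found market dominant."""
    return revenue_to_variable_cost_pct(revenue, variable_cost) > THRESHOLD_PCT

# Hypothetical movements:
print(passes_quantitative_screen(2_000_000, 1_000_000))  # 200 percent -> True
print(passes_quantitative_screen(1_500_000, 1_000_000))  # 150 percent -> False
```

A True result only opens the door to the qualitative analysis of intramodal and intermodal competition; it does not by itself establish market dominance.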
Until January 1999, the railroad had to show that the shipper had (1) access to alternative origin or destination points for the same commodity (geographic competition) and (2) access to alternative products that could be substituted for the commodity in question (product competition). Until the 1976 act, the ICC regulated almost all rates and judged their reasonableness by various cost formulas and/or by comparing a challenged rate with an established rate for similar freight movements. Together, the 1976 act and the Staggers Rail Act provided railroads with significant flexibility to set rates in response to market conditions. However, neither the Staggers Rail Act nor the 1976 act prescribed quantitative measures for the ICC to use in determining rate reasonableness. In February 1983, the ICC proposed new Constrained Market Pricing (CMP) guidelines for coal shipped in markets where there was only one railroad. After more than 2-1/2 years of comment, the ICC adopted these final standard guidelines. Since the standard guidelines’ adoption, the ICC and the Board have used the guidelines to evaluate the reasonableness of rates for noncoal shipments. The ICC Termination Act retained the basic statutory framework for rate reasonableness determinations but, as discussed below, directed the Board to complete the development of alternative, simplified guidelines for rate relief cases. The CMP concept relies on railroads’ setting rates in all markets according to their own estimates of demand—just as many firms set their own prices in other industries—but subjects rates on captive traffic to reasonable constraints. The ICC believed that CMP allowed it both to assist railroads in attaining adequate revenues and to protect shippers from monopolistic pricing practices. CMP provides for the following: Revenue adequacy: A captive shipper should not have to pay more than is necessary for the railroad to earn adequate revenues. 
Management efficiency: A captive shipper should not pay more than is necessary for efficient service. Stand-alone cost: The rate should not exceed what a hypothetical efficient competitor would charge for providing comparable service; the shipper should not bear any costs from which it derives no benefit. Phasing of rate increases: Changes in rates should not be so sudden as to cause severe economic dislocations. Under the stand-alone cost approach routinely used in rate cases, a shipper develops a model of a hypothetical, optimally efficient railroad that could serve the complaining shipper. With the aid of a variety of experts, the shipper and railroad develop information regarding the hypothetical railroad’s traffic, operating plan, capital investment requirements, costs, and revenues. If the hypothetical railroad’s rate, including revenues sufficient to cover all costs and a reasonable profit, would be less than the rate the railroad charged the shipper, the Board will conclude that the challenged rate is unreasonable and may order the railroad to pay reparations on past shipments and prescribe rates for future shipments. Conversely, if the hypothetical railroad’s rate would be greater than the challenged rate, the Board will conclude that the rate is reasonable and dismiss the complaint. To reach its final decision, the Board typically employs a multidisciplinary team that includes a civil engineer to review the shipper’s assumptions in building the hypothetical railroad, a transportation analyst to review the shipper’s operational assumptions for the hypothetical railroad, and a financial analyst to prepare discounted cash flows. According to Board officials, the complexity of current rate cases and resource constraints on the Board allow the agency to work on two standard procedures cases concurrently at an average cost to the agency of $270,000 per case for staff directly assigned to a given case. 
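Once the parties have modeled the hypothetical railroad (the expensive step), the stand-alone cost comparison itself is a simple decision rule. A minimal sketch, using hypothetical per-ton rates:

```python
def stand_alone_cost_test(challenged_rate, stand_alone_rate):
    """Compare the challenged rate with the rate a hypothetical,
    optimally efficient railroad would need to charge, including
    revenues sufficient to cover all costs and a reasonable profit.
    Returns the Board's conclusion on the challenged rate."""
    if stand_alone_rate < challenged_rate:
        # Board may order reparations on past shipments and
        # prescribe rates for future shipments.
        return "unreasonable"
    # Challenged rate does not exceed the stand-alone rate;
    # the Board dismisses the complaint.
    return "reasonable"

# Hypothetical per-ton rates:
print(stand_alone_cost_test(25.00, 21.50))  # unreasonable
print(stand_alone_cost_test(25.00, 27.00))  # reasonable
```

The decision rule is trivial; in practice, nearly all of the cost and complexity lies in developing the traffic, operating-plan, and capital-investment evidence that produces the stand-alone rate.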
According to shippers’ associations, developing a model of a hypothetical railroad requires a shipper to hire numerous consultants at significant cost. Of the shippers that expressed an opinion in our survey, an estimated 72 percent might not file a rate complaint because developing the model would be too costly. The ICC Termination Act of 1995 directed the Board to complete an ICC proceeding to develop a simplified alternative to the standard coal-rate guidelines within 1 year of enactment. While the Board adopted simplified guidelines in December 1996, no cases had been filed under the simplified procedures as of January 1999. In addition to the simplified guidelines, the Board has implemented other measures to reduce the barriers that shippers experience when bringing rate complaints. These measures include establishing procedural deadlines for standard cases as well as more limited deadlines for cases under the simplified guidelines and requiring the parties to discuss discovery matters at the beginning of the proceeding. Furthermore, the Board has eliminated the product- and geographic-based competition aspect of its market-dominance determination. The Board has also encouraged increased communication between the railroad and shipper communities so that they may better resolve their differences outside the regulatory process. The Board’s simplified guidelines are intended for complaints in which it would be too costly for the shipper to develop a cost model of a competitive railroad. Since 1986, the Board or its predecessor has attempted to develop simplified guidelines. According to Board officials, efforts to adopt the procedures have often been blocked by the courts. After the Board adopted the simplified guidelines in 1996, AAR challenged the simplified guidelines in federal court, contending that the guidelines did not fulfill the Congress’s directive to establish a simple and expedited method to determine whether rates in small cases were reasonable. 
AAR asserted that the guidelines were “vague and could undermine the revenue adequacy of railroads.” On June 30, 1998, the court found that the challenge to the simplified guidelines was premature because the Board had not yet applied them to invalidate a specific rate. The shippers’ representatives that we contacted expect that AAR will challenge the results of the first case in which the Board decides that a challenged rate is not reasonable under the simplified guidelines. These representatives contend that shippers may be reluctant to file a case under the simplified guidelines because they expect the results to be appealed and they would incur additional legal costs in subsequent litigation. In addition, they contend that if a court ruling invalidated the simplified guidelines, shippers would then have to decide whether to pursue complaints under the more complex standard guidelines. Nonetheless, Board officials noted that the Board would defend the simplified procedures in court and therefore believe that eligible shippers should not be deterred from using them because the procedures have not been judicially affirmed. Board officials expressed confidence that the courts would affirm the simplified procedures. In addition to establishing simplified guidelines, the Board has implemented procedures designed to expedite the rate complaint process. For example, in September 1996, the Board issued a 7-month procedural schedule for complaints under the standard guidelines to ensure that the proceeding would be completed within 16 months. In January 1998, the Board issued expedited procedures for complaints brought under the simplified guidelines. These procedures established a 50-day schedule for the Board’s determination as to whether simplified guidelines should be used in the complaint. Despite these efforts, the Board has either suspended or extended the proceedings for most of the shippers’ complaints as a result of shippers’ and railroads’ requests. 
Furthermore, in an effort to speed up the process and develop realistic time frames, the parties confer with each other at the outset of a rate case to set the ground rules for the proceedings. During the conference, the parties identify and resolve disputes relating to discovery or the evidentiary schedule. Finally, the Board eliminated product- and geographic-based competition from its market-dominance analysis. While the Board had tried to mitigate problems associated with discovery pertaining to product and geographic competition proceedings, it concluded that such actions were not sufficient to address shippers’ concerns. The railroads sought agency reconsideration of that decision. The Board prefers that shippers and railroads settle their differences without regulatory interference and has made various efforts to facilitate such agreements. The ICC Termination Act established the Railroad-Shipper Transportation Advisory Council to advise the Board, the Secretary of Transportation, and congressional oversight committees on rail transportation policy issues of particular interest to small shippers and small railroads. As a result of a proposal by the Council, the Board established a voluntary arbitration process as an alternative to traditional proceedings. The regulations establish a 120-day time frame for arbitration proceedings. Arbitrators’ decisions are binding and judicially enforceable, subject to a limited right of appeal to the Board. Arbitration has not been used as a substitute for a rate complaint. According to officials of the National Grain and Feed Association, arbitration is suitable for service problems, such as the misrouting of cars, but mediation is preferred to resolve rate complaints. As a result of April 1998 hearings, the Board has encouraged further private-sector discussions to address access and competition issues. At the hearings, shippers called for a greater role for smaller railroads, particularly in rural areas. 
In September 1998, the American Short Line and Regional Railroad Association (ASLRRA) and AAR announced an agreement to improve service. The agreement provides for the arbitration of certain issues contested by class I and smaller railroads. However, the Board also mandated that the railroads and shippers establish a formal dialogue to address concerns raised during the April hearings. In response to the Board’s directive, the National Grain and Feed Association and AAR entered into an agreement to address rate and service issues in the grain industry. The agreement provides for confidential, nonbinding mediation of certain rate disputes and mandatory binding arbitration of service disputes. Other mechanisms to encourage discussions between the railroad and shipper communities include the National Grain Car Council and the Joint Grain Logistics Task Force. The ICC Termination Act directed the Board to consult as necessary with the National Grain Car Council, previously established by the ICC as a means for assisting the Board in addressing problems arising in transporting grain by rail. According to a Board official, the National Grain Car Council generally focuses on addressing issues for the grain industry as a whole, and not necessarily for individual shippers. The Board also established the Task Force in cooperation with the U.S. Department of Agriculture. The Task Force will address shippers’ and railroads’ information needs concerning recurring seasonal problems that affect the transportation of grain and grain products. In commenting on a draft of this report, Board officials stated that we should provide more information on the complex task the Board faces in balancing competing policy objectives set forth under statute and the strides the Board has undertaken to streamline and simplify the rate complaint process. 
Officials stated that the standard complaint procedures that the Board currently uses for large cases resulted from many years of debate and judicial interpretations. These standard procedures address the concerns that the Board must consider under the statute as it seeks to balance two competing goals: considering the needs of the railroad industry for adequate revenues while simultaneously ensuring that the industry does not exert an unfair advantage over captive shippers. Board officials noted that the agency has streamlined the standard process for handling large cases (such as modifying the market-dominance rule). However, they stated that the complexity of the standard procedures for larger rate cases is largely unavoidable, given the complexity of the underlying issues to be resolved and the need to balance competing policy objectives laid out by the Congress. Thus, officials contend that substantially reducing the complexity, time, and expense involved in handling these rate complaints would require legislative action. Officials noted that the Congress could choose to adopt even simpler maximum rate formulas for certain traffic. However, officials continued, a substantial retreat from differential pricing principles could have a noticeable effect on the railroad industry’s financial health and the type and scope of services provided and thus could affect the shippers that rely upon that industry to meet their transportation needs. Similarly, officials noted that the suggestions for increasing rail competition, such as through open access, would require substantial changes to the statute, could alter the shape and condition of the rail system, and could limit the ability of the nation’s rail system to meet the needs of some of the shippers that use the current system. In response, we recognize that the Board faces competing policy objectives as a result of existing laws. 
These competing policies come to the forefront not just with rail rate complaints but with many other Board proceedings, such as actions to approve railroad mergers and consolidations. Throughout this report, we repeatedly cite the competing policies, embodied in statute, that the Board must employ in making its rail-related decisions. However, we have modified the report to reflect the Board’s views that important aspects of the rate-relief process or the competitive structure of the railroad industry can only be changed with the support and approval of the Congress. Board officials also stated that the report does not adequately address the simplified procedures. Officials stated that the new procedures were designed to provide a shorter, simpler, and less expensive means to address cases in which the more complex standard procedures are not cost effective. Board officials stated that the report, as well as our survey, generally focused on the standard rate complaint process—a process that is inherently more complex and time-consuming. Board officials stated that the report does not adequately reflect the value that the new, simplified procedures could have for the shippers that will use them. Although shippers have complained that the simplified procedures are also complex, Board officials stated that the procedures are user-friendly and based on readily available and inexpensive data. Because the Board would defend the simplified procedures against railroad challenges in court, Board officials stated that eligible shippers should not be deterred from using them simply because the procedures have not yet been judicially affirmed. Officials expressed confidence that the courts would affirm the Board’s simplified procedures, when they are applied. In response, we note that since the Board issued its simplified procedures in December 1996, no shipper has asked the Board to review a rate complaint under them. 
As this chapter notes, shippers and their associations are reluctant to use the simplified procedures because they believe that AAR will challenge the first rate complaint filed under the new procedures. Shipper associations have noted that Board statements that it intends to defend the simplified guidelines offer little encouragement for any shipper to be the first to file a complaint under these new procedures. While the new procedures offer shippers the prospect of resolving their complaints faster, the prospect of future litigation provides little incentive for shippers to initiate such a complaint. Accordingly, we believe that it is still too early to declare the simplified procedures a success. Very few shippers served by class I railroads have complained to the Board or the ICC about the railroads’ rates. After filing 130 rate complaints from 1980 through 1989, shippers filed only 24 rate complaints from 1990 through 1998. Furthermore, in the 41 complaints we reviewed that were filed or pending since the beginning of 1990, the shipper and the railroad were able to settle their differences on 18 complaints before completing the formal complaint process. In addition, the challenged rates were found to be unreasonable in two cases, seven complaints were dismissed in favor of the railroad, and five were dismissed for other reasons. The Board is still examining the remaining nine complaints. Shippers of coal, farm products, and chemicals filed the greatest number of rate complaints. The rate complaint process was quite long for some shippers—time for resolution ranged from a few months to about 16 years. Despite the fact that thousands of shippers transport their products by rail, very few have filed complaints about rates to either the ICC or the Board over the past 20 years. From 1980 through 1989, the ICC received 130 rate complaints.
The number of rate complaints has declined almost every year since 1980, and as figure 3.1 shows, two shippers filed rate complaints in 1998. According to a Board official, the decline in the number of rate complaints filed may be attributed to the growth in the number of private transportation contracts between railroads and shippers, as well as a significant general decline in railroad rates over the past 10 to 15 years. However, some of the rail shippers that responded to our survey indicated that the complexity of the rate complaint process also had influenced their decisions not to file rate complaints. As a result of the Staggers Rail Act of 1980, railroads could establish rates through contracts with individual shippers rather than only through tariffs—predetermined rate schedules for particular routes—filed with the ICC. Contracts reflect negotiated agreements for rates and service levels tailored to the shippers’ needs. A 1988 AAR survey found that 60 percent of all rail traffic was subject to private transportation contracts between the shippers and the railroads. By 1997, AAR found that the amount of rail traffic subject to a contract had increased to 70 percent. In addition, according to the Board, the average inflation-adjusted class I railroad rate steadily declined 46 percent from 1982 through 1996, perhaps also leading to a decline in the number of rate complaints. Of the 709 shippers that responded to our survey, 25 percent indicated that they found their freight rates reasonable and therefore found no reason to file a complaint. However, the remaining 75 percent of the rail shippers that responded to our survey indicated that administrative and legal barriers in the rate complaint process may have precluded them from filing a complaint. Since 1990, 41 complaints have either been filed with or are pending before the ICC/Board.
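The 46 percent cumulative decline in inflation-adjusted rates can be translated into an approximate annual rate of decline. The following back-of-the-envelope sketch assumes a constant annual rate of decline over the 14-year period; the Board reported only the cumulative figure, so the annual rate is our illustration:

```python
# Back-of-the-envelope: the annual rate of decline implied by a 46 percent
# cumulative drop in the average inflation-adjusted class I rate over the
# 14 years from 1982 through 1996, assuming a constant annual rate.
cumulative_decline = 0.46
years = 1996 - 1982  # 14 years

# Solve (1 - r) ** years = 1 - cumulative_decline for r.
annual_decline = 1 - (1 - cumulative_decline) ** (1 / years)
print(f"Implied average annual decline: {annual_decline:.1%}")  # about 4.3%
```

A steady decline of roughly 4 percent per year in real rates is consistent with the Board official’s view that falling rates reduced shippers’ incentive to file complaints.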
Shippers of bulk commodities such as coal, grain, and chemicals are highly dependent on rail for their transportation needs and filed the most rate complaints. Coal, grain, and chemical shipments constituted about 60 percent of total traffic on class I railroads in 1997, and accounted for 76 percent (31) of the complaints either pending or filed since January 1, 1990. Coal shippers alone filed 21 of the 41 complaints, as shown in figure 3.2. The six chemical and four grain complaints represented 24 percent of the complaints either pending or filed since 1990. Commodities other than coal, grain, and chemicals identified in these complaints include corn syrup, sugar, pulpwood and woodchips, electric transformers, spent nuclear fuel, railroad cars, and perlite rock. Appendix II contains a list of the commodities associated with each complaint. Board officials believe that the number of rate complaints from coal shippers will increase partly because many long-term private transportation contracts between railroads and utility companies are expiring and there may be disputes regarding rates in the absence of contracts. According to the Board, coal shippers have the most incentive for bringing a rate complaint because of the large dollar amounts potentially in dispute. For example, in 1998, the Board awarded the Arizona Public Service Company and PacifiCorp over $23 million plus interest in their joint complaint against the Atchison, Topeka, and Santa Fe Railway Company. In addition, the stand-alone cost model is relatively less complicated to apply to coal shipments than it is to other commodities, such as chemicals. Railroads usually transport coal shipments between few origins and destinations—mainly between the coal mine and the utility company’s generating plant—over a limited segment of a railroad’s system. Chemical shippers, on the other hand, typically send smaller shipments to many destinations.
However, officials from the Western Coal Traffic League stressed that bringing a rate complaint to the Board is the last resort for a utility company because, in addition to the extremely high cost of bringing a rate complaint, the effort distracts from and disrupts the company’s everyday operations. The resolution of rate complaint cases has often taken a number of years under the standard guidelines. In some instances, complaints were prolonged because either the railroad or the shipper appealed an ICC/Board decision to a federal court, which subsequently remanded the complaint to the agency for another review. Since 1990, the ICC/Board has completed 32 rate complaint cases. As table 3.1 shows, some complaints were resolved in a few months, while others took more than 16 years. According to the Board, some cases were lengthy because the standards were not in place when the cases were filed and/or because of extensive litigation. The time required for resolving a complaint varied by commodity. Three complaints filed by grain shippers, which were combined into a single proceeding, McCarty Farms, Inc. et al. v. Burlington Northern, Inc., took about 16 years to resolve. According to Board officials, this is principally because the complaints were filed before the ICC had developed rate standards and because the parties challenged various ICC and Board decisions in court. In 1980, about 10,000 Montana farmers and owners of grain elevators (the McCarty Farms Group) filed a class action lawsuit against the railroad in federal district court, challenging Burlington Northern’s rates on wheat shipped from Montana to Oregon and Washington State. After numerous reviews by the agency and the courts, the Board found the rates not to be unreasonable in August 1997 and discontinued the proceedings. In October 1997, the McCarty Farms Group appealed the Board’s decision to the U.S. Court of Appeals for the District of Columbia Circuit.
In October 1998, the court upheld the Board’s decision that the rates were not unreasonable. (See app. I for a more detailed description of the McCarty Farms case.) Coal and chemical complaints have generally been resolved more quickly; reviews have averaged about 5 years and 2 years, respectively. However, the ICC/Board dismissed some of these complaints in less than 12 months. For example, the Board dismissed one coal complaint—Omaha Public Power District v. Union Pacific Railroad Company—after 4 months. Two of the nine pending cases—all concerning disputes over the same traffic—have been active for over 16 years. The Department of Energy and the Department of Defense filed complaints against various railroads in 1978 and 1981 regarding the transportation of spent nuclear fuel. The ICC found that the railroads’ practice of requiring special trains to handle this material was unreasonable. On appeal, the court held that the agency must rule on the rate levels instead. The Board told the parties that it will not resolve these cases until it receives information on their progress in settling the dispute. If the information provided shows that there is little or no prospect that the parties will resolve these complaints, the Board will move the case forward. (See app. II for a complete list of the complaints pending with the Board.) Of the complaints we reviewed, those filed after January 1, 1990, were generally completed more quickly. Many complaints filed or pending since January 1990 did not complete the entire rate complaint process; 18 of the 41 cases we reviewed ended before the process was finished. In these cases, the shippers reached agreements with the railroads and requested that the ICC/Board dismiss the complaint. The ICC/Board dismissed many complaints in the early phases of the rate complaint process without rendering a decision regarding whether the rates were reasonable.
Ten of the 41 complaints reached the rate-reasonableness phase of the process. In two cases, the rates were found to be unreasonable, and in six cases, they were found to be reasonable. While the ICC or the Board considered rate reasonableness in the remaining two cases, the complaints were not ultimately resolved on this basis but were dismissed at the request of the shippers. Often, a shipper files a rate complaint with the ICC/Board after the shipper and railroad have tried to negotiate terms for rail rates and service. According to Western Coal Traffic League officials, shippers initially use the leverage of possible or actual outside competition and negotiations to obtain favorable rates. If this does not work, the shipper’s last opportunity to try to obtain lower rates is to file a complaint. The ICC/Board dismissed 18 of the 41 complaints because the shipper and railroad reached a settlement. (See fig. 3.3.) In some instances, the shippers requested that the Board dismiss the complaint because they had resolved their differences and entered into a transportation contract with the railroad. Board officials stated that in such instances they view a dismissal as a success because the parties were able to settle their differences. According to a Board official, however, the shippers are not required to provide details of any agreement or settlement they may reach with the railroad in requesting dismissal. Therefore, we were only able to determine that five of these dismissals were most likely due to private transportation contracts. (See table II.1 in app. II for a full list of complaints that the ICC/Board dismissed at the shipper’s request.) In two cases, the Board found the railroads’ rates to be unreasonable and awarded the shippers reparations. The Board is still examining nine rate cases.
In other cases, the Board found that relief was not appropriate under the law. The ICC/Board dismissed seven complaints in favor of the railroad because the railroad was not market dominant or because the rates were reasonable. The ICC/Board dismissed five complaints primarily because the rate was either subject to a contract or the remedy that the shipper wanted was not available. (See tables II.2 through II.5 in app. II for a complete list of these complaints.) In one case in which the Board found the rates unreasonable, Arizona Public Service Company and PacifiCorp (jointly, Arizona) had filed a complaint with the ICC challenging the Atchison, Topeka, and Santa Fe’s rates for transporting coal from New Mexico to Arizona for electric power generation. The railroad asserted that it faced a hybrid form of product and geographic competition because the electric utility—Arizona—could produce power or purchase power elsewhere on the nation’s electric power grid. However, the Board disagreed, citing significant costs and barriers to Arizona’s obtaining substitute power, and found that the revenues produced by the railroad’s rates exceeded the revenues that would be required by Arizona’s hypothetical stand-alone railroad. The Board awarded the utility more than $23 million plus interest and prescribed future rates. In the second case, West Texas Utilities Company filed a complaint with the ICC challenging Burlington Northern’s rates for transporting coal from Wyoming to Texas. In this case, the railroad also alleged that it faced a hybrid form of product and geographic competition because the electric utility could either produce or purchase power elsewhere. The Board disagreed and found that Burlington Northern dominated the market with respect to the coal shipments at issue. The Board found the rates unreasonable and awarded West Texas more than $11 million plus interest.
The Board is currently reviewing nine rate complaints, including three filed by the Department of Energy and the Department of Defense against the numerous railroads that transport spent nuclear fuel. While these complaints involve the same traffic, the Board has not officially consolidated these three complaints into a single proceeding. The Board is also considering three complaints filed by electric utility companies for the transport of coal, one complaint regarding the transport of grain, and two chemicals complaints. (See table II.5 in app. II for a complete list of these complaints.) Many of the complaints did not go through the entire rate complaint process. These complaints usually ended after reaching the discovery phase (where the railroad and shipper disclose information) or the evidentiary phase (where the railroad and shipper file evidence with the Board). As table 3.2 demonstrates, the ICC/Board dismissed 14 complaints during or prior to the evidentiary phase and 2 complaints during the evidentiary phase. Seventeen of the 41 complaints were active after the ICC/Board closed the evidentiary record. Of these complaints, the ICC/Board dismissed seven after it had examined market dominance and eight during or after a review of the reasonableness of the rates. The ICC/Board found that the rates in six of these eight cases were reasonable; the other two complaints were ultimately dismissed at the shippers’ requests. In two cases, the Board found rates to be unreasonable and awarded reparations to Arizona Public Service and PacifiCorp (jointly, Arizona) and West Texas Utilities. In commenting on a draft of this report, the Board indicated that additional factors beyond those cited in this chapter can affect the time required to complete a rate complaint. 
The Board noted that some cases were protracted for two important reasons: The rate complaints were filed before the Board’s predecessor had developed rate standards and/or the parties took various Board decisions to court. Delays in deciding rate cases in the 1980s, for example, were a transitional problem resulting from the process of developing and interpreting rate reasonableness standards. The Board noted that cases initiated in the 1990s were handled significantly faster. In addition, the Board concluded that our approach of counting consolidated rate complaints separately overstated the average time required to resolve rate complaints and suggested that we also include an analysis of what the average time would be to resolve the cases separately. We agree with the Board’s comments and added information in this chapter to reflect the other factors that can affect the time required to complete a rate complaint case. We have also added information on the average time to complete a rate complaint on the basis of the separate cases. In response to our survey of shippers and discussions with commodity shipping associations, shippers cited several reasons for not using the standard rate complaint process but particularly emphasized its time, cost, and complexity. Shippers suggested methods to simplify the filing process and thereby reduce the time and costs involved. Furthermore, they indicated that increasing competition in the railroad industry would lower freight rates and diminish the need to file rate complaints. In response to our survey, the nine class I railroads stated that maintaining the current regulatory environment is crucial to retaining and improving the financial stability of the railroad industry. They stated that the current process for deciding rate complaints, while not perfect, is an appropriate system in the current regulatory environment. 
Furthermore, the railroads contend that adequate competition currently exists and that their ability to determine freight rates in a competitive market is key to the railroad industry’s financial stability. On the basis of our survey responses, we estimate that 25 percent of the rail shippers consider their rates to be reasonable. Our survey responses suggest that the remaining 75 percent believed that their rates were unreasonable and that barriers kept them from filing a complaint under the standard procedures. These shippers found the rate complaint process to be time-consuming, costly, and complex. They cited the legal costs associated with filing a complaint, the complexity of the process, the time involved in seeking relief, and the overall costs associated with developing their cases as the most significant barriers to seeking relief with the Board. In our interviews, shipping association officials highlighted several potential barriers that could keep shippers from filing complaints under the standard procedures. On the basis of our questionnaire responses, we estimate that 178 shippers (25 percent) of the 709 shippers that use rail consider their freight shipping rates to be reasonable and therefore had no reason to file a complaint. We asked the remaining 531 shippers (75 percent) to indicate whether the barriers that the shipping associations had highlighted were a reason for not filing a rate complaint. Table 4.1 shows the barriers we presented to these shippers and the percent of the 531 shippers that found the barriers to be a major or moderate reason for not filing a rate complaint. While rail shippers found most of the barriers cited in our survey to be significant, they found some to be more significant than others. Generally, shippers found cost, complexity, and time to be significant barriers that kept them from filing standard rate complaints.
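The survey estimates above are internally consistent, which a quick arithmetic check confirms. The sketch below uses only the figures already cited in this chapter:

```python
# Consistency check on the survey estimates above: of 709 responding rail
# shippers, an estimated 178 considered their rates reasonable, and the
# remaining 531 did not. (All counts come from the report itself.)
respondents = 709
reasonable = 178

remaining = respondents - reasonable          # 531, as reported
reasonable_share = reasonable / respondents   # about 25 percent
remaining_share = remaining / respondents     # about 75 percent
print(remaining, f"{reasonable_share:.0%}", f"{remaining_share:.0%}")
```

The 178 and 531 counts round to the 25 and 75 percent figures used throughout the chapter.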
Seven out of 10 shippers responding to our survey cited the following reasons as important barriers to filing a complaint: the legal costs for filing a complaint, the costs associated with developing a stand-alone cost model, the length of the rate complaint process, and the overall complexity of the process. In addition, 6 out of 10 shippers responding indicated that high consulting costs, the difficulties of the discovery process, the high level of the filing fee, or fear that railroads might retaliate against them were important reasons for not filing rate complaints under the standard process. The barriers most significant to shippers as a whole are not necessarily the most significant barriers for shippers in each commodity group. Shippers of specific commodities (grain, coal, chemicals, and plastics) have unique characteristics that may have affected their responses. For example, coal shippers make very large, routine shipments throughout the course of the year and often have few alternatives to using rail; thus, rail shipping costs are a significant portion of their total shipping costs. Coal shippers believed that the most significant barrier is the time involved in filing a complaint. This is because coal shippers would have to continue to pay the disputed rate over the length of the rate complaint process. However, they can obtain reparations plus interest if the rate is found to be unreasonable. Grain shippers cited legal costs as the most significant barrier. Grain shippers make the highest volume of their shipments during the harvest season and generally have more transportation options available to them. As a result, they spend relatively less on rail shipping, and therefore the costs of a complaint may offset or exceed the potential benefits of filing. The shippers’ concerns identified in our survey are similar to those we found in 1987, when we examined the rate complaint process and contacted shippers that had filed complaints with the ICC. 
At that time, we found that shippers were generally dissatisfied with the rate complaint process. Shippers were concerned about the complexity of the stand-alone cost model, the costs and time involved in adjudicating a rate complaint, the lack of clear criteria for determining rate reasonableness, and fear that railroads would most likely win any rate complaint case. Of the shippers we contacted in 1987 that used the process, 53 percent indicated that they would probably use the complaint process again if they believed their rates to be unreasonable. This compares to 68 percent of rail shippers in our 1998 survey who responded that they would probably or definitely use the rate complaint process again. This could indicate that, while some shippers are dissatisfied with the rate relief process, they may recognize it as their only alternative for seeking relief from unreasonable rates. Board officials provided an alternative interpretation of the results. They stated that shippers may recognize that the process has been improved and that it provides a clear basis for leverage in negotiating private contracts or obtaining relief from unreasonable rates. Railroads disagree with shippers about the extent to which the rate complaint process is burdensome. In responding for the class I railroads, AAR stated that it understood that the rate complaint process can be difficult for shippers and noted that barriers should not be an obstacle for seeking rate relief. AAR and its member railroads stated that while they believed that the process was generally suitable for determining rate reasonableness, they would not object to the Board adopting more efficient procedures for rate complaint cases. The railroads, however, want the standard for determining market dominance to remain. 
This is generally the same position that the railroads held in 1987, when AAR officials stated that the railroads were generally satisfied with the standard rate complaint process and viewed the criteria for jurisdictional threshold, market dominance, and rate reasonableness as clear. In our discussions at that time, railroad officials said they found the process suitable for adjudicating larger rate complaints. However, representatives from five of the eight railroads we contacted stated that contract negotiations, rather than the potential for being involved in litigation over published rates, were the preferred method of setting rates. One railroad official noted that it was not in the railroad’s interest to charge a rate that a shipper would challenge and that contract rates were preferred for their predictability and stability. The Board has conducted hearings to identify barriers to the rate relief process. On April 17, 1998, the Board instituted a forum for shippers and railroads to voice their opinions on a variety of rail access and competition issues. Board officials generally support any action that reduces barriers while maintaining the integrity of the process. The Board asked railroads and shipper groups to work together to find solutions outside of the regulatory framework—an environment that the Board contends is a better setting for resolving private-sector disputes. In addition, the Board eliminated product and geographic competition from its market-dominance analysis. The railroads have filed a petition requesting that the Board reconsider its decision. In our discussions with shippers’ associations and AAR officials, we identified potential options for addressing shippers’ concerns about the rate complaint process. In our surveys, we asked shippers and railroads to rate methods that would improve the rate complaint process and address shippers’ concerns.
In response to options for improving the rate complaint process, shippers supported methods to simplify and accelerate the process and reduce the costs they incur when filing rate complaints. The class I railroads contend that the current rate complaint process is well suited to determining rate reasonableness in larger cases and therefore see no need for substantive changes to the standard process. Board officials stated that in trying to improve the rate complaint process, they must balance the needs of shippers seeking relief from unreasonable rates with the railroads’ need for adequate revenues to continue operating. We estimate that nearly 64 percent of the 709 rail shippers believed that the standard rate complaint process should be changed to a very great or great extent and over 86 percent believed that the process should be changed to at least a moderate extent. Our survey asked the 709 rail shippers to identify what changes should be made to the rate complaint process to make it more useful to them. Table 4.2 lists the options we presented to shippers for improving the rate complaint process. The percentages, for all shippers and by commodity group, indicate the proportion of shippers that indicated that the options were either extremely or very important. Table 4.2 shows that reducing the time involved in filing and deciding a case is generally most important to shippers. In general, the options most favored by shippers relate directly to reducing the time involved in case decisions, reducing the costs involved in filing a complaint, and eliminating product and geographic competition criteria. According to Board officials, some of the options presented in table 4.2 are beyond the Board’s ability to implement. We estimate that about 76 percent of rail shippers believed that shortening the time involved in filing and completing a complaint would improve the rate complaint process.
Although a shipper would obtain reparations, with interest, for rates paid during the complaint process if the rate is eventually found to be unreasonable, the shipper must continue to pay the higher rate until the case is resolved. The railroads recognize that the standard rate complaint process takes time and agree that the Board should pursue changes to shorten the time involved and reduce the barriers that shippers face. The railroads did not make any specific recommendations to address timeliness. They said that they stand ready to work with shippers and the Board to reduce the barriers inherent in the process but seek to maintain the process’s effectiveness. While the railroads recognize shippers’ concerns, they contend that the Board’s process is well suited to determining rate reasonableness. The shippers’ concerns about the timeliness of the rate complaint process may partly be due to the ICC’s and the Board’s experience with rate complaint cases filed or pending since 1990. As discussed earlier, the elapsed time for such cases ranged from a few months to about 16 years. This historical record may have affected shippers’ calls for faster rate complaint decisions. The Board has established an expedited procedural schedule designed to ensure that cases under the standard guidelines are completed within 16 months. As of January 1999, no cases had been decided under the expedited procedural schedule. The current filing fee for complaints filed under the standard guidelines is $54,500. The fee is substantially less for cases under the simplified guidelines ($5,400). We estimate that about 63 percent of the rail shippers believe that reducing or eliminating the Board’s filing fee is an important step toward improving the rate complaint process. While a majority of grain, chemical, and plastics shippers concurred with this suggestion, we estimate that only 42 percent of coal shippers cited the need to eliminate the fee.
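The scale of the filing fee relative to the amounts at stake helps explain the commodity split in these survey responses. The sketch below compares the fee levels cited above with the $23 million coal award discussed in chapter 3; the ratios are our illustration, not figures the Board reports:

```python
# Rough scale comparison using figures cited in this report: the $54,500
# standard filing fee, the $5,400 simplified-guidelines fee, and the
# $23 million reparations awarded in the Arizona/PacifiCorp coal case.
standard_fee = 54_500
simplified_fee = 5_400
large_coal_award = 23_000_000

fee_ratio = standard_fee / simplified_fee      # standard fee vs. simplified fee
award_share = standard_fee / large_coal_award  # fee as a share of a large award
print(f"Standard fee is about {fee_ratio:.0f}x the simplified fee")
print(f"Fee as a share of a $23 million award: {award_share:.2%}")
```

For a large coal shipper the standard fee is a fraction of 1 percent of the potential reparations, while for a small grain shipper the same $54,500 can rival or exceed the damages at stake, consistent with the 42 versus 63 percent split noted above.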
The $54,500 filing fee may not be a determining factor in large coal complaint cases where the reparations sought are measured in millions of dollars. However, it can present a barrier to small grain shippers whose rail shipping costs constitute a smaller portion of their total shipping costs and where the damages sought are much lower. The class I railroads did not express an opinion regarding the Board’s filing fee. In a 1996 decision, the Board noted that it was sympathetic to shippers’ concerns that increasing the filing fee could impede the filing of complaints. To lessen the burden of the fees, the Board tentatively set the complaint filing fee at 10 percent of the full cost of adjudicating a complaint and proposed increasing the fee 10 percent annually until the fully allocated cost level was achieved. In 1998, the Board chose to increase the fee in proportion to increases in its costs rather than by the annual 10-percent adjustment. A report by DOT’s Office of Inspector General recommended that the Board either annually increase the fee to recover the full cost of complaint adjudication or convene a new proceeding to determine whether such increases are feasible and warranted. In February 1999, the Board updated its fee schedule and adjusted its complaint filing fees. As a result, the fee for filing a standard rate complaint increased from $27,000 to $54,500. The new fee represents 20 percent of the Board’s fully allocated costs. According to our survey responses, an estimated 62 percent of the rail shippers believed that eliminating the product and geographic competition criteria would improve the rate complaint process. During discovery, railroads request operating information from shippers in order to prove the existence of product and geographic competition. However, shippers stated that railroads use these discovery requests to delay the process. In a recent rate complaint case, FMC Corporation and FMC Wyoming Corporation v.
Union Pacific Railroad Co., the Board agreed with the shippers’ contention. In addition, in December 1998, the Board issued a decision revising its procedures for market-dominance determinations and eliminated the consideration of product and geographic competition. However, the railroads filed a petition for reconsideration of that decision. In comments submitted for the Board’s proceeding on product and geographic competition, the National Industrial Transportation League noted that the consideration of product and geographic competition unduly complicated the Board’s market-dominance determination. The League assembled and analyzed the discovery requests that railroads served on the complaining shippers in seven rate cases. On the basis of its analysis, the League found that railroads submitted hundreds of questions to the shippers that required hundreds of hours of effort by lawyers, consultants, and staff. In addition, many of the questions asked for multiple pieces of information and thus complicated the shippers’ efforts to answer them. The League contended that market-dominance discovery of this magnitude constituted a formidable barrier for the shippers. The railroads contended that product and geographic competition tests are crucial to showing, in a successful challenge to a complaint, that sufficient alternatives exist for the shipper. The railroads acknowledged that the product and geographic competition criteria can make it unduly burdensome to litigate rate reasonableness cases before the Board. They contended that a core premise of the economics of competition is that market power can be constrained by a range of competitive forces, including product and geographic competition. The railroads agreed, however, that procedural obstacles and the cost of litigation should not be barriers to obtaining regulatory relief when such relief is warranted.
After considering both the shippers’ and railroads’ perspectives, the Board eliminated product and geographic competition criteria from market-dominance determinations. The Board is now in the process of reviewing the railroads’ petition for reconsideration. We estimate that from 52 to 58 percent of the rail shippers we surveyed support certain types of arbitration as an alternative to bringing a complaint before the Board. Our estimates show that 58 percent of the rail shippers supported mandatory binding arbitration; 52 percent favored voluntary arbitration; and 16 percent supported voluntary arbitration with nonbinding results. Shippers’ support for arbitration as an alternative to the rate complaint process differed not only by the type of arbitration proposed but also by the type of commodity shipped. Grain shippers generally favored arbitration more than coal, chemicals, and plastics shippers. In October 1998, the National Grain and Feed Association (NGFA) signed an agreement with the class I railroads to submit certain disputes to mandatory, binding arbitration. While rate disputes are excluded from the mandatory aspect of the agreement, they can be mediated on a voluntary basis. AAR officials point to the NGFA arbitration agreement as evidence that the railroads are willing to work with shippers to address their concerns. Furthermore, the officials added, the agreement is a solution that addresses shippers’ concerns outside of the regulatory framework—at the Board’s request. As a result of April 1998 hearings, the Board asked shippers and railroads to work together to recommend solutions that would improve the existing process. The railroads cite the arbitration agreement as evidence that they are seeking to address shippers’ concerns. Board officials generally support any settlement agreements between shippers and railroads that reduce the burdens that shippers face resulting from the rate complaint process.
It is too early to tell whether the use of mediation to resolve rate disputes between shippers and their railroads will have a positive impact. As of November 1998, no rate disputes had been mediated under the process. Furthermore, the mediation agreement extends only to NGFA members, not to nonmembers and shippers of other commodities. The mediation process, if successful, could reduce the number of complaints. Board officials note that railroads and shippers currently have the option of agreeing to voluntary mediation or voluntary arbitration of a dispute. Thus, no regulatory or legislative action is required to use such alternative dispute resolution procedures. In addition, Board officials do not believe the Board has the authority to require mandatory (nonconsensual) arbitration. Some shippers suggested that the Board’s current statutory jurisdictional threshold for rate reasonableness complaints is too high and that a rate that is below 180 percent of a railroad’s variable cost of transporting the traffic can still be unreasonable. We estimate that about 53 percent of the rail shippers believed that lowering the statutory threshold for determining Board jurisdiction would improve the rate complaint process. Of those who suggested a different jurisdictional threshold, the average suggested threshold was 118 percent, while the most common suggestion was 100 percent. However, because of the large percentage of shippers that did not answer the question, few conclusions about the jurisdictional threshold can be drawn. AAR stated that the current revenue-to-variable-cost ratio of 180 percent is an effective and appropriate level for a jurisdictional threshold. One AAR official stated that 180 percent, on average, was not significantly greater than the breakeven point for all railroad traffic. AAR cited past Board decisions showing that rates greater than 180 percent of variable costs were found to be reasonable.
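The statutory screen at issue here can be sketched as a simple revenue-to-variable-cost check. The following is an illustrative sketch only, with hypothetical function names and dollar figures of our own; it does not reproduce the Board’s costing methodology.

```python
# Illustrative sketch of the statutory jurisdictional screen for rate
# complaints: the Board may review a rate's reasonableness only if the
# revenue-to-variable-cost (R/VC) ratio is at least 180 percent.
# All figures below are hypothetical.

JURISDICTIONAL_THRESHOLD = 1.80  # 180 percent of variable cost

def rvc_ratio(revenue: float, variable_cost: float) -> float:
    """Revenue-to-variable-cost ratio for a movement."""
    return revenue / variable_cost

def board_has_jurisdiction(revenue: float, variable_cost: float) -> bool:
    """True if the rate is at or above 180 percent of variable cost."""
    return rvc_ratio(revenue, variable_cost) >= JURISDICTIONAL_THRESHOLD

# A hypothetical movement: $2,000 per car in revenue against $1,000 in
# variable cost yields an R/VC ratio of 200 percent -- reviewable.
print(board_has_jurisdiction(2000.0, 1000.0))   # True
# At $1,500 against $1,000, the ratio is 150 percent -- below the
# threshold, so the Board lacks jurisdiction over the rate even if the
# shipper considers it unreasonable.
print(board_has_jurisdiction(1500.0, 1000.0))   # False
```

Clearing this screen does not make a rate unreasonable; it only determines whether the Board may examine it, which is why shippers’ proposals to lower the threshold and AAR’s defense of the 180-percent level are both framed around jurisdiction rather than the ultimate reasonableness finding.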
AAR is further concerned about some shippers’ proposals suggesting that the Board should consider rates in excess of 180 percent to be a demonstration of market dominance. The railroads contend that they would not be able to make the capital improvements necessary to meet current and future demand and earn a fair return on that investment without being able to set differential prices and, when necessary, charge rates in excess of 180 percent. Board officials told us that because the jurisdictional threshold is legislatively set, the Board has not initiated a proceeding to consider whether it should be reduced to a level below 180 percent. Board officials also stated that legislative action to reduce the jurisdictional threshold could lower rates that the Board believes are at reasonable levels as a result of competition, involve the Board in reviewing rates that are not unreasonable, and result in the Board’s needing more staff and resources to accommodate a potential increase in rate complaint cases. While shipper groups suggest that improving the rate complaint process would reduce the barriers they face when filing rate complaints, they contend that increased competition in the railroad industry would do more—it would lower rates and diminish the need for the process itself. Shippers and railroads disagree, however, on the need to increase competition. Shipper groups contend that increasing competition would enhance the viability of their businesses, lower the freight rates they currently pay, and diminish the overall need for the rate complaint process. The class I railroads contend that competition is greater than it has ever been and note that current rates are 46 percent lower than they were in 1982. According to the railroads, the deregulation of the industry has increased their revenue, decreased rates, and improved the overall financial viability of the industry—all critical goals of the Staggers Rail Act. 
Some shippers want the ability to choose between railroads when there is an option and contend that in certain situations, they are unable to do so. In our discussions with shipper groups, the following five methods emerged as suggestions to increase competition in the railroad industry:

- require the Board to make a track segment owned by one railroad available to competing railroads for a fee (grant trackage rights);
- increase shippers’ access to smaller regional or shortline railroads;
- require the Board to grant reciprocal switching agreements (making a railroad transport the cars of a competing railroad for a fee);
- reverse the Board’s “bottleneck decision”; and
- allow shippers to dictate the routing of their shipments, including interchange points (commonly called open routing).

Of the rail shippers that responded to our survey, an average of 436 expressed opinions on different aspects of the rate complaint process, and an average of 567 expressed opinions on increasing competition in the railroad industry. Table 4.3 shows the options we presented to shippers and their responses. The percentages reflect the number of shippers that responded that the option was extremely to very important. We did not analyze the implications of implementing these options or the effects these options would have on shippers or the railroad industry. Most shippers support four of the five options for increasing competition; shippers gave less support to the option of allowing them to specify the routing of their shipments. Of the 709 rail shippers we surveyed, an estimated 81 percent said that the Board should grant trackage rights to competing railroads to improve competition in the railroad industry. Furthermore, our estimates show that from 71 to 75 percent favored granting reciprocal switching agreements, increasing shortline railroad access to the major railroads, and expanding the relief provided in the Board’s bottleneck decision.
The railroads and AAR disagree with shippers on the need for additional competition. AAR officials contend that the Board may currently grant trackage rights and reciprocal switching where there are competitive abuses. AAR sees additional efforts to impose reciprocal switching agreements and trackage rights as a kind of “forced” access and greater government regulation of shipping rates that would return the industry to the poor financial condition it was in before deregulation. Furthermore, AAR stated that these methods should be used only as a remedy when needed, not as a means to create additional competition. Regarding increased access to shortline railroads, AAR officials noted that the association has recently entered into a cooperative agreement with the American Shortline and Regional Railroad Association (ASLRRA). The 5-year agreement seeks to improve customer service through a mutual car supply policy, cooperative interchange service agreements, and reduced switching barriers. AAR challenged the Board’s bottleneck decision in court on the grounds that it went too far in requiring railroads to provide separately challengeable rates in certain circumstances. Board officials agreed with AAR that trackage rights and reciprocal switching agreements are remedies that are currently available to shippers and that have been used where appropriate. However, Board officials stated that the Board is not authorized to grant trackage rights or reciprocal switching as a remedy for a complaint about the reasonableness of a rate because the only statutory remedies for an unreasonable rate are reparations and rate prescriptions. The Board has approved certain aspects of the AAR/ASLRRA cooperative agreement. With respect to the bottleneck decision, Board officials stated that the decision was required by existing law.
Board officials stated that the suggestions for increasing rail competition—primarily through various means of “open access” to private sector rail lines—would require substantial changes to the underlying statute and could alter the shape and condition of the rail system. They stated that many shippers assume that greater competition would lead to lower rates and improved service, without the need for differential pricing. Board officials cited a countervailing concern, however, that not all shippers would benefit equally from such changes and that the result could be a smaller rail system serving fewer shippers and a different mix of customers than are served today. Officials contend that many shippers (particularly small shippers on remote lines) might not benefit from an “open access” system in the way that they might expect. In commenting on a draft of this report, the Board noted that our survey did not distinguish captive shippers from those with competitive transportation alternatives, which by statute are not eligible to use the rate complaint process. Thus, the Board indicated that the views of shippers with competitive options are not instructive in assessing the effectiveness of the rate complaint process. The Board also believed that we should clarify that the survey asked shippers to comment on the standard process and not the simplified procedures, even though the majority of the survey respondents would likely qualify for the simplified procedures. We disagree that our survey should have sought comments only from captive shippers. Such an approach would inject excessive bias into the survey and would have produced results that represented only a small segment of the shipper community. More importantly, we disagree that only captive shippers would be in a position to provide informed analysis of the rate complaint process and methods to improve it. 
The consolidation in the railroad industry has made all shippers keenly aware of their potential for becoming captive to only one railroad and equally aware of the means available to seek recourse should they believe that their rates are unreasonable. In addition, the Board’s position is contrary to its standard practice of inviting comments from all parties during its deliberations—deliberations that can vary from improvements to the rate complaint process to more complex issues, such as railroad mergers and consolidations. Therefore, we chose to survey all shippers and class I railroads to garner their insights into the rate complaint process; discounting any of these comments would minimize the views and opinions of the shipper community. Regarding the Board’s comment on distinguishing between the standard and simplified procedures in the report and the survey, we have clarified the report to better differentiate between the two processes. In addition, when presenting survey results, we note that the survey refers to the standard rate complaint process. The Board stated that the results of our survey reflect the natural response to be expected from customers when asked if they would like lower prices and the ability to obtain lower prices through a faster, simpler, and less costly process. In addition, the Board noted that the report did not address whether the surveyed shippers were being charged higher rates than they should reasonably be expected to pay. The Board indicated that, under the demand-based differential pricing principles that the Congress has determined should apply to the rail industry, it is not necessarily unreasonable to have even one shipper paying a higher rail rate than a comparable shipper with greater transportation alternatives. The Board’s characterization of our survey is not accurate. We did not ask shippers if they wanted lower prices.
Rather, we sought to determine the barriers shippers face in filing rate complaints with the Board and options for improving this process. Furthermore, the survey was not limited to the Board’s rate complaint process but also sought information about the quality of service that shippers had received from 1990 to 1997. This information is presented in our companion report. We agree with the Board’s comment that we did not address whether the surveyed shippers were being charged higher rates than they should reasonably be expected to pay. Because this is the stated purpose of the highly complex and time-consuming rate complaint process, we defer these judgments to the Board. Finally, the Board believed we should more clearly identify the percentage of shippers that expressed a specific opinion on issues presented in the survey to avoid misleading interpretations of the survey results. For example, because 25 percent of the surveyed rail shippers found their rates to be reasonable, our presentation of the barriers shippers encounter in filing a rate complaint should only be attributed to the remaining 75 percent of rail shippers responding. In addition, the Board noted that those shippers that responded “Don’t Know” or that did not answer specific questions should be included among those shippers that did not assign great importance to the choices identified rather than excluded from the total count. We have clarified our presentation of the survey responses to distinguish between those shippers that consider their rates to be reasonable and other shippers. However, we disagree with the Board’s assertion that we include “Don’t Know” or missing responses with those shippers that were satisfied with certain aspects of the rate complaint process. We have no basis for inferring such a precise meaning from the “Don’t Know” or missing response categories. 
Such responses could mean that the respondents did not understand the question, answered only those questions for which they had a strong opinion, did not believe their responses would be kept confidential, or erroneously skipped a question. Accordingly, the report only tabulates data where the shippers’ responses are clearly marked. In 1980, a group of approximately 10,000 Montana farmers and grain elevator operators (the McCarty Farms Group) filed a class action suit against Burlington Northern Railroad in the U.S. District Court for the District of Montana. McCarty Farms alleged that Burlington Northern Railroad was charging unreasonable rates for transporting wheat from Montana to ports in Oregon and Washington State for the 2-year period ending September 12, 1980. The district court referred the matter to the Interstate Commerce Commission (ICC) to determine the reasonableness of the rates. On March 27, 1981, McCarty Farms filed a complaint with the ICC challenging not only Burlington Northern’s wheat rates but also its rates for barley. McCarty Farms asked ICC to prescribe future rates. It did not limit its request for reparations to the 2-year period specified in its complaint filed with the district court. In a December 1981 decision, an administrative law judge found that (1) Burlington Northern had market dominance over wheat and barley traffic, (2) Burlington Northern’s present and past rates were unreasonable insofar as they exceeded 200 percent of the variable cost of service, and (3) a revenue-to-variable-cost ratio of 200 percent would constitute the maximum reasonable rate for the transportation of wheat and barley. In a separate proceeding filed with the ICC on March 26, 1981, the Montana Department of Agriculture and the Montana Wheat Research and Marketing Committee (state of Montana) challenged Burlington Northern’s rates for multiple-car and trainload shipments of wheat and barley and asked the ICC to prescribe rates for future shipments. 
In a July 1982 decision, the ICC reopened the McCarty Farms complaint and instituted a separate proceeding regarding the reasonableness of barley rates because it did not believe they were part of the district court’s referral. The ICC consolidated the McCarty Farms and state of Montana proceedings. According to Board officials, in 1983, the ICC vacated the administrative law judge’s opinion because the rate reasonableness standard used had been discredited and held the three consolidated cases in abeyance, pending its search for appropriate rate standards for noncoal cases. The ICC reopened the proceedings in September 1984 in response to a district court directive to move forward with the case. In an April 1986 decision, the ICC reopened the record for additional market-dominance evidence because of the changes made to its market-dominance guidelines in 1985. After extensive discovery, in May 1987, the ICC ruled that Burlington Northern dominated the market over wheat and barley movements from Montana to the Pacific Northwest. Having determined that Burlington Northern dominated the market for the shipments at issue, ICC turned to the rate-reasonableness analysis. ICC decided to use this case to develop a new rate test—the revenue-to-variable-cost comparison. In 1988, applying the new comparison, the ICC found some of the rates unreasonable for some years (1981 through 1986) and directed the parties to calculate reparations. In 1991, the ICC affirmed its earlier decisions, concluding that Burlington Northern dominated the movement of wheat and barley and that Burlington Northern’s rates for this traffic were unreasonable. According to Board officials, the ICC calculated the amount of reparations owed by Burlington Northern through 1988 to be $8.97 million plus interest and prescribed the level of future rates. 
The ICC subsequently updated the amount of reparations and interest due to $16.6 million through July 1, 1991, and removed the rate prescription as unnecessary since the rates had been in compliance with the rate reasonableness standard for the prior 5 years. Both Burlington Northern and McCarty Farms sought judicial review of the ICC’s decision. In 1993, the U.S. Court of Appeals for the District of Columbia Circuit questioned the ICC’s use of the revenue-to-variable-cost comparison and the reasons for not applying the stand-alone cost test to this large volume of traffic. The court sent the case back to the ICC to reconsider whether the stand-alone cost model would be more appropriate. On remand, both parties agreed to apply the stand-alone cost test and, from 1993 through 1995, prepared and presented their stand-alone cost evidence. According to Board officials, the review of the stand-alone cost evidence was delayed somewhat because key ICC staff who had been working on the case left the agency as a result of the reduction-in-force implemented following the ICC Termination Act of 1995. In an August 1997 decision, on the basis of its review of the stand-alone cost evidence, the Board found that McCarty Farms had failed to show that Burlington Northern’s rates were unreasonably high. According to the Board, this conclusion is consistent with the ICC’s prior conclusion that certain rates during 1981 through 1986 were unreasonable. On the basis of the 20-year discounted cash flow analysis in the stand-alone cost model, Burlington Northern earned more revenues in 1981 through 1986 than was necessary to cover the stand-alone costs allocated to those years. However, those additional earnings were needed to make up for shortfalls in other years. The Board discontinued the proceedings. In October 1997, McCarty Farms appealed the Board’s August 1997 decision to the U.S. Court of Appeals for the District of Columbia Circuit.
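The offsetting of yearly over- and under-recoveries in the discounted cash flow comparison described above can be sketched with a simplified present-value calculation. The figures, horizon, and single discount rate below are hypothetical illustrations of the general idea only; they do not reproduce the Board’s actual stand-alone cost model.

```python
# Simplified sketch of the stand-alone cost (SAC) comparison: challenged
# rates are shown unreasonable only if, over the full analysis horizon,
# the present value of the revenues collected exceeds the present value
# of the costs a hypothetical efficient stand-alone railroad would incur
# to serve the traffic. Over-recoveries in some years can therefore be
# offset by shortfalls in others. All figures are hypothetical.

def present_value(cash_flows, rate):
    """Discount a sequence of annual cash flows back to year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def rates_exceed_sac(revenues, sac_costs, discount_rate):
    """True if PV of revenues exceeds PV of stand-alone costs."""
    return present_value(revenues, discount_rate) > present_value(sac_costs, discount_rate)

# Early years under-recover (90 vs. 100) while later years over-recover
# (110 vs. 100); over the whole horizon, discounted revenues fall short
# of discounted stand-alone costs, so the rates pass the SAC test.
revenues  = [90, 90, 90, 110, 110, 110]
sac_costs = [100, 100, 100, 100, 100, 100]
print(rates_exceed_sac(revenues, sac_costs, 0.10))  # prints False
```

This is why the Board could simultaneously accept that Burlington Northern over-recovered in 1981 through 1986 and still find the rates reasonable: the test is applied to the full multiyear horizon, not year by year.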
After examining McCarty Farms’ brief to the court, the Board agreed that there were certain errors in the August 1997 decision and issued a supplemental decision to correct those determinations that it agreed were erroneous. Even after it made these corrections, the Board still concluded that Burlington Northern’s rates were reasonable. In a decision issued on October 20, 1998, the court affirmed the Board’s decision, agreeing that the challenged rates had not been shown to be unreasonable under the stand-alone cost test. As noted earlier, the court held that it did not have jurisdiction over claims that were initially raised by the McCarty Farms Group’s complaint in federal district court and subsequently referred to the ICC. Accordingly, the court did not rule on the Board’s decision as it pertained to those claims. The district court has since dismissed its portion of the case at the request of the parties.

[Appendix table fragment: rate complaint cases, showing commodity, market-dominance finding, and outcome]

In one set of cases, market dominance was found present for three of the four complainants; the commodities at issue included carbon black (chemical), tread and carcass grade carbon black (chemical), and synthetic plastic resin (chemical). In another case, market dominance was present for at least some of the traffic in question but not necessarily for all of it. In a decision served on June 7, 1989, the ICC divided the proceeding that originated with one complaint into two proceedings in order to assess the reasonableness of the rates for the period through 1982, while reconsidering whether market dominance existed after 1982. The ICC designated the post-1982 part of this case as 38239S (Sub-No. 1). For the purposes of this analysis, we considered this as one complaint. Outcomes in these cases included the following: the ICC found the rates to be reasonable and discontinued the proceedings on Oct. 24, 1994; the ICC dismissed a complaint on Sept. 28, 1995, because it found the rates to be reasonable; and the ICC dismissed a complaint due to lack of market dominance on Nov. 8, 1993.
[Appendix table fragment, continued: rate complaint cases, showing commodity, market-dominance finding, and outcome]

Wheat (grain): the Board found the rates to be reasonable; proceedings were discontinued on Aug. 20, 1997, and the Board made technical corrections on May 11, 1998. In two cases involving wheat and barley (grain), the Board found the rates to be reasonable and discontinued the proceedings on Aug. 20, 1997. In two other cases, the Board dismissed the complaint on Dec. 31, 1996, because the regulatory relief sought was not available; the shippers’ appeals are pending. Vinyl acetate (chemical): the Board dismissed the complaint for lack of jurisdiction on Aug. 22, 1997, because the transportation was performed under contract. The Board dismissed another complaint on Aug. 28, 1997, because the shipper never filed an opening statement and failed to respond by the appointed dates, and dismissed a further complaint on Oct. 17, 1997, because the transportation was performed under contract; that shipper’s appeal is pending. Two cases were decided in the shipper’s favor, on July 29, 1997, and May 3, 1996; in both, reparations and rate prescriptions were awarded. In a case dated Aug. 4, 1994, the parties have asked the Board to hold the proceedings in abeyance, pending possible settlement. Market dominance was not determined in a polyethylene terephthalate (chemical) case dated Apr. 5, 1996, or in a case involving soda ash, phosphorus, phosphate rock, coke, and sodium bicarbonate, including sodium sesquicarbonate (chemical). (DOD = Department of Defense; DOE = Department of Energy. A market-dominance finding of “present” indicates dominance for at least some of the traffic in question, but not necessarily for all of it.)

This appendix presents the results of our shipper survey in summary form. It discusses the methodology used in controlling for sampling error, nonsampling error, and presentation.
In administering this survey, we agreed to hold the responses of individual shippers confidential. In the few instances where the responses of individual shippers could be determined from the data, we have not presented the results. We asked survey respondents to indicate the percentage of their shipments made using various transportation modes. Specifically: “In 1997, about what percentage of your company’s shipments of bulk grain, coal, chemicals or plastics went by the following modes of transportation? (Enter percent; Please make sure your responses total 100%.)” We selected four categorical ranges for the question and counted the number of responses that fit into each category. Table III.1 shows the percentage of respondents whose answers fit each category. We asked survey respondents to indicate the average number of loaded out-bound rail shipments they had made since 1990. Specifically: “Since 1990, on average, how many loaded rail cars has your company used for out-bound shipments of bulk grain, coal, chemicals or plastics per year? (Enter Number)” We asked survey respondents to indicate the percentage of their rail shipments made using rail cars owned or leased by their company. Specifically: “Since 1990, on average, what percentage of your annual out-bound shipments (that you identified in the previous question) of bulk grain, coal, chemicals or plastics were shipped in rail cars owned or leased (except leased from a railroad) by your company? (Enter percent, if none, enter 0)” We selected four categorical ranges for the question and counted the number of responses that fit into each category. Table III.3 shows the percentage of respondents whose answers fit each category. Table III.3: Percentage of Annual Rail Shipments Using Company-owned Rail Cars (Survey Question 7). We asked survey respondents to indicate the percentage of their shipments that were made using contract versus tariff rates.
Specifically: “Since 1990, what percentage of each of the following rate setting methods (contract or published tariff rate) were used to set the freight rates of your annual out-bound shipments? (Enter percent; if none, enter ’0’)” We selected four categorical ranges for the question and counted the number of responses that fit into each category. Table III.4 shows the percentage of respondents whose answers fit each category. We asked survey respondents to indicate the percentage of their shipments that were exempt from federal rate regulation. Specifically: “Of the published tariff or public rate shipments identified in question 8 what percentage was exempt from regulation — that is, commodities, or classes of transportation (such as box car) that have been granted exemption by STB or its predecessor the Interstate Commerce Commission (ICC) from economic regulation? (Enter Percent)” We selected four categorical ranges for the question and counted the number of responses that fit into each category. Table III.5 shows the percentage of respondents whose answers fit each category. We asked survey respondents to indicate the percentage of their shipments that were limited to a single railroad from origin to destination. Specifically: “Consider the out-bound movements of bulk grain, coal, chemicals, or plastics that your company made by railroad. About what percentage, if any, of these shipments could only go from origin to destination using a single railroad in 1997 as compared to 1990? (Enter Percent, if none, enter ’0’)” We selected five categorical ranges for the question and counted the number of responses that fit into each category. Table III.6 shows the percentage of respondents whose answers fit each category. Tables III.7 through III.9 present the sampling error associated with the estimates we present in Chapter 4.
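For readers unfamiliar with how sampling errors such as those in tables III.7 through III.9 are computed, the half-width of a confidence interval for an estimated percentage can be sketched as follows. This is a generic normal-approximation illustration with hypothetical figures of our own; it is not the exact procedure behind the tables.

```python
import math

# Illustrative 95-percent confidence interval for a percentage estimated
# from a simple random sample of shippers, with a finite population
# correction (fpc) applied because the sample can be a large fraction of
# the shipper universe. All figures are hypothetical.

def sampling_error(p, n, N, z=1.96):
    """Half-width of an approximate 95% CI for proportion p,
    estimated from a sample of n units drawn from a universe of N."""
    fpc = (N - n) / (N - 1)  # finite population correction
    return z * math.sqrt(fpc * p * (1 - p) / n)

# Suppose 63 percent of 400 responding shippers, sampled from a universe
# of 1,000, cite a given barrier as extremely to very important:
half_width = sampling_error(0.63, 400, 1000)
print(f"63% +/- {100 * half_width:.1f} percentage points")  # prints: 63% +/- 3.7 percentage points
```

The correction term explains why estimates for groups surveyed in their entirety, such as the coal, chemical, and plastics shippers discussed next, carry no sampling error: when n equals N, the half-width is zero.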
Our estimates for coal, chemical, and plastics shippers do not include sampling error because we sent our survey to 100 percent of these shippers in our universe. In addition, the estimates shown in tables III.7 through III.9 differ from the data presented in questions 15, 17, and 18 above because we have collapsed certain categories for our analysis.

The barrier categories presented in table III.7 were:
- legal costs associated with filing outweigh the benefits;
- the rate complaint process is too complex;
- the rate complaint process takes too long;
- the stand-alone cost model is too costly to prepare;
- the railroad will most likely win the case;
- getting information from railroads is too difficult;
- consulting costs are too high;
- discovery requests from the railroad are difficult;
- fear of reprisal from railroads; and
- other parts of the process are too costly (40.8 percent).

Table III.8: Percentage of Rail Shippers That Believe Suggested Changes for Improving the Rate Complaint Process Were Extremely to Very Important (Table 4.2). Each percentage represents rail shippers who expressed an opinion regarding a particular suggestion; some shippers did not express an opinion for some suggestions. In the corresponding table of options, each percentage represents rail shippers who expressed an opinion regarding a particular option; some shippers did not express an opinion for some options. Due to the low number of responses to one question, we are either unable to generalize to the universe or unable to report for reasons of confidentiality. Some figures include missing responses.

In order to obtain the major U.S. railroads’ views of the Surface Transportation Board’s rate relief process, we mailed the class I railroads a survey similar to the survey we mailed to shippers (see app. III). The class I railroads determined that it would be appropriate for the Association of American Railroads (AAR) to respond to our questions regarding changes to the process. Therefore, AAR answered questions 6 and 7 of our survey.
The remaining questions dealt with the railroads’ experiences using the process during any rate complaint cases involving movements on their lines. Four of the nine class I railroads we surveyed responded to this set of questions. The information provided by the railroads augmented the information we developed from reviewing the Board’s case files. However, because of the low response rate and the nature of the information provided, there was not sufficient information to present it in summary form in this appendix. We have provided a copy of the survey we mailed to the class I railroads for reference purposes.

Staff acknowledgments: Joseph A. Christoff, Helen T. Desaulniers, Lynne L. Goldfarb, Alexander G. Lawrence, Jr., Bonnie Pignatiello Leer, David R. Lehrer, and Luann M. Moy.
Pursuant to a congressional request, GAO provided information on: (1) the Surface Transportation Board's rate relief complaint process and how it has changed since the Interstate Commerce Commission (ICC) Termination Act of 1995 became law; (2) the number and outcome of rate relief cases pending or filed since 1990; and (3) the barriers that shippers face when bringing rate complaints to the Board and potential changes to the process to reduce these barriers. GAO noted that: (1) the Surface Transportation Board's standard procedures for obtaining rate relief are highly complex and time-consuming; (2) under these standard procedures, the Board: (a) evaluates all competition within the market allegedly dominated by a railroad; and (b) typically assesses the results of a shipper-developed model of a hypothetical, optimally efficient railroad that could provide comparable service in place of the shipper's railroad; (3) the process reflects a statutory scheme whereby the Board must balance two competing objectives: considering the need of the railroad industry for adequate revenues while simultaneously ensuring that the industry does not exert an unfair advantage over shippers without competitive alternatives; (4) since the ICC Termination Act, the Board has attempted to improve the rate complaint process and simplify the process for shippers; (5) it is too early to tell if these steps will significantly lessen the burden of the rate complaint process; (6) very few shippers served by class I railroads have complained to the Board about railroads' rates; (7) generally, only those shippers that depend on rail transportation, such as coal, chemical, and grain shippers, have filed complaints; (8) 18 of these complaints were resolved by negotiated settlements with the railroads before the Board or its predecessor determined whether the contested rate was reasonable; (9) in addition, seven complaints were dismissed in favor of the railroad, five were dismissed for other 
reasons, and two complaints resulted in rate relief to shippers; (10) nine complaints remain before the Board; (11) GAO's results suggest that of the 709 rail shippers that responded, 531 do not believe that their rail rates are always reasonable and therefore might use the rate complaint process; (12) of the shippers who expressed an opinion about the rate complaint process, GAO estimates that over 70 percent believe that the time, complexity, and costs of filing complaints are barriers that often preclude them from seeking rate relief; (13) all the major U.S. railroads, on the other hand, are generally satisfied with the standard rate complaint process, contending that it is well suited to determining whether a railroad dominates the shipper's market and what rate relief may be needed; (14) however, railroads do not support the simplified procedures or the Board's December 1998 decision to change aspects of its market dominance approach; and (15) this divergence of opinion may make responding to shippers' concerns about the barriers in the rate relief process difficult to resolve.
PBGC’s single-employer insurance program is a federal program that protects the retirement incomes of more than 34 million workers and retirees covered by almost 29,000 private sector defined benefit pension plans. Defined benefit pension plans promise to pay a specified monthly benefit at retirement, commonly based on salary and years on the job. PBGC receives no funds from general tax revenues. Operations are financed by insurance premiums set by Congress and paid by sponsors of defined benefit plans, investment income, assets from pension plans taken over by PBGC, and recoveries from the companies formerly responsible for the plans it took over. In addition, PBGC uses Form 5500 information—the primary source of information for both the federal government and the private sector regarding the operation, funding, assets, and investments of private pension plans—to monitor single-employer defined benefit pension plan activities, focusing on assets, liabilities, number of participants, and funding levels. Form 5500 information is also used to forecast PBGC's potential liabilities. The Employee Retirement Income Security Act of 1974 (ERISA) established PBGC to insure the pension benefits of participants, subject to certain limits, in the event that an employer cannot pay its promised benefits. ERISA also required PBGC to encourage the continuation and maintenance of voluntary private pension plans and to maintain premiums set by the corporation at the lowest level consistent with carrying out its obligations. PBGC may pay only a portion of a participant’s accrued benefit because of limits on the PBGC benefit guarantee; PBGC generally does not guarantee benefits above a certain amount, currently $47,659 annually per participant at age 65. Additionally, benefit increases arising from plan amendments in the 5 years immediately preceding plan termination are not fully guaranteed, although PBGC will pay a portion of these increases.
Finally, sponsors of PBGC-insured defined benefit plans pay annual premiums to PBGC for their coverage. PBGC prepares its financial statements in accordance with FASB standards, as permitted by the Federal Accounting Standards Advisory Board. For procedures on how to record and report contingencies, FASB’s Statement of Financial Accounting Standards No. 5, Accounting for Contingencies (FAS No. 5), specifically requires that a liability for loss contingency be recorded if two conditions are met: (1) information available prior to issuance indicates that it is probable that a liability has been incurred at the date of the financial statements, and (2) the amount of the loss can be reasonably estimated. For fiscal year 2005, PBGC received its 13th consecutive clean or unqualified audit opinion from its independent auditors. As a wholly-owned government corporation, PBGC is subject to the financial and internal control reporting requirements of Chapter 91 of Title 31 of the U.S. Code (commonly known as the Government Corporation Control Act). The Office of Management and Budget (OMB) issues guidance to the heads of federal agencies and government corporations that sets out the annual process for them to make a statement as to the adequacy of their entity’s internal controls. In light of new requirements for publicly-traded companies relating specifically to internal controls over financial reporting and management’s related responsibilities, contained in the Sarbanes-Oxley Act of 2002, OMB recently revised its guidance to adopt enhanced requirements for internal controls over financial reporting by major federal agencies. Specifically, OMB added an entirely new appendix to its existing guidance requiring that these agencies establish a formal process for assessing their internal controls over financial reporting. This new requirement, however, applies only to the 24 CFO Act agencies, and thus these specific requirements do not apply to PBGC. 
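The two-condition FAS No. 5 recording test described above can be expressed as a small decision function. This is an illustrative sketch of the rule as stated in the text, not accounting guidance:

```python
def record_loss_contingency(liability_probable, loss_reasonably_estimable):
    """FAS No. 5 recording test as described in the text: a liability
    for a loss contingency is recorded only if (1) it is probable that
    a liability has been incurred at the date of the financial
    statements and (2) the amount of the loss can be reasonably
    estimated. Both conditions must hold."""
    return liability_probable and loss_reasonably_estimable
```

PBGC's probable claims are the case in which both conditions are met; a claim that is only reasonably possible fails the first condition and is not recorded as a liability.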
To estimate the present value of its future liabilities, PBGC develops interest rate factors, similar to interest rates, based on surveys of insurance companies conducted by the American Council of Life Insurers (ACLI) for PBGC and the Internal Revenue Service (IRS). The survey asks insurers to provide the net annuity price for annuity contracts for plan terminations. From the survey results, PBGC derives interest rate factors that, together with PBGC’s other actuarial assumptions, produce prices in line with those of the private sector insurers surveyed. These factors are adjusted to the end of the year using an average of the Moody’s Corporate Bond Indices for Aa and A-rated corporate bonds for the last 5 trading days of the month. The adjusted interest rate factors are published in mid-December for use in January. The interest rate factors are then further adjusted each subsequent month of the year on the basis of the average of the Moody’s bond indices. All other things being equal, when interest rates are lower, more money is needed today to finance future benefits because this money will earn less income when invested. Therefore, lower interest rate assumptions result in higher liability amounts, while higher interest rate assumptions result in lower liability amounts. In 1997, we reported that the cash-based federal budget, which focuses on annual cash flows, does not adequately reflect the cost or the economic impact of PBGC’s single-employer pension insurance program and other federal insurance programs. Generally, cost is only recognized in the budget when claims are paid rather than when the commitment is made. Benefit payments of terminated plans assumed by PBGC may not be made for years, even decades, because plan participants generally are not eligible to receive pension benefits until they reach age 65. Once eligible, these benefits are paid over a period of years or even decades.
As a result, there can be years in which PBGC's current cash collections are estimated to exceed current cash payments, regardless of the expected long-term cost to the government. We concluded that the use of accrual concepts in the budget for PBGC and other insurance programs has the potential to better inform budget choices. SEC, the principal federal regulator of the U.S. securities markets, requires public companies to disclose meaningful financial and other information to the public. SEC’s mission is to protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation. Public companies are required to submit reports to SEC on Form 8-K, the “current report” companies must file with SEC to announce certain major events that shareholders should know about, including any expected losses that are considered to be material. In addition, public companies must submit annual reports on Form 10-K and quarterly reports on Form 10-Q. These disclosures are designed to keep the public informed about any information that could be considered important for investors. This provides a common pool of knowledge for all investors to use to judge for themselves whether to buy, sell, or hold a particular security. FDIC is a government corporation. In addition to its roles as primary federal regulator of state-chartered banks that are not members of the Federal Reserve System and back-up regulator for all insured depository institutions, FDIC said it promotes public confidence and stability in the U.S. financial system by insuring deposits in banks and thrift institutions; by examining and supervising financial institutions; by identifying, monitoring, and addressing risks to the deposit insurance funds; and by limiting the effect on the economy and the financial system when a bank or thrift institution fails.
Similar to PBGC, FDIC receives no congressional appropriations; it is funded by premiums that banks and thrift institutions pay for deposit insurance coverage and from earnings on investments in U.S. Treasury securities. PBGC reports that it monitors its probable claims on an ongoing basis by contacting plan sponsors to obtain certain plan financial information, reviewing filings submitted by probable plans to conduct a risk analysis, and performing valuations to determine, among other things, the present value of net probable claims and expected date of probable plan termination. PBGC also regularly updates and reviews its list of probable claims that it monitors. PBGC also takes certain steps to ensure the accuracy of its probable claims, such as using an automated system for estimating probable claims and the most currently available data when calculating its estimates. We found that PBGC’s probable claims estimates are reasonable because they are generally close to the final claim amounts that are determined for these plans that PBGC ultimately takes over. PBGC assesses underfunded plans to determine which plans should be classified as probable claims and monitored on an ongoing basis. To be classified as a probable claim, a plan must meet at least one of the seven criteria PBGC uses, five of which it characterizes as objective, and two as subjective. According to PBGC officials, objective criteria are used when substantial evidence exists to indicate that the plan sponsor is in liquidation or insolvency proceedings or will meet the requirements for a distress or involuntary termination. Subjective criteria involve management judgment. Table 1 shows the different applications of the objective and subjective criteria used to classify plans as probable claims. Once probable plans are identified, PBGC uses certain information to more closely monitor such plans and the future claims they represent. 
To accomplish this, PBGC primarily relies on information it receives from plan sponsors, information from Section 4010 filings, reportable event and distress termination filings, and other sources. Information from plan sponsors: PBGC officials said they contact plan sponsors in order to monitor the case and obtain any required information not submitted by the plan sponsor, such as the plan’s most recent Form 5500 filing and actuarial valuation report. PBGC officials said they also make an assessment of the potential for termination. This is part of a process that encompasses (1) querying the plan sponsor about its intentions with regard to its pension plans, (2) obtaining estimates of due and unpaid employer contributions and unfunded benefits, and (3) performing a risk analysis. Section 4010 filings: These filings provide PBGC with actuarial and other information on some underfunded plans and financial information for companies that meet certain criteria. PBGC said that these filings are an important component of PBGC’s monitoring activities. Section 4010 of ERISA requires the reporting of plan actuarial and company financial information by employers with plans that have aggregate unfunded vested benefits in excess of $50 million, missed required contributions in excess of $1 million, or outstanding minimum funding waivers in excess of $1 million. The information required to be filed includes (1) plan identifying information, (2) information regarding the fair market value of plan assets and the value of benefit liabilities on a PBGC termination liability basis, and (3) financial information, such as financial statements.
Reportable event and distress termination filings: The major types of reportable event filings that PBGC uses to monitor underfunded plans include the inability of a plan to pay participants the benefits due them in the form prescribed by the plan, bankruptcy or insolvency proceedings, liquidation proceedings or the dissolving of the plan sponsor, and failing to meet the minimum funding standards. According to PBGC officials, PBGC is notified of reportable events affecting approximately 300 plans per year. On average, 200 of these plans either undergo standard terminations or continue without termination, resulting in PBGC not taking over the plan. The remaining 100 plans eventually become distress terminations or involuntary terminations. Other sources: PBGC uses Form 5500 information to monitor plan funding. PBGC also monitors news sources (e.g., Bloomberg, Livedgar, and NewsEdge) to identify transactions that could adversely affect plan funding status and ultimately PBGC. In addition, PBGC contracts with Dun and Bradstreet’s First Alert Service, which reports on bankruptcy filings within several days of the filing. As part of its monitoring process, all probable claims are reviewed by PBGC’s Contingency Working Group (CWG), which is composed of representatives from various departments and divisions within PBGC. The CWG is responsible for approving probable plan classifications and probable loss amounts. PBGC updates its probable claims three times per year and performs valuations for financial reporting purposes on all plans on the probable claims list. For each financial reporting date (March 31, June 30, and September 30), PBGC actuaries prepare a preliminary list of probable claims that also contains the estimated date of plan termination and the present value of each plan’s net claim. For each period, the Contingency Working Group reviews and approves the finalized list of probable claims.
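The Section 4010 filing thresholds described earlier (aggregate unfunded vested benefits in excess of $50 million, missed required contributions in excess of $1 million, or outstanding minimum funding waivers in excess of $1 million) can be expressed as a simple check. The function shape is an illustrative sketch; the dollar thresholds come from the passage:

```python
def must_file_section_4010(unfunded_vested_benefits,
                           missed_contributions,
                           outstanding_funding_waivers):
    """True if any ERISA Section 4010 threshold described in the text
    is exceeded (all amounts in dollars). "In excess of" means
    strictly greater than the threshold."""
    return (unfunded_vested_benefits > 50_000_000
            or missed_contributions > 1_000_000
            or outstanding_funding_waivers > 1_000_000)
```

A sponsor sitting exactly at a threshold is not captured, since the statute's "in excess of" language implies a strict comparison.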
PBGC reports that it takes certain steps to help ensure the accuracy of its probable claims. One of PBGC’s key controls to help ensure accuracy is an automated system for estimating probable claims. The Integrated Present Value of Future Benefits (IPVFB) system estimates the probable losses and does so according to GAAP and FAS No. 5 standards. To calculate the fiscal year end financial statement assets and liabilities for probable plans, plan information such as Form 5500 filings, asset statements, annuity purchases, contributions, and estimated dates of plan termination are entered into the IPVFB system as of the estimated date of plan termination and fiscal year end. This system adjusts the liabilities from the plan’s assumptions, such as mortality, interest, and expected retirement age, to standard assumptions used by PBGC, and then produces a report that provides PBGC staff with information on how the assets and liabilities are brought forward from the Actuarial Valuation Report date to the date of the financial statements. PBGC officials reported that the agency’s process for estimating its probable claims is reviewed by its financial auditors as part of its annual audit of its financial statements. As noted earlier, the auditors have issued unqualified audit opinions for the last 13 years. PBGC officials also told us they help ensure high levels of accuracy by using the most current data available as the starting point in their valuation process, but there’s room for improving the timeliness of the data. For example, PBGC has 4010 data for the largest plans that it designates as probable claims, but even those data are 3 ½ months old when received, and they are received only once per year. On occasion, PBGC is able to obtain more current actuarial valuation reports, but it is not able to do this regularly for all plans on the probables list. 
PBGC officials also said that, since single-employer probable claims are estimates, factors that are not fully determinable can cause the actual claims PBGC receives to differ from its probable estimates. According to these officials, when they calculate their probable estimates, they usually do not have complete data on the provisions of the plan, the characteristics of plan participants, the exact date of plan termination, the precise value of plan assets and liabilities at that termination date, or the level of recoveries from the plan’s sponsor, all of which may change over time. Additionally, potential changes in PBGC’s valuations, knowledge of specific plan provisions, and participant characteristics combine in ways that can cause the actual claims from these plans to deviate from the estimates, regardless of the timeliness of the data used in preparing the estimates. When a probable claim becomes an actual claim, PBGC officials said they adjust the probable claim estimate to that of the actual claim amount as of the date of plan termination. If a plan is removed from the probables list for a reason other than termination, the previously estimated claim is removed from the probable claims total and reclassified, often as a reasonably possible claim. PBGC data showed its total probable claims estimates ($21.8 billion) for all plans that eventually terminated were within one percent of the actual claim amount ($21.9 billion). Ninety-five percent of the $22.9 billion in resolved probable claims was in plans that terminated with PBGC. In addition, historically the vast majority of probable claims (76 percent) subsequently become actual claims, as shown in figure 1. Moreover, of the $16.9 billion of probable net claims that were reported in the financial statements as of the end of fiscal year 2004, more than $10 billion was in plans that terminated during fiscal year 2005.
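The closeness of the aggregate estimates to the actual claims can be checked directly from the figures reported above (all amounts in billions of dollars, as given in the text):

```python
# Figures as reported in the text, in billions of dollars.
estimated_probable = 21.8  # total probable estimates for plans that terminated
actual_claims      = 21.9  # actual claim amounts for those same plans
resolved_total     = 22.9  # all resolved probable claims

# Gap between the aggregate estimate and the actual claims, in percent.
percent_gap = abs(estimated_probable - actual_claims) / actual_claims * 100

# Share of resolved probable claims in plans that terminated with PBGC.
share_terminated = estimated_probable / resolved_total * 100
```

percent_gap comes to roughly half a percent and share_terminated to roughly 95 percent, consistent with the figures reported in the text.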
PBGC and public companies have different practices for disclosing information about liability settlements, including probable losses. Although PBGC and public companies follow the same accounting standards for recording probable losses in their annual financial statements, they each follow different policies and requirements when reporting information about probable losses throughout the fiscal year. When reporting information on liability settlements, PBGC follows its own set of policies and procedures, while public companies are required to follow the standards set forth by SEC requirements. PBGC’s disclosure practices regarding financial liabilities are similar to those of other government corporations, namely FDIC, which operates similar programs and faces similar risks. PBGC and publicly traded companies have different practices for disclosing information on liability settlements, including probable losses. Both PBGC and public companies follow the same accounting standards for recording probable losses in their annual financial statements. For example, PBGC announces an estimated liability from probable claims in its annual report, and public companies report similar information about probable losses in their annual reports. However, when reporting information about probable losses throughout the fiscal year, PBGC and public companies have different practices, which result from the different policies and requirements of each entity. PBGC’s reporting in its annual report, or elsewhere, is meant to avoid the disclosure of any information on specific plans classified as probable in order to prevent harm to the economic health of the plan sponsor. When a plan is classified as a probable claim and has not yet terminated, PBGC’s policy precludes the disclosure of any information indicating that a specific plan has been classified as a probable claim. 
In its annual report, PBGC reports its probable claim estimates as an aggregated total value of net claims in order to conceal exactly which plans are included in the figure. PBGC also identifies its probable claim estimates by industry in its financial statements. PBGC officials said their policy of not disclosing any information on plans so classified is meant to avoid causing more economic distress to the plan sponsor, because such plans usually have plan sponsors that are already economically weak. According to these officials, this policy exists so that disclosure does not unnecessarily (1) compromise a company’s ability to continue as a going concern and (2) influence a sponsor’s decision whether to maintain or terminate its pension plans. For example, if investors knew that PBGC had classified a particular plan as a probable claim, this could encourage additional negative speculation about the financial health of the plan sponsor and trigger activity that might further the financial instability of the company. When PBGC takes over a plan, it typically issues a press release announcing the liability PBGC expects to incur, without indicating if the terminated plan had already been booked as a probable claim and consequently included in its previously announced deficit in its year-end financial report. According to PBGC officials, PBGC does not release the amount of its previously booked probable claims in its press releases for fear of compromising PBGC’s position in litigation and of negatively affecting a company’s financial condition. These officials said that releasing its previously booked probable claim amounts would enable someone to make a comparison between PBGC’s booked liability for a particular plan and the amount of underfunding for that plan. 
Publicizing this information could affect PBGC’s ability to recover the full amount of a plan’s claims in litigation because companies are likely to resist a settlement with PBGC for more than the amount of PBGC’s previously booked losses. In addition, announcing the previously booked liability has a negative impact on a company’s ability to obtain additional financing and may worsen its financial condition. In contrast, public companies must follow SEC requirements for disclosing information on financial liability settlements, including probable claims. Federal securities laws enforced by SEC require that public companies disclose information on liabilities booked as a probable loss. Public companies must submit current reports to SEC on Form 8-K for a number of specified events, including any material event that might affect the investment decisions of shareholders. For example, when a public company books a material liability as “probable,” the company is required to file a Form 8-K with SEC. Often companies issue such information in a press release, which they may attach to the 8-K. Although SEC requires public companies, under certain conditions, to disclose information about a probable loss, there is much variability in how these companies choose to disclose this information. According to SEC officials, some companies choose to provide as much information as possible concerning their probable losses, while other companies choose to release less information. For example, some companies release the specific amount of a probable loss in an 8-K, and after the liability has changed from a probable claim to a certainty, these companies choose to redisclose the previously booked amount to fully demonstrate the total financial impact of the liability on the company.
Other companies choose to disclose their probable liabilities in more general terms, such as announcing a range instead of a specific amount. Pension experts, financial analysts, SEC officials, and others said that the different reporting practices and policies of PBGC and public companies are a consequence of the different risks and responsibilities faced by each entity. They agreed that there would likely be further economic distress for plan sponsors if PBGC divulged which plans are classified as probable before those plans terminate. These experts also noted that while PBGC has an obligation to not disclose information on its probable claims, public companies have a responsibility to report as much information as possible, including probable claims, in order to keep investors informed and able to make knowledgeable decisions. Finally, they said that the unique risks of PBGC as a government corporation make its disclosure responsibilities distinctively different from those of public companies. We found that PBGC’s reporting practices are more comparable to those of government corporations like FDIC than they are to those of public companies. Both corporations operate similar types of programs, and both agencies have a responsibility to ensure the stability of the programs they operate. Unlike public companies, PBGC and FDIC are not subject to SEC requirements. In our discussions with PBGC and FDIC officials, we found that both agencies have adopted similar disclosure policies when reporting probable losses. According to officials from PBGC and FDIC, both agencies follow the same accounting standards and face similar risks when disclosing information about probable losses. For example, officials from both agencies said they do not disclose case-specific, detailed information on probable claims in order to protect the entities under their jurisdictions from further economic distress.
Just as PBGC avoids exacerbating the economic situation faced by plan sponsors whose plans are booked as probable, FDIC avoids any financial statement disclosure action that might cause further economic distress for the institutions it insures. In addition, PBGC follows a policy of not disclosing probable claim amounts in its press releases announcing the termination of a plan. FDIC officials said that the agency issues a press release on the day of institution failure that may or may not, depending on the nature and timing of the failure, include an estimated cost to the Insurance Fund. The unique responsibilities of each entity heavily influence the nature and content of the disclosures. Over the years, PBGC has made efforts to improve the transparency of its disclosures, including revising its annual report to include more detailed information about its methodology for determining probable claims, and it has published more detailed information about its financial condition on its Web site. Despite PBGC’s efforts to improve the transparency of its disclosures, pension experts and others told us that they would like to see more information disclosed when PBGC announces the termination of a new plan in its press releases and that they have some uncertainty about PBGC’s methodology for calculating its interest rate. In addition, pension experts, financial analysts, and industry association representatives told us that PBGC’s disclosures are likely to become increasingly important in light of upcoming changes in the pension accounting rules for plan sponsors. PBGC officials told us they regularly review the agency’s policies for disclosing information related to its financial condition and revise their disclosure documents as needed. 
According to PBGC officials, when making decisions to revise its disclosures, PBGC takes into consideration, among other things, current changes in the private pension environment that affect the agency’s financial condition and information that would help the public better understand PBGC’s current financial condition. Over the last several years, PBGC took the following actions to improve the transparency of its disclosures:

- it published a fact sheet on its Web site that provides answers to questions that have been raised about PBGC’s financial condition, including its deficit, the true cost of its insurance program, and pension underfunding;
- it revised its annual report to include more detailed information about its methodology for determining probable claims;
- it revised its Pension Insurance Data Book 2003, which has detailed statistics for the agency’s insurance programs, to include information related to PBGC’s claims experience, in order to provide more historical data on the number and size of claims by the year the plans terminated, the funding levels of the plans at termination, and the size of the plans at termination;
- it released extensive data about PBGC’s financial condition, as well as explanatory and white papers about how to understand PBGC’s financial condition, reports, and methods, on its Web site (www.pbgc.gov); and
- it revised the agency’s Web site to make it more user-friendly.

PBGC officials also said they conduct a range of educational outreach activities with their various stakeholders, including plan participants, experts, policy makers, and the press. For example, PBGC discusses its financial condition at various meetings with plan participants held each year. PBGC officials also regularly give speeches at gatherings of actuaries, lawyers, financial officers, benefit specialists, and other members of the plan sponsor community.
PBGC also issues press releases regarding its current financial condition, in connection with the annual financial statements. Some pension experts and others have expressed concern that some aspects of PBGC's disclosures are still unclear. For example, when PBGC issues a press release, it does not include sufficient information to determine the financial impact of new terminations on PBGC's financial position. When announcing the termination of a new plan in its press releases, PBGC does not state whether the terminated plan was previously booked as a probable claim and thus already included in the deficit reported in its annual financial statement. For example, when a large plan is terminated, PBGC puts out a press release announcing the following information about the terminated plan: the type of termination and reason for termination; plan information, including the amount of assets, amount of liability, level of funding, and the number of employees covered by the plan; the current estimate of PBGC's liability; and a review of pension rules and guarantee limits. Because PBGC does not disclose whether the plan was previously booked as a probable claim, experts and others said this practice leads some to believe that PBGC is assuming a wholly new liability, in addition to the large deficit already reported by PBGC in its annual report. According to PBGC officials, in the case of most large terminations, PBGC has already recorded a major part of the announced liability in its reported deficit as a probable claim. Thus, when PBGC issues a press release, it may appear that PBGC is assuming a considerable liability with the termination of a large underfunded pension plan when in fact most of the announced liability has been previously recorded in PBGC's annual financial statement and is already reflected in its previously reported deficit.
Pension experts and financial analysts said that PBGC’s press releases include PBGC’s best estimate of the financial liability facing PBGC without any mention of how much of this liability has already been recorded as a probable claim in its annual financial statement. Experts and others told us that when PBGC puts out a press release announcing the termination of a new plan, they regularly receive telephone calls from the media and others asking if this announced liability is a new liability added to PBGC’s deficit. By revealing whether the newly terminated plan was previously recorded as a probable claim, PBGC would give policy makers and others better information to understand the impact on PBGC’s financial condition. Pension experts and financial analysts were also concerned that they remain uncertain about PBGC’s methodology for calculating the interest rate it uses to discount its long-term liabilities and would like to see more information disclosed about this process. The interest rate used by PBGC to calculate its liabilities has a significant effect on the reported financial condition of PBGC. For instance, if PBGC reduced its interest rate, the value of its liabilities would increase, and conversely, if PBGC increased its interest rate, the value of PBGC’s liabilities would decrease. Pension experts said that because PBGC’s interest rate choice has a large impact on its reported financial position, they are troubled that the assumptions surrounding the interest rate decision are not transparent. Pension experts and financial analysts said that they are uncertain of the exact calculations used by PBGC to calculate its rate, and they believe that PBGC could do more to clarify its interest rate assumptions and the effect of this rate on its reported financial position. According to PBGC officials, information about their interest rate calculations is available upon request but is not provided in its annual financial disclosure documents. 
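The inverse relationship between the discount rate and the reported value of long-term liabilities can be illustrated with a short calculation. The sketch below uses hypothetical figures and a simple level payment stream; it is not PBGC's actual methodology or data.

```python
# Illustrative only: why a lower discount rate produces a larger
# present value for the same stream of future benefit payments.
# The payment amount, horizon, and rates below are hypothetical.

def present_value(annual_payment, years, rate):
    """Present value of a level stream of annual benefit payments."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# A hypothetical obligation: $1 million per year for 30 years.
low_rate = present_value(1_000_000, 30, 0.04)   # valued at a lower rate
high_rate = present_value(1_000_000, 30, 0.06)  # valued at a higher rate

print(f"PV at 4%: ${low_rate:,.0f}")   # roughly $17.3 million
print(f"PV at 6%: ${high_rate:,.0f}")  # roughly $13.8 million
# The same obligation is reported as a larger liability at the lower
# rate, so a lower assumed interest rate implies a larger deficit.
```

This is why the interest rate assumption, though a single number, has such a large effect on PBGC's reported financial position.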
Although PBGC discloses its interest rate factors and some of its actuarial assumptions in its annual financial statement, it does not disclose its entire interest rate methodology. PBGC officials also told us that in 2003, during bankruptcy proceedings for US Airways, the court reviewed PBGC’s interest rate. During these proceedings, PBGC submitted a detailed presentation on how the rate is calculated, which is now part of the public record. The court upheld PBGC’s use of its interest rate. PBGC officials told us they distribute documentation of their methodology when it is requested. However, while the assumptions and calculations PBGC uses to calculate its interest rate are public information, such information is not readily accessible to experts, plan sponsors, plan participants, or lawmakers because they may not know that this information is available upon request. Pension experts and others also had additional concerns about PBGC’s methodology and practices that PBGC has addressed. Specifically, they were concerned that PBGC’s practices and disclosures overstate PBGC’s economic distress by (1) including estimated probable claim liabilities in its deficit, (2) not publishing a projected date of PBGC’s insolvency, and (3) using a low interest rate. For more information on these concerns and PBGC’s responses to these concerns, see appendix III. Concerns about the transparency of information on the financial condition of the PBGC are not new. As we previously reported, the cash-basis of the federal budget contributes to a lack of transparency about PBGC and other federal insurance programs that may delay recognition of emerging financial problems. Furthermore, current budget reporting may not provide policy makers with information or incentives to address potential funding shortfalls before claim payments come due. 
Generally, costs are recorded in the budget too late for policy makers to control them or even ensure that adequate resources will be available to cover them. The delayed budget recognition of these costs can reduce the number of viable options available to policy makers and may ultimately increase the cost to the government. Finally, many of the pension experts, financial analysts, and industry association representatives we consulted observed that PBGC's disclosures are likely to become increasingly important if changes to the pension accounting rules currently under discussion are made, as these could have an effect on defined benefit plans covered by PBGC. The primary accounting rule change being discussed would move the disclosure of the funding status of pension plans from the footnotes to the balance sheet of the employer's financial statement. According to FASB, this change is intended to make an employer's pension obligations more visible on the balance sheet and income statement in order to increase the transparency of the plan sponsor's financial position. Financial analysts and others told us that there are many potential effects of this expected change that do not directly influence PBGC but are likely to indirectly affect PBGC's financial condition. For example, pension experts and analysts said that the new FASB changes could have a positive effect on PBGC's future financial condition by promoting greater contributions and higher funding levels from plan sponsors, thus reducing PBGC's exposure to financially troubled plans. The information that PBGC reports on its probable claims and financial liabilities is the main source of information that policy makers and interested parties have to evaluate the financial condition of PBGC.
The more transparent PBGC is about the impact of new terminations on PBGC’s financial positions and the methodology it uses to determine its interest rate, the less ambiguity there will be about PBGC’s financial condition and the more beneficial PBGC’s financial reporting will be for policy makers and others. Public understanding of PBGC’s financial condition will become even more important in the near future if the proposed changes to pension accounting rules are made, potentially affecting PBGC’s financial condition. The risks faced by PBGC when disclosing information about probable claims are not the same as those faced by public companies. PBGC must be careful not to release information that can negatively affect the sponsors of the plans it insures or its ability to recover assets from the sponsors of the plans that it takes over. Public companies’ disclosures are aimed at providing necessary information to investors. Improving the transparency of the financial information released by PBGC will require finding a solution that does not hinder the agency’s ability to make recoveries from plan sponsors for the losses it incurs. We recognize that PBGC believes that there are reasons for withholding certain information about its probable claims. As we reported, PBGC does not disclose the names and liability amounts for newly terminated plans that were classified as probable claims because of its concerns of compromising PBGC’s position during litigation and negatively affecting the economic health of plan sponsors. However, PBGC could better describe the impact of new claims on its reported net financial position when announcing new plan terminations in its press releases. Therefore, we recommend that PBGC consider disclosing, in its press releases, whether a newly terminated plan was classified as a probable and already included in its reported deficit in its annual financial statement. 
To improve the transparency of the interest rate assumptions PBGC uses to calculate its liabilities, we recommend that PBGC make its interest rate methodology more widely available to the public. In doing so, PBGC should consider making this information available on its Web site. We provided a draft of this report to PBGC, FDIC, the Department of Labor (Labor), SEC, and the Department of the Treasury (Treasury). PBGC provided written comments, which appear in appendix I. PBGC's comments agreed with the findings and conclusions of our report. SEC provided written comments, which appear in appendix II. SEC's comments generally agreed with our findings related to SEC's Form 8-K and material liabilities. The agency's comments also provided additional information related to Form 8-K requirements and financial statement disclosure of loss contingencies. PBGC and FDIC also provided technical comments on the draft. We did not receive any comments from Labor and Treasury. We incorporated each agency's comments as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the issue date. At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, and the Executive Director of the Pension Benefit Guaranty Corporation; appropriate congressional committees; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215. Other contacts and acknowledgments are listed in appendix IV. Pension experts and others had additional concerns about the Pension Benefit Guaranty Corporation's (PBGC) methodology and practices that have been addressed by PBGC.
Pension experts and industry association representatives told us they are also concerned that PBGC's practices and disclosures overstate its economic distress by (1) including estimated probable claim liabilities in its deficit, (2) not publishing a projected date of insolvency, and (3) using a low interest rate. Each of these concerns has already been addressed by PBGC. PBGC includes estimated net losses incurred from probable terminations in its liabilities when calculating its deficit. Some experts are concerned that PBGC's deficit is largely composed of liabilities from probable plans that might never terminate. In addition, these parties told us that by only booking the net claim, instead of booking the assets and liabilities separately, PBGC is making its own funded ratio look worse. PBGC officials agreed that this practice may make their funded ratio lower but said it has no impact on the size of its deficit. PBGC officials said that according to FAS 5, PBGC is required to include the expected net claims from probable claims in its financial statement if the loss is probable and the amount of the loss can be reasonably estimated. Furthermore, the accounting standards do not permit PBGC to separately book plan assets and liabilities of probable claims until PBGC takes over the plan. Pension experts and industry association representatives also told us that PBGC publicizes its large financial deficit without publishing an estimated date of insolvency. While some experts we spoke to understand that PBGC has enough assets to pay its promised obligations for a number of years, they are concerned that by not announcing a date of insolvency, PBGC leaves the impression that participants are at imminent risk of not receiving their benefits.
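The funded-ratio point above can be made concrete with a small worked example. All figures below are purely hypothetical and are not PBGC data; the sketch only shows the arithmetic of net versus gross booking.

```python
# Hypothetical sketch: booking a probable claim net (liability minus
# plan assets) rather than gross adds the same amount to the deficit
# but yields a lower funded ratio. Figures are illustrative only.

existing_assets = 60.0       # insurer's own assets ($ billions, hypothetical)
existing_liabilities = 70.0  # insurer's own liabilities ($ billions, hypothetical)

plan_assets = 8.0            # probable plan's assets
plan_liabilities = 10.0      # probable plan's benefit liabilities

# Net booking (the practice before trusteeship): only the expected
# net loss of $2 billion is added, entirely on the liability side.
net_ratio = existing_assets / (existing_liabilities + (plan_liabilities - plan_assets))
net_deficit = (existing_liabilities + (plan_liabilities - plan_assets)) - existing_assets

# Gross booking (permitted only once the plan is taken over): both
# sides of the balance sheet grow.
gross_ratio = (existing_assets + plan_assets) / (existing_liabilities + plan_liabilities)
gross_deficit = (existing_liabilities + plan_liabilities) - (existing_assets + plan_assets)

print(f"Net booking:   funded ratio {net_ratio:.3f}, deficit ${net_deficit:.0f}B")
print(f"Gross booking: funded ratio {gross_ratio:.3f}, deficit ${gross_deficit:.0f}B")
# The deficit is identical either way, but the net-booked ratio is lower,
# consistent with PBGC officials' statement that the practice affects the
# funded ratio without changing the size of the deficit.
```

The example illustrates why both the experts' observation (the ratio looks worse) and PBGC's response (the deficit is unaffected) can be true at the same time.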
PBGC officials told us that a projected date of insolvency for the single-employer program is not calculated because there are many uncertain variables that will affect PBGC's future cash flows, and it is not possible to reasonably project a date of insolvency with any accuracy. The uncertainty of when PBGC will trustee new plans, how those claims will affect PBGC's future cash flows, and what PBGC's revenue from investment returns will be makes it impossible to make reasonable predictions of the date of PBGC's insolvency. However, PBGC officials reported that the agency's analysis has shown that there is less than a 10 percent chance that its single-employer program will not have sufficient assets to pay guaranteed benefits through 2020. Some pension experts, industry association representatives, and financial analysts disagree with PBGC's choice of the interest rate used when calculating the value of its liabilities and argue that this interest rate is unrealistically low, thereby overstating the value of its reported deficit. These experts said that PBGC should be using a higher interest rate that is more in line with current corporate economic conditions. According to PBGC officials, the agency's interest factors are based on the rate at which a private insurance company would charge plan sponsors to take on a plan's promised benefits, as determined by a survey of annuity prices. The survey ensures that PBGC's termination values reflect the current market price of terminating a plan. PBGC officials also noted that its auditors have issued unqualified opinions on its financial statements, which include its liabilities for probable losses and the interest factors used to calculate these liabilities. In addition, the American Academy of Actuaries conducted a study that found that PBGC's procedures accurately reflected annuity prices.
Tamara Cross, Assistant Director; Raun Lazier, Analyst-in-Charge; Diana Blumenfeld; Joseph Applebaum; Richard Burkard; Robert Dacey; Kimberly McGatlin; Jonathan McMurray; James McTigue Jr.; and Roger J. Thomas made important contributions to this report. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Pension Benefit Guaranty Corporation's (PBGC) single-employer insurance program insures the pension benefits of over 34 million participants in almost 29,000 private sector defined benefit pension plans. The increase in PBGC's probable claims has raised questions about PBGC's monitoring and financial disclosure practices, including whether the information that PBGC discloses is sufficient for interested parties to understand the effect on PBGC's financial condition. GAO examined (1) the steps that PBGC takes to monitor and ensure the accuracy of its probable claims, (2) how PBGC's financial liability reporting compares with that of publicly traded companies, and (3) the steps PBGC has taken to improve the transparency of its financial reporting and whether additional improvement is needed. PBGC takes steps to monitor and ensure the accuracy of its single-employer probable claims forecasts. PBGC reported it monitors its probable claims on an ongoing basis by contacting plan sponsors to obtain certain plan financial information, reviewing filings submitted by probable plans to conduct a risk analysis, and performing valuations to determine the present value of net probable claims and the expected date of probable plan termination. To ensure the accuracy of its probable claims, PBGC reported that it uses an automated system and available plan financial data to calculate the assets and liabilities for probable plans. PBGC and public companies have different practices for disclosing certain information about liability settlements, including probable losses, that arise from the differences between PBGC's responsibilities and disclosure policies and the Securities and Exchange Commission's (SEC) requirements for public companies.
While PBGC and public companies follow the same accounting standards for recording probable losses in their annual financial statements, they each follow different policies and requirements when reporting information about probable losses throughout the fiscal year. When reporting information on liability settlements, public companies are required to follow the standards set forth by SEC requirements, while PBGC, which is not subject to SEC requirements, follows its own set of policies and procedures. GAO found that PBGC's disclosure practices regarding probable losses are more comparable to those of the Federal Deposit Insurance Corporation (FDIC). PBGC has made efforts to improve the transparency of the information it discloses about its financial condition, but pension experts, financial analysts, and others believe that additional improvements are still needed. PBGC has recently taken steps to include more information about its methodology for determining probable claims in its annual reports and make more detailed information on its financial condition available on its Web site. However, pension experts, analysts, and industry association representatives still have concerns about transparency. Many stated that the press releases PBGC issues that announce newly terminated plans do not provide the public with enough information to determine the financial impact of such plans on PBGC's published deficit. In addition, these parties expressed concern about the lack of transparency regarding the methodology PBGC uses to determine the interest rate it uses to calculate its liabilities. Specifically, these parties told us that the fact that PBGC does not widely disclose the interest rate methodology contributes to ambiguity about PBGC's assumptions and means that these parties are unable to fully assess PBGC's financial condition.
In March 2003, the United States—along with the United Kingdom, Australia, and other members of the coalition—began combat operations in Iraq. The original “coalition of the willing” consisted of 49 countries (including the United States) that publicly committed to the war effort and also provided a variety of support, such as direct military participation, logistical and intelligence support, over-flight rights, or humanitarian and reconstruction aid. The term “coalition of the willing” refers to those countries that declared political support for the war effort; not all of these countries contributed troops to multinational operations. Between December 2003 and May 2007, 39 countries (including the United States)— some of which were not original coalition members—provided troops to support operations in Iraq. Three sources of funding help support non-U.S. coalition troops in Iraq: coalition support funds, lift and sustain funds, and peacekeeping operations funds. First, the Emergency Wartime Supplemental Appropriations Act of 2003 authorized DOD to use up to a certain amount of its operations and maintenance funds to reimburse countries for the logistical and military support they provided to U.S. military operations in Iraq. DOD refers to these funds as coalition support funds. Congress has continued to make such funds available in each subsequent fiscal year. Second, DOD’s annual Appropriations Act in 2005 authorized DOD to use funds from its operations and maintenance accounts to provide supplies and services; transportation, including airlift and sealift; and other logistical support to coalition forces supporting military and stability operations in Iraq. DOD refers to these funds as lift and sustain funds. This authority has also been continued in subsequent appropriations acts. 
According to a DOD official, both coalition support funds and lift and sustain funds are used for any requirements that could be appropriately paid for from operations and maintenance accounts, including airlift, sealift, and sustainment services such as feeding and billeting for coalition troops, among other things. In addition, a DOD official stated that both of these funds are used to support nations whose economic conditions prevent them from fully funding their troops’ presence in Iraq. The key distinction between the coalition support and the lift and sustain funds is that coalition support funds are used to reimburse countries for costs they incur, and lift and sustain funds are used to reimburse U.S. military departments for services they provide to support eligible countries. Third, the State Department provided peacekeeping operations (PKO) funds in 2003 and 2004 to provide basic supplies and equipment such as armor and medical supplies to coalition troops in Iraq. These funds were used to make initial equipment purchases for countries participating in Polish and U.S.-led divisions in Iraq. Many nations and various international organizations are supporting the efforts to rebuild Iraq through multilateral or bilateral assistance. U.N. Security Council Resolution 1511 of October 16, 2003, urged member states and international and regional organizations to support the Iraq reconstruction effort. On October 23-24, 2003, an international donors conference was held in Madrid, with 76 countries, 20 international organizations, and 13 nongovernmental organizations participating. As of May 2007, 25 coalition nations were contributing about 12,600 troops to multinational force operations in Iraq. This compares to the 145,000 U.S. troops in Iraq, for the same time period. See figure 1 for a comparison of U.S. and coalition troops from December 2003 through May 2007. Non-U.S. coalition troops represent about 8 percent of multinational forces in Iraq as of May 2007. 
Although the coalition has trained and equipped about 331,000 Iraqi army and police forces, we do not include Iraqi security forces (ISF) in our analyses. As we have reported, these data provide limited information on the forces’ capabilities, effectiveness, and loyalties. For example, DOD reported in March 2007 that the number of ISF forces present for duty is one-half to two-thirds of the number trained and equipped. In addition, the number of coalition forces has declined by 47.5 percent— from 24,000 in December 2003 to 12,600 in May 2007, as shown in figure 2. Although the number of troops is declining, three countries—the United Kingdom, Poland, and the Republic of Korea—have led operations in three of seven security sectors in Iraq (see figure 3). Since July 2003, the United Kingdom has led operations in one of the seven sectors—Multinational Division-Southeast—in southern Iraq in the area around Basra. As of October 2006, coalition troops in this sector were from Italy, Japan, Australia, Romania, Denmark, Portugal, Czech Republic, and Lithuania. Since that time, Italy and Portugal have withdrawn troops from military operations in Iraq. The United Kingdom has provided the largest number of non-U.S. coalition troops, peaking at 46,000 from March through April 2003, then declining to 7,100 in November 2006. British forces have conducted combat operations to improve the security environment and have trained Iraqi security forces, among other things. They had sustained 147 fatalities as of May 1, 2007. The United Kingdom announced that it will begin withdrawing troops in 2007 but has pledged to maintain a presence in Iraq into 2008. Poland has led operations in the MND-Central South, which is south of Baghdad, since September 2003. As of May 2007, non-U.S. coalition troops in this sector were from Poland, Armenia, Bosnia, Denmark, Kazakhstan, Latvia, Lithuania, Mongolia, Romania, El Salvador, Slovakia, and Ukraine. 
Poland's highest troop level was 2,500, declining to 900 by October 2006. Poland's troops have conducted joint combat operations and performed humanitarian, medical, advisory, and training missions, and have sustained 20 fatalities. The Republic of Korea has led operations in MND-Northeast from Irbil City in the area north of Kirkuk since September 2004. Its peak number of troops was 3,600 in that year but declined to 1,600 in March 2007. Its missions have included medical, humanitarian, and reconstruction efforts. The Republic of Korea's government is to draw up a timetable in 2007 for withdrawing its troops from Iraq. The number of contributing countries has decreased from 33 in December 2003 to 25 in May 2007. Figure 4 shows the countries that have contributed troops between 2003 and 2007. According to State Department officials and government press releases, the decline in the number of troops can be attributed to completion of missions, domestic political considerations, and the deteriorating security condition in Iraq. As the figure shows, eight countries withdrew their troops from Iraq during 2004. For example, in mid-April 2004, the new government of Spain announced that it would withdraw its 1,300 troops from Iraq. The government withdrew the troops much earlier than the United States expected, after violence escalated in the Spanish area of operations in Iraq. Shortly thereafter, Honduras and the Dominican Republic announced they would also withdraw their national contingents from the multinational force, which they did the same year. Some countries that have provided troops to the multinational force in Iraq are not financially able to support those troops in the field for extended periods of time or may need assistance in preparing their troops for this type of operation. Since 2003, the United States has provided about $1.5 billion to 20 countries.
Of the $1.5 billion spent to support these troops, about $725.9 million was reimbursed to countries, and about $702 million was reimbursed to U.S. military departments that provided support to non-U.S. coalition troops. See table 1 below for the total amount of support provided for non-U.S. coalition troops in Iraq. Since 2003, the departments used about $1 billion of the approximately $1.5 billion (71.5 percent) for sustainment services such as food, supplies, and base operations services such as communications and equipment. The departments used the remaining funds to support other operational requirements: about $212 million to support Jordan's border operations; about $43 million to support hospital operations; and about $125 million to support lift requirements. Nineteen coalition nations and Jordan received support from these funds. As displayed in table 2, Poland received the largest amount of support—about $988 million, or 66 percent of total funding—for requirements sustained in its capacity as Commander of the MND-Central South sector. However, the support provided to Poland was not solely for its own troops but for the coalition troops under its command—those of Armenia, Slovakia, Denmark, El Salvador, Ukraine, Romania, Lithuania, Latvia, Mongolia, Kazakhstan, and Bosnia-Herzegovina. According to a DOD official, DOD as a matter of policy confined its support to those coalition countries that it deemed less capable of absorbing the costs associated with participating in operations in Iraq. However, one exception to this policy was the decision in 2005 to reimburse the United Kingdom about $5.6 million for improvements it made to the Royal Air Force (RAF) base at Akrotiri on Cyprus to accommodate U.S. requirements for lift and refueling needs. Jordan was the next largest recipient of support, receiving reimbursement or services worth about $300 million for border operations and other activities.
It is important to note that the United States also has provided security assistance funds to develop and modernize the militaries of several countries contributing to operations in Iraq. Security assistance has included military equipment, services, and training. From fiscal year 2003 through 2006, the United States provided about $525 million in security assistance to 10 countries contributing troops to Iraq. In addition, since 2003, the United States has provided Jordan about $1.34 billion in security assistance. International donors have pledged about $14.9 billion in support of Iraq reconstruction. In addition, some countries exceeded their pledges by providing an additional $744 million, for a total of $15.6 billion, according to the State Department. Of this amount, about $11 billion, or 70 percent, is in the form of loans. As of April 2007, Iraq had accessed about $436 million in loans from the International Monetary Fund (IMF). The remaining $4.6 billion is in the form of grants, to be provided multilaterally or bilaterally; $3.0 billion has been disbursed to Iraq. See table 3 for pledges made at Madrid and thereafter for Iraq reconstruction. In addition, 16 of the 41 countries that pledged funding for Iraq reconstruction also pledged troops to the multinational force in Iraq. About $11 billion, or 70 percent, of the $14.9 billion pledged in support of Iraq reconstruction is in the form of loans. The majority of these loans were pledged by the World Bank ($3 billion), the IMF (up to $2.55 billion), Iran ($1 billion), and Japan ($3.4 billion), according to the State Department. In September 2004, the IMF provided a $436 million emergency post-conflict assistance loan to facilitate Iraqi debt relief. The World Bank has approved loans for $399 million from its concessional international development assistance program, which the Iraqis have not accessed.
According to the State Department, the Iraqis lack a system for approving projects supported by donor loans, which has impeded efforts by the World Bank and Japan to initiate loan-based projects. In addition, Iraq has not yet accessed loans from Iran, according to the State Department. Further, according to IMF reporting as of February 2007, Iraq has received about $39 billion in debt reduction from commercial and bilateral creditors. As of April 2007, international donors for Iraq reconstruction had pledged $3.9 billion in grants to be provided multilaterally and bilaterally. In addition, some countries exceeded their pledges by providing an additional $744 million, according to the State Department. Of the total grants, donors provided about $1.6 billion multilaterally to two trust funds, one run by the U.N. Development Group (UNDG) and the other by the World Bank. Donors have provided about $1.1 billion to the U.N. trust fund and $455 million to the World Bank trust fund. As of March 2007, the U.N. had disbursed about $612 million to support, among other things, Iraq's elections, infrastructure projects, health and nutrition, agriculture and natural resources, and assistance to refugees. As of March 2007, the World Bank fund had disbursed about $96 million to support, among other things, capacity building, school rehabilitation and construction, and health rehabilitation. Donors provided about $2.3 billion in bilateral grants for Iraq reconstruction efforts. As of April 2007, these grants had funded more than 400 projects as reported by Iraq's Ministry of Planning and Development Cooperation. According to State, these projects include about $1 billion in grant assistance from Japan, $775 million from the United Kingdom, $153 million from the Republic of Korea, $110 million from Canada, and $100 million from Spain.
These funds have been provided as bilateral grants to Iraqi institutions, implementing contractors, and nongovernmental organizations for reconstruction projects outside the projects funded by the UN and World Bank trust funds. Mr. Chairman, this concludes my statement. I will be happy to answer any questions you or the members of the subcommittee may have. For questions regarding this testimony, please call Joseph A. Christoff at (202) 512-8979. Other key contributors to this statement were Muriel Forster, David Bruno, Monica Brym, Dorian Herring, Lynn Cothern, Judith McCloskey, and Mary Moutsos. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In March 2003, a U.S.-led multinational force began operations in Iraq. At that time, 48 nations, identified as a "coalition of the willing," offered political, military, and financial support for U.S. efforts in Iraq, with 38 nations other than the United States providing troops. In addition, international donors met in Madrid in October 2003 to pledge funding for the reconstruction of Iraq's infrastructure, which had deteriorated after multiple wars and decades of neglect under the previous regime. This testimony discusses (1) the troop commitments other countries have made to operations in Iraq, (2) the funding the United States has provided to support other countries' participation in the multinational force, and (3) the financial support international donors have provided to Iraq reconstruction efforts. This testimony is based on GAO's prior work and data collected for this hearing. Although we reviewed both classified and unclassified documents, the information in this statement is based only on unclassified documents. We completed this work in accordance with generally accepted government auditing standards. As of May 2007, 25 countries were contributing 12,600 troops to multinational forces in Iraq. Compared with 145,000 U.S. troops, coalition countries represent about 8 percent of multinational forces in Iraq. From December 2003 through May 2007, the number of coalition troops decreased from 24,000 to 12,600; the number of coalition nations contributing troops decreased from 33 to 25. The United Kingdom, Poland, and the Republic of Korea are responsible for leading operations in three of seven security sectors in Iraq. In addition, coalition troops have performed humanitarian, medical, and reconstruction missions. Some have provided combat capabilities, such as infantry and explosive ordnance capabilities. 
The United States has spent about $1.5 billion to transport, sustain, and provide other services for military troops from 20 countries other than the United States and Iraq. The United States used about $1 billion of the $1.5 billion to feed, house, and equip troops from these countries. In terms of allocation by country, about $988 million, or 66 percent, was used to support Poland and the countries under its command, and $300 million, or 20 percent, supported Jordan for border operations and other activities. In addition to support for operations in Iraq, the United States, through the State Department, has provided about $1.9 billion in security assistance for military training and equipment to 10 coalition members and Jordan since 2003. As of April 2007, international donors had pledged about $14.9 billion for reconstruction efforts in Iraq. Some countries exceeded their pledges by an additional $744 million for a total of $15.6 billion. About $11 billion, or 70 percent, of these pledges are loans, with the remaining $4.6 billion in the form of grants. As of April 2007, Iraq had accessed about $436 million in loans and $3 billion in grants.
While the federal government is deeply involved in many aspects of air travel, it is not involved in the creation or distribution of airline tickets. Instead, such matters are left to the airline industry. There are a variety of sources and types of airline ticket stock. For example, when airlines issue tickets directly to passengers, they use individualized ticket stock bearing their corporate name. However, the vast majority of U.S. travel agencies obtain their ticket stock from ARC—an airline-owned corporation created, among other things, to accredit U.S. agencies and to facilitate the distribution of airline ticket stock to the agencies. In 1998, there were 32,694 ARC-accredited retail travel agencies, to which ARC issued 1.1 billion pieces of ticket stock. The ticket stock is blank except for the inscription of a unique identifying number and has no value until it is made into a ticket. Over 90 percent of the ticket stock ARC issued in 1998 was in automated form. Travel agents imprint this stock by computer with data—such as flight dates, flight numbers, and fares—generated from the airlines’ computer reservation systems. According to ARC, a typical ticket is composed of six to seven pieces of ticket stock—the airline auditor’s record, an agent’s record, several flight coupons (one for each segment of a traveler’s journey), a charge form, and the passenger’s receipt. When combined, these pieces of stock form an airline ticket. International suppliers, including the International Air Transport Association, issue similar ticket stock. Figure 1 illustrates ARC’s automated ticket stock. Definitive information on the amount and value of ticket stock stolen annually does not exist. However, from 1989 through 1998, subscribing airlines and other worldwide suppliers of ticket stock reported 11.3 million pieces of stolen ticket stock for inclusion in ARINC’s database—an annual average of about 1.1 million. 
ARC reported the majority of these losses—8.2 million pieces (72 percent). Although the amount of stolen ARC ticket stock is sizeable, it represents less than one-hundredth of 1 percent of the over 9.2 billion pieces of ticket stock that ARC issued to travel agencies during the period. International ticket stock suppliers, including the International Air Transport Association, were the second largest source of the reported losses. They reported 2.8 million pieces of stolen ticket stock (25 percent). Eleven U.S. airlines reported the remaining 350,000 pieces of stolen ticket stock (3 percent). ARINC’s database cannot be analyzed by year, and given the predominance of ARC ticket stock losses and the absence of other information sources, we relied on the data supplied by ARC for annual information. However, ARC’s database only tracks 27 percent of the stolen stock it reported to ARINC. During 1989 through 1998, ARC’s annual status reports identified 2.2 million pieces of stolen ARC ticket stock—an annual average of about 220,000. After peaking in 1995, as shown in figure 2, the amount of ARC ticket stock reported stolen has declined considerably. ARC attributes the decline to (1) the more stringent security rules that it established in April 1996, (2) its increased fraud prevention activities, (3) increased awareness by travel agency personnel of the problem, and related to this, (4) the agents’ increased compliance with ARC’s required security measures. In 1997 and 1998, about 97 airlines voluntarily reported to ARC that 8,330 tickets were created from stolen ARC ticket stock and used for travel on their airlines. The tickets had a total face value of about $10.5 million and an average value of about $1,260. Using this average, we estimate that the 447,000 pieces of ARC ticket stock identified as stolen in ARC’s status reports for 1997 and 1998 could be valued at as much as $302 million—$186 million for the stock stolen in 1997 and $116 million for 1998. 
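The worst-case arithmetic behind these estimates can be sketched in a few lines of Python. This is an illustration only, assuming, as the report does, that every stolen piece is used and that each fraudulent ticket consumes only two pieces of stock. Applying the overall $1,260 average to all 447,000 pieces yields roughly $282 million, somewhat below the $302 million figure, which reflects year-by-year detail not reproduced here.

```python
# Worst-case valuation of stolen blank ticket stock: assume all of it is
# used, and that each fraudulent ticket consumes only two pieces of stock
# (one flight coupon plus the passenger's receipt), rather than the six
# to seven pieces of a typical ticket.
PIECES_PER_TICKET = 2  # worst-case minimum


def worst_case_value(stolen_pieces: int, avg_face_value: int) -> int:
    """Upper-bound dollar loss from a quantity of stolen blank stock."""
    potential_tickets = stolen_pieces // PIECES_PER_TICKET
    return potential_tickets * avg_face_value


# 447,000 pieces reported stolen in 1997-1998, at the $1,260 average
# face value of fraudulent tickets reported used in those years:
estimate = worst_case_value(447_000, 1_260)  # 281,610,000 -> about $282 million
```

Because both assumptions deliberately overstate usage, the result is an upper bound rather than an expected loss.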
The $302 million represents the “worst-case” scenario and therefore is likely to overstate the potential loss from stolen ARC ticket stock. This is because, while travel agents normally use six to seven pieces of ticket stock to create an airline ticket, we assumed that individuals creating fraudulent tickets would, wherever possible, try to minimize the amount of stock used for each ticket so as to maximize the number of tickets they could create. For example, according to ARC, individuals creating tickets with stolen stock could use—for a direct, one-way trip—as few as two pieces of stock (one for the flight and one for the passenger’s receipt) to create an airline ticket. As a result, rather than using six to seven pieces of ticket stock to create an airline ticket—as in the case of a typical ticket—we assumed that only two pieces would be needed and used. Likewise, we assumed that all of the stolen ticket stock would be used. This is extremely unlikely because, during 1997 and 1998, officials from the four airlines we contacted said that their airlines had confiscated over 1,900 pieces of stolen ARC ticket stock before the stock could be used. The airlines bear the financial risk for most situations involving stolen ticket stock, but their losses are small relative to their total annual revenue and represent only a small portion of their total losses from all airline-related fraud. In contrast, losses incurred by travel agencies held liable for not adequately safeguarding ARC ticket stock can result in serious financial hardships. Of the 447,000 pieces of ticket stock stolen in 1997 and 1998, we estimate that the potential financial risk to the airlines and travel agencies would be about $151 million each. In actual practice, however, travel agencies would likely incur significantly smaller losses because airlines frequently settle for far less than the amounts that travel agencies owe. U.S. 
tax officials believe that the financial consequences to the federal government from the use of stolen ticket stock are minor and of limited interest for auditing purposes compared with higher-risk tax issues, such as the depreciation of airline assets. Airlines incur the financial burden for stolen ticket stock under a wide range of scenarios. For example, the airlines are liable for losses resulting from the use of their own stolen ticket stock. From 1989 through 1998, 11 U.S. airlines reported the loss of 350,000 pieces of their own blank ticket stock to ARINC’s database. The actual value of the airlines’ losses is not known because the database does not include information on the value of used ticket stock. In addition to losses from the use of their own stolen ticket stock, airlines bear the financial burden for much of the ARC stock that is reported stolen. For example, airlines bear the cost of crimes committed by travel agencies against the airline industry, including missing stock from terminated agencies and travel agency “bust-outs.” Bust-outs involve the fraudulent acquisition or retention of ARC ticket stock by a travel agency. For example, according to ARC officials, because it is relatively easy to obtain ARC’s accreditation as a travel agency, some persons set up agencies solely to acquire and flee with ARC ticket stock. Moreover, according to ARC officials, travel agencies sometimes file for bankruptcy or otherwise discontinue their operations without paying for the tickets they have sold and without returning their remaining inventory of ARC ticket stock. In 1997 and 1998, ARC reported 33 travel agency terminations and bust-outs involving 150,000 pieces of ARC ticket stock. These losses represent about 34 percent of ARC’s total ticket stock losses during this period. In many cases, airlines also suffer losses from crimes perpetrated against travel agencies. 
For example, ARC holds travel agencies harmless for the use of ARC ticket stock that is stolen during an armed robbery. In 1997 and 1998, about 23,000 pieces of ARC ticket stock were stolen in 29 armed robberies of travel agencies. In addition, if the agency followed ARC’s ticket security procedures, ARC holds travel agencies harmless for ticket stock that is (1) stolen in burglaries, daytime thefts, and other thefts or (2) reported missing. In 1997 and 1998, 232 travel agencies reported the loss of about 274,000 pieces of ARC ticket stock from these types of crimes. ARC determined that 81 of these agencies, which had reported a total of about 45,000 pieces of stolen ticket stock, had followed ARC’s security procedures, and thus the airlines were responsible. Taken together, the airlines are responsible for the use of 217,000 pieces of ARC ticket stock stolen in 1997 and 1998. In the unlikely event that all of this stock is used, we estimate that the airline industry could incur about $151.3 million in losses—$94.7 million for ticket stock losses in 1997 and $56.6 million for losses in 1998. Figure 3 summarizes, by year and type of incident, the airlines’ potential losses resulting from this stolen ticket stock. In addition to the losses for which they are responsible under ARC’s policy, airlines frequently assume a large portion of a travel agency’s financial liability. For example, according to representatives of 15 travel agencies held liable for ARC stock stolen in burglaries in 1997, their agencies were able to negotiate 33 settlements with 12 airlines. Ten of the 12 airlines agreed to forgive 90 percent of the amount owed by the agencies in 21 of the 33 settlements. For 10 of the 12 other settlements, airlines agreed to forgive 67 percent to 85 percent of the total amount owed by the agencies. The remaining two settlements were for 50 percent and 25 percent of the amounts owed. 
Many of the airlines also agreed to waive the travel agencies’ liability for the future use of the remaining stolen ticket stock—a factor that is likely to increase the airlines’ future losses. While airline officials acknowledge that the losses from stolen ticket stock are considerable, they note that the airlines’ losses are small relative to the billions of dollars that the airlines earn annually. Furthermore, according to surveys of the airline industry, the losses are minor compared with the airlines’ total losses from other types of airline fraud. Specifically, according to a survey of 31 airlines in 1995, the losses from stolen ticket stock accounted for about 15 percent of all external fraud losses. The largest source of fraud (28 percent) resulted from abuses of the airlines’ frequent flyer programs. Another survey, conducted in 1996, identified tariff abuse as the primary source of airline-related fraud. Other types of airline fraud included mail fraud; cargo theft; and internal airline fraud, such as expense account abuses and inventory fraud. ARC determined that travel agencies had not adequately protected ARC ticket stock in 151 of the 232 incidents involving travel agency burglaries, daytime thefts, other thefts, and situations involving missing ticket stock in 1997 and 1998. As a result, ARC deemed the agencies liable for the use of about 230,000 pieces of ARC ticket stock. In the unlikely event that all of this ticket stock is used, we estimate that—absent settlement agreements—it could cost the travel agency industry as much as $151.1 million—$91.6 million for losses in 1997 and $59.5 million for the 1998 losses. Figure 4 summarizes, by year and type of incident, the travel agencies’ potential losses resulting from the losses of ARC ticket stock in 1997 and 1998. 
Potential losses of this magnitude create serious financial hardship for travel agencies even when airlines forgive the majority of the travel agencies’ existing debt as well as the agencies’ liability for the future use of outstanding stolen ARC stock. As of the end of December 1998, 33 of the 232 travel agencies victimized by thefts or otherwise missing ARC ticket stock in 1997 and 1998 were no longer in business. Of those found liable for the loss, 24 of 151 were no longer in business, including 3 of the 15 travel agencies we contacted. We could not determine the extent to which the theft of ARC ticket stock was a factor in the agencies’ failure. However, the owners of the three agencies told us that their actual and potential liability for the loss of ARC ticket stock was the primary factor in closing their businesses. These owners were held liable for the use of over 15,000 pieces of ticket stock stolen in burglaries in 1997, and they closed their businesses without attempting to negotiate lesser payments with the airlines. One of the three former owners owed 12 airlines between $100,000 and $500,000, and the other two each owed more than $500,000 to numerous airlines. Representatives of 7 of the 12 remaining travel agencies told us that their airline debts could force them out of business. While some airlines settle for less than they are owed and waive liability for the future use of stolen ARC ticket stock, representatives of four of the seven agencies noted that other airlines refuse to (1) negotiate at all or (2) waive a travel agency’s future liability for the use of stolen ARC ticket stock. As a result, travel agencies often owe thousands of dollars. For example, one travel agency we contacted owed nine airlines between $100,000 and $500,000. While the agency settled with seven of the nine airlines, paying them between $10,000 and $50,000, two airlines refused to settle for less than the total amount owed. 
Furthermore, only two of the seven airlines agreed to waive the agency’s future liability. Consequently, the travel agency is liable for the future use of stolen ARC ticket stock on the other five airlines. According to the travel agency owner, a potentially huge financial burden continues that could eventually force the agency out of business. While still in business at the end of March 1999, five agencies we contacted told us that they had incurred a variety of economic hardships, including having to sell or mortgage their assets, downsize their operations, lay off staff, and forgo their salaries to pay their airline debts. For example, one agency owner told us that she had sold her home and her car and used her retirement savings to pay $50,000 to $100,000 in settlements to seven airlines. Two of the other three airlines she owed, however, refused to settle and instead turned the debts over to collection agencies. Many of the other travel agencies reported similar experiences. For example, while generally successful in negotiating greatly reduced payments, seven agencies that had been billed by the airlines reported that at least two airlines refused to negotiate reduced payments, and five agencies indicated that at least one airline refused to waive the agencies’ future liability. Moreover, all 14 agencies that had been billed by the airlines reported that the airlines turned their debts over to a collection agency. Similarly, 12 of the 14 agencies that had been billed for the use of stolen ARC ticket stock told us that at least one airline either temporarily or permanently terminated business with them—a factor that, depending on an airline’s dominance in an agency’s market, can greatly diminish an agency’s financial viability. Seven of these 12 agencies in operation as of March 31, 1999, expressed serious concerns about the future viability of their businesses. 
Representatives from 12 of the 14 travel agencies that had been billed by the airlines also expressed concern about the accuracy of the airlines’ bills. This is because airlines typically bill travel agencies for the face amount shown on the ticket, regardless of whether (1) the fare is accurate or (2) the traveler actually completed the entire trip. At our request, one airline compared its actual fares on the day each ticket was issued with the fares printed on 66 tickets that were created from stolen stock and used on the airline from July 1998 through September 1998. The fares shown on 61 of the 66 tickets were correct. The fares shown on the remaining five tickets were underpriced by $19 to $130. According to airline officials, the airlines do not adjust their bills to travel agencies to reflect overpriced tickets or portions of tickets that were not used. Such adjustments are not necessary, according to one airline official, because airlines routinely consider the possibility of overpriced and underused tickets in arriving at their settlement decisions with travel agencies. Several efforts have been undertaken to increase travel agencies’ awareness of the need for adequately securing ARC ticket stock to reduce the likelihood and impact of losses from thefts. For example, owing to the prevalence of travel agency burglaries in the Chicago area, ARC initiated a crime prevention program for agents in and around Illinois. Specifically, between July 1998 and December 1998, ARC hired a retired law enforcement officer to assess the adequacy of the travel agencies’ compliance with ARC’s ticket security requirements and, where warranted, provide advice about needed improvements. In April 1998, ARC also began (1) sending travel agencies information about recent crimes, including photographic images of suspected thieves, and (2) increasing its interactions with local and federal law enforcement agencies. 
More recently, ARC has begun advocating that travel agencies reduce their exposure to crime by minimizing the amount of ticket stock they maintain on their premises. According to ARC, this can be accomplished by using less ticket stock when an agent creates an airline ticket. For their part, the Association of Retail Travel Agents and the American Society of Travel Agents—both of which helped develop ARC’s ticket security requirements—are encouraging travel agencies to adhere to ARC’s requirements because doing so ensures that the agencies will not be held accountable for any losses. Federal tax law allows airlines, travel agencies, and other taxpayers to deduct (write off) from their taxes a variety of losses, including losses from bad debt. This has resulted in concern within the travel agency community that airlines may not be properly accounting for (1) losses resulting from the use of stolen ticket stock—a form of bad debt, (2) revenues received from travel agency settlements, and (3) federal taxes due on the airline tickets. The Internal Revenue Service (IRS) is aware of these concerns. However, its knowledge of airline accounting practices, including the airlines’ treatment of bad debt, has led it to conclude that the tax consequences to the government are “not material” compared with higher-risk tax issues, such as the depreciation of airline assets. According to IRS, its specialist for the airline industry and others involved in airline audits are considering examining the issues during future airline audits. However, according to IRS, any decision to examine the issues will be made on a case-by-case basis and only after full consideration of each issue’s relative importance compared with other airline tax issues. Appendix III provides information about the airlines’ tax accounting practices. 
The traveling public does not appear to be at any greater risk from terrorists or illegal aliens who could use tickets created from stolen ticket stock than they are from individuals who travel on legitimate tickets. In part, this is because the airline industry believes that unsuspecting passengers purchase and use the majority of these tickets. While efforts to combat ticket stock thefts have focused on identifying passengers who use stolen ticket stock, the airline industry believes that more effort is needed to target individuals responsible for stealing and distributing the ticket stock as a means to reduce the thefts. According to federal law enforcement and intelligence officials from the Central Intelligence Agency, the Customs Service, the Immigration and Naturalization Service (INS), the National Security Agency, the Secret Service, the Federal Bureau of Investigation (FBI), and officials at four airlines, no individual they know of has traveled on tickets created from stolen ticket stock to conduct terrorist activities. The officials stressed that terrorists are unlikely to knowingly use tickets created from stolen stock because doing so is likely to increase their risk of detection. Moreover, they pointed out that organized terrorist groups have sufficient financial backing and access to falsified personal identification to purchase legitimate tickets. Officials from the Federal Aviation Administration agreed with this assessment. While no terrorists are known to have used stolen ticket stock for travel, the Customs Service identified a passenger with stolen stock in his possession. The individual was attempting to enter the United States through Miami and had links to two terrorist groups. Further investigation revealed that he had refunded, for cash, numerous stolen tickets in Europe. ARC and two of the four airlines we contacted suspect that illegal aliens often use tickets created from stolen ticket stock for travel. 
However, they could not provide definitive evidence that this is the case. Their view, according to airline industry officials, is supported by analyses of the usage of stolen ticket stock, including the prevalence of specific (1) types of surnames (Hispanic and Middle Eastern) and (2) combinations of origin and destination locations within the United States. For example, of the 8,330 tickets created with stolen ARC ticket stock that airlines reported to ARC as used in 1997 and 1998, about 54 percent (4,476 tickets) were for flights originating from one of three U.S. locations—Los Angeles, California; Phoenix, Arizona; or San Diego, California. Similarly, about 28 percent (2,342 tickets) were for flights destined for one of five U.S. locations—Los Angeles, California; Charlotte, North Carolina; Chicago, Illinois; Atlanta, Georgia; or the New York/New Jersey area. ARC and other industry officials believe that the illegal aliens are unknowingly sold tickets created from stolen stock as part of a package deal that includes fraudulent personal identification and the promise of a job. Only INS can determine if an individual is an illegal alien. INS inspectors must interview the suspected passengers and scrutinize their identification to determine if they are in the country illegally. Likewise, only the airlines can definitively establish whether a ticket has been created from stolen ticket stock. As a result, while illegal aliens have used tickets created from stolen ticket stock, the extent of their travel by this means is not known because INS and the airlines have not routinely worked together to identify illegal aliens using stolen ticket stock. Consistent with its mission to prevent individuals from illegally entering the United States, INS does not typically scrutinize domestic travelers. Instead, INS concentrates its resources at U.S. ports of entry, such as the five destination locations discussed above. 
As a result, unless requested to intervene by an airline, INS would not normally come into contact with illegal aliens using stolen ticket stock for domestic travel. One exception was in February 1999 when INS inspectors apprehended 162 illegal aliens on three domestic flights at the Sky Harbor International Airport in Phoenix, Arizona. INS confiscated the illegal aliens’ tickets and returned them to the airlines so that they could cancel the seat reservations. However, while one of the two airlines had the capacity to determine whether the tickets were produced from stolen ticket stock, it did not do so. Instead, it—like the other airline—returned the tickets to INS, which, in turn, gave them to the illegal aliens before deporting them. As a result, no one knows whether, or to what extent, the 162 illegal aliens were traveling domestically on stolen ticket stock. Given the airlines’ and INS’ separate capabilities, airlines sometimes request INS’ assistance in determining whether individuals they suspect of traveling on stolen ticket stock are illegal aliens. For example, in 1997, one airline determined that 22 percent of its passengers using tickets created from stolen stock were destined for, or passing through, the Minneapolis/St. Paul Airport. The airline requested INS’ assistance, and between January and February 1998, INS responded to five flights involving passengers who the airline suspected were illegal aliens. Twelve passengers were detained, all of whom were using tickets created from stolen stock. INS found that 11 of the 12 passengers were illegal aliens. None of the passengers had a criminal history, so local law enforcement officials declined to prosecute them. Instead, the illegal aliens were turned over to INS for deportation. While INS and the airline industry have not routinely worked together to identify illegal aliens using stolen ticket stock, cooperation in this area is increasing. 
For example, in 1996, INS initiated a program for detecting stolen ticket stock at the Miami International Airport in Florida and the John F. Kennedy International Airport in New York. While INS could not provide details on the program’s success, an INS inspector told us that, in cooperation with airlines at the Miami International Airport, he had identified about 100 illegal aliens using tickets created from stolen stock between July 1996 and February 1998. Moreover, in April 1999, INS—with ARC’s assistance—began training its inspectors in manual methods for detecting stolen ticket stock as an additional tool in interdicting illegal aliens. According to INS officials, INS is also interested in securing access to the airline industry’s database on stolen and other fraudulent ticket stock and, as a result, intends to request the airline industry’s cooperation in this area. Efforts to combat the theft of ticket stock have focused on passengers who use tickets created from stolen stock. Recognizing that this is only part of the problem, airline and travel agency representatives told us that additional efforts to identify and arrest those responsible for stealing and distributing the ticket stock would significantly reduce the supply of stolen ticket stock. Typically, the theft and distribution of stolen ticket stock involves federal crimes, such as the interstate transportation of stolen goods and mail and wire fraud, which fall under the purview of the FBI. While ARC believes it has made substantial progress in reducing ARC ticket stock thefts from travel agencies, according to ARC officials, additional FBI assistance is needed to achieve more dramatic results. According to ARC, it has been unable to enlist sufficient investigative support from the FBI. FBI and U.S. 
Attorney’s Office officials told us that the theft of airline ticket stock is a “property crime” that competes with higher-priority investigations, such as crimes involving narcotics, counterterrorism, and violent crimes. In addition, the U.S. Attorney’s Office’s typical threshold for prosecuting stolen property cases exceeds $50,000. This threshold is difficult to establish for stolen airline ticket stock because it is considered valueless until used. Notwithstanding these difficulties, as of August 1998, the FBI had about 20 pending cases involving stolen airline ticket stock, 17 of which were classified as violations involving the interstate transportation of stolen property. Four of the 20 cases have since been closed for various reasons. More recently, according to FBI officials, crimes involving stolen airline ticket stock have received increased FBI scrutiny. For example, the FBI said that, in December 1998, it held a training conference for its agents in Chicago and has been devoting additional resources to address these crimes. The Miami-Dade Police Department has linked a criminal group to the theft of airline ticket stock. According to individuals involved in this case, the group is composed of the thieves who steal the stock; “fences” who buy and store the stock; salespeople who sell the tickets; and individuals who print the tickets. The salespeople are located throughout the United States and solicit buyers for the tickets through unaccredited travel agencies, newspaper advertisements, and word-of-mouth. They transmit their sales by telephone or fax to the individuals who print the tickets. These individuals imprint the passenger’s itinerary on the blank stock using home computers and send it to the applicable salesperson for delivery to the passenger. According to Miami-Dade police personnel, a salesperson normally calls the airline—about 1 to 4 days prior to the flight—to make the passenger’s reservation. 
While the ARINC database can be used to detect stolen ticket stock at airport check-ins, many airlines do not subscribe to the database. Moreover, according to the airline officials we contacted, their airlines do not routinely use the database because the time required to query it delays passenger processing. When used with ARINC’s database, other technologies—optical scanners, bar code readers, and magnetic strip readers—detect stolen ticket stock more quickly. The airline industry also has other detection initiatives under way. The steady increase in electronic ticketing, according to many airline officials, will reduce the need for airline ticket stock and thus may eventually reduce ticket stock thefts. ARINC maintains a centralized database, available since 1989, that the airline industry uses to report and detect fraudulent ticket stock. Access to the database (1) is limited to participating airlines and (2) varies depending on the type of subscription each airline purchases. According to an ARINC official, the organization had 140 subscribers to its ticket stock listing and on-line detection services in 1998, including 26 airlines that had integrated ARINC’s database into their computer systems. On-line airline subscribers can query the database for stolen ticket stock by manually entering the ticket stock’s unique identification number. If the inputted number matches the number on ticket stock that has already been reported as stolen, the database indicates a match. Officials of the four subscribing airlines we contacted stated that the ARINC database is generally effective in detecting the use of stolen ticket stock. In fact, most of the 1,900 pieces of stolen ticket stock that these airlines confiscated in 1997 and 1998 were identified using this database. Nevertheless, the officials cited several disadvantages that preclude routinely using the database. 
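The match the database performs is essentially a membership test: the ticket stock's unique identification number either appears on the list of reported stock or it does not. A minimal sketch of that logic follows, assuming the database is modeled as an in-memory set; ARINC's actual schema and query interface are proprietary, so the names and sample numbers here are hypothetical.

```python
# Hypothetical sketch of a stolen-stock lookup. ARINC's actual database
# and interface are proprietary; the names and sample numbers below are
# illustrative only.

# Stock-control numbers previously reported as stolen, lost, or missing.
reported_stock = {"0012345678901", "0098765432109"}

def check_ticket_stock(stock_number: str) -> bool:
    """Return True if the ticket's unique stock-control number matches
    a number already reported to the database (a 'hit')."""
    return stock_number in reported_stock

# A ticket printed on reported stock produces a match; unreported stock
# does not. Note that a mistyped number silently fails to match, which
# is one reason manual entry by ticket agents can produce false results.
```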
The primary drawback was the time required to query the database, which delays the processing of passengers. We observed the database in operation at two different airline ticket counters at Washington Reagan National Airport near Washington, D.C., and recorded a total query time of 6 to 10 seconds per ticket. While each search takes only seconds, according to one airline official, the cumulative time required to check every ticket on a flight—as has been recommended by some in the travel agency industry—makes the idea impractical because, in his view, flights would never leave on time. Moreover, airline ticket agents sometimes enter the number incorrectly, thereby producing a false result. In addition, airlines fear that they might detain a passenger using a legitimate ticket because suppliers, such as ARC, periodically re-use their stock control numbers. As a result, purchasers of legitimate tickets could find themselves with the same ticket number as stock that was previously reported stolen. Finally, the database cannot be used to detect altered ticket stock, which, according to one major U.S. airline, represents a significant portion—40 percent—of the fraudulent ticket stock it unknowingly accepted in 1997. Some airlines are experimenting with other types of technology that, in conjunction with the airlines' computer reservation systems and ARINC's centralized database, can help detect stolen ticket stock. For example, the four airlines we contacted had all tested or planned to test optical scanners, bar code readers, or magnetic strip readers that can quickly capture a wide range of passenger information without increasing passengers' check-in times. The latest generation of these devices is capable of reading ticket information, including the ticket's identification number; credit cards; frequent flyer cards; and machine-readable passports, visas, and other passenger identification.
The scanner and reader devices address some of the problems associated with using ARINC’s database. For example, the devices reduce the time needed to query the database because airline ticket agents need not manually enter each ticket’s identification number. This eliminates keystroke errors and allows the number to be read and compared with the database with only one swipe of the ticket. In addition, the devices can be programmed to determine whether the ticket stock has been altered. We observed demonstrations of two of these technologies. Each technology was capable of reading and displaying information from a variety of documents, including airline tickets. According to the manufacturers’ representatives, when interfaced with the airlines’ computer systems and ARINC’s database, the devices can (1) read and compare airline ticket numbers with ARINC’s database and (2) display the results of the search within 2 seconds. Without other proven benefits, however, airline officials said that the use of scanner and reader devices solely to detect stolen ticket stock would be unduly expensive. Each of the devices costs between $2,000 and $3,000—a considerable amount when multiplied by all of the airlines’ ticket counters. Moreover, airlines would still have to pay about $14,000 annually to subscribe to ARINC’s database on fraudulent ticket stock. While these devices are being tested, we know of only one airline that is using them. This international airline initially used a device for reading passports and, in 1997, contracted with the manufacturer to develop an enhanced device capable of also reading ticket stock. In June 1998, the airline purchased 300 of the devices at a cost of about $660,000 for use at its U.S. locations. 
According to officials of this airline, the device is about 99-percent accurate, and as a result, the airline recouped its investment through reduced losses from stolen ticket stock within the first 5 months of its use, even though the device was not introduced for this purpose. ARC and the four airlines we contacted analyze information about the use of stolen ticket stock to profile common characteristics and identify patterns for use in targeting their detection efforts. These analyses produce statistical profiles that the airline industry uses to set priorities for its detection efforts, including the targeting of specific geographic locations. Moreover, to enhance their detection initiatives, many airlines offer rewards to employees who detect and confiscate fraudulent ticket stock. In 1998, for example, the International Air Transport Association found that 14 of the 20 airlines it randomly surveyed had programs for rewarding employees who detect fraudulent tickets. The rewards included $25 to $50 for each ticket confiscated, first-class travel anywhere the airline flew, and 5 percent of the value of the confiscated ticket. The four airlines we contacted also reward employees who detect and confiscate fraudulent ticket stock. According to industry representatives, there is currently no single solution to prevent the theft of airline ticket stock. However, many officials commented that the steady increase in electronic ticketing may soon resolve the problem because, with paperless tickets, travel agencies and other ticket distributors will have less need to retain large inventories of airline ticket stock. According to ARC, reducing the amount of stock that a travel agency keeps on hand has two potential benefits. It reduces the agency's potential losses should a theft occur, and it lessens the agency's attractiveness as a target of crime.
At the end of 1998, over 32 percent of all airline transactions reported by travel agencies to ARC were for electronic tickets, compared with 14 percent at the beginning of the year. We provided a draft of this report to the Departments of Transportation, Justice, and the Treasury for their review and comment. Collectively, these Departments have responsibility for the Federal Aviation Administration, the Federal Bureau of Investigation, the U.S. Attorney’s Office, the Immigration and Naturalization Service, and the Internal Revenue Service. We also provided a draft of this report to the Airlines Reporting Corporation for its review and comment. The Departments of Transportation, the Treasury, and Justice (on behalf of the Federal Bureau of Investigation and the U.S. Attorney’s Office) had no comments on the report. However, we met with officials from Justice’s Immigration and Naturalization Service, including the Directors of the Service’s Office of Carrier Affairs and Office of Investigations. While the Service generally agreed with relevant information in the draft report, it provided additional information and suggestions for improving the clarity and accuracy of the report. The Airlines Reporting Corporation also provided editorial and technical comments. Finally, we provided relevant sections of the report to the American Society of Travel Agents, the Association of Retail Travel Agents, the Aeronautical Radio, Inc., American Airlines, Continental Airlines, Northwest Airlines, and United Airlines. Each of these organizations also provided technical comments. We incorporated all comments, as appropriate. We performed our work from April 1998 through July 1999 in accordance with generally accepted government auditing standards. A detailed description of our scope and methodology appears in appendix I. We are sending copies of this report to appropriate congressional committees; the Honorable Rodney E. Slater, Secretary of Transportation; the Honorable Jane F. 
Garvey, Administrator, Federal Aviation Administration; the Honorable Lawrence H. Summers, Secretary of the Treasury; the Honorable Charles Rossotti, Commissioner, Internal Revenue Service; the Honorable Janet F. Reno, Attorney General of the United States; the Honorable Louis J. Freeh, Director, Federal Bureau of Investigation; the Honorable Doris Meissner, Commissioner, Immigration and Naturalization Service; and the Honorable Jacob Lew, Director, Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-2834. Appendix IV lists key contacts and contributors to this report. The Chairman of the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, asked us to examine issues related to the theft of ticket stock used to create U.S. airline tickets. In consultation with the Subcommittee, we agreed to address the following questions: (1) What is the number and value of airline ticket stock stolen annually? (2) What financial implications are associated with the use of stolen ticket stock? (3) What other issues are potentially associated with the use of stolen ticket stock? and (4) What technological interventions and other initiatives are available to detect the use of stolen ticket stock? To determine the number and value of stolen airline ticket stock, we obtained information from Aeronautical Radio, Inc. (ARINC)—an airline-owned company in Annapolis, Maryland—that since 1989 has maintained a centralized, proprietary database of stolen, lost, missing, and other fraudulent airline ticket stock worldwide. ARINC provided an aggregate listing spanning 10 years, from 1989 through 1998. 
We used this information to determine the total amount of ticket stock reported stolen to ARINC by the Airlines Reporting Corporation (ARC) (on behalf of ARC-accredited travel agencies) and other domestic and international suppliers of ticket stock over the 10-year period. We could not use ARINC's information to estimate the value of this ticket stock because ARINC's database does not contain information about the use and value of the ticket stock. Given ARINC's data limitations, the predominance of ARC's ticket stock losses (ARC accounted for 72 percent of all stolen stock reported to ARINC), and the absence of other data sources, we compared the ARINC data with information in ARC's annual status reports over the 10-year period. Because the amount of stolen ticket stock reflected in these reports totaled only 27 percent of the amount ARC reported to ARINC, we took additional steps to ensure the accuracy of selected ARC data. Specifically, for 1997 and 1998—the only years we could use to estimate the ticket stock's potential value—we examined all 294 of ARC's Fraud Prevention Bulletins for those years, compared our results with ARC's annual status reports, and, on an incident-by-incident basis, reconciled differences with ARC officials. Over 80 percent of the difference between ARINC's and ARC's data related to ARC's treatment of stock associated with terminated travel agencies. As a precautionary measure, according to ARC officials, ARC reports to ARINC's database the unique identifying numbers of all unaccounted-for ARC ticket stock previously provided to the agencies it has subsequently terminated, even though some portion of this stock has probably been issued by these agencies. However, in compiling its status reports, ARC includes only the ticket stock that has been used or is likely to be used.
For example, according to ARC officials, ARC’s status reports do not include unreported ticket stock of terminated agencies if the stock is over 5 years old and has not yet been used. Likewise, according to ARC officials, ARC reports to ARINC all subsequent adjustments to the initial amount of ticket stock reported stolen by the travel agencies but does not adjust its annual status reports. As reflected in this report, the amount of ARC ticket stock reported stolen in 1997 and 1998 represents the amounts agreed to by ARC and us through this reconciliation process. To determine the value of airline ticket stock stolen annually, we used ARC’s Field Investigation and Fraud Prevention Department’s informal database of information provided voluntarily by the airlines about the number and value of stolen ARC ticket stock used for travel on their airlines. ARC supplied us with a copy of its database, which contained 15,542 records (each record represented a ticket). After eliminating (1) duplicate records according to criteria supplied by ARC and (2) ARC’s incomplete listing of losses earlier than 1997, we had 8,330 records in our database. Using the average cost per ticket derived from these records, we computed the estimated value of the ARC ticket stock reported stolen in 1997 and 1998. While six to seven pieces of ticket stock are normally used to create a legitimate airline ticket, as discussed in the report, we assumed that only two pieces would be used. Likewise, contrary to industry experience, we assumed that all of the stolen ARC ticket stock eventually would be used for travel. Thus, our estimate provides a “worst-case” scenario and, consequently, is likely to overstate the potential loss from the use of stolen ARC ticket stock. We performed a reliability assessment on the data elements we used from the ARINC and ARC databases and found that the data were reliable enough for our uses in this report. 
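The worst-case computation described above reduces to simple arithmetic: divide the pieces of stolen stock by the assumed number of pieces per ticket, then multiply by the average cost per ticket. The sketch below restates it; the average ticket cost is left as a hypothetical input, since the figure derived from the 8,330 airline-reported records is not reproduced in this appendix.

```python
def worst_case_loss(pieces_stolen: int, pieces_per_ticket: int,
                    avg_ticket_cost: float) -> float:
    """Estimate the maximum potential loss, assuming every piece of
    stolen stock is eventually used for travel (contrary to industry
    experience, so this overstates the likely loss)."""
    tickets = pieces_stolen / pieces_per_ticket
    return tickets * avg_ticket_cost

# Illustrative only: the average ticket cost here is hypothetical, not
# the figure ARC's records produced. Assuming 2 pieces per ticket (the
# report's conservative assumption) yields a larger estimate than the
# 6 to 7 pieces normally used to create a legitimate ticket.
low = worst_case_loss(447_000, 7, 1_000.0)
high = worst_case_loss(447_000, 2, 1_000.0)
```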
We performed limited reasonableness tests on the data, conducted extensive interviews with ARINC and ARC officials regarding the creation and control of the data, and reviewed information about their data quality factors. To identify the financial implications associated with the use of stolen ARC ticket stock, we first determined the extent to which airlines and travel agencies were held liable for the use of the ARC ticket stock reported stolen in 1997 and 1998. Then, using the average cost per ticket that the airlines reported to ARC, we computed the potential losses for the airlines and travel agencies if all of the ARC ticket stock reported stolen in 1997 and 1998 were used for travel. We also collected information about the number, value, and impact of losses from stolen ARC ticket stock from American Airlines, Continental Airlines, Northwest Airlines, and United Airlines. These four airlines accounted for 37 percent of the revenue passenger miles flown by all U.S. airlines during the 12-month period ending March 1998 and 35 percent of the ticket stock reported stolen to ARINC by U.S. airlines as of December 31, 1998. Moreover, officials from these airlines expressed a willingness to cooperate in our review even though much of the information we obtained is considered proprietary and confidential by the airline industry. To evaluate the impact on travel agencies held liable for the theft of ARC ticket stock, we conducted a telephone survey of 24 agencies that ARC identified as being liable for ARC stock stolen in burglaries in 1997. Representatives of three of these travel agencies told us that they were not held liable for their ARC ticket stock losses, so we deleted them from our survey. Between February 12, 1999, and February 22, 1999, we were able to contact 15 of the remaining 21 travel agencies, which confirmed that they had been burglarized and held liable for the use of the ARC ticket stock stolen in 1997.
We interviewed representatives of these 15 travel agencies to, among other things, verify the amount of ARC ticket stock stolen and to discuss the airlines' billing practices for the stolen stock and the impact of the burglaries on their agencies. We also reviewed and discussed travel agency complaints filed with the Travel Agent Arbiter. Finally, we interviewed Internal Revenue Service (IRS) officials to identify the tax and reporting requirements for airline carriers and travel agencies that have experienced losses arising from the use of stolen ticket stock and to determine the extent to which IRS has examined their compliance with tax requirements. To identify other potential issues, we focused on possible safety concerns related to the use of stolen ticket stock by terrorists and illegal aliens. We interviewed federal law enforcement and intelligence agency officials from the Central Intelligence Agency, the Customs Service, the Immigration and Naturalization Service, the National Security Agency, the Secret Service, and the Federal Bureau of Investigation, as well as Federal Aviation Administration personnel and officials at four airlines, to determine whether they knew of individuals traveling on tickets created from stolen ticket stock to conduct terrorist activities. To determine whether illegal aliens are using tickets created from stolen ticket stock, we interviewed officials from ARC and the Immigration and Naturalization Service, as well as officials at four airlines. We also interviewed officials from ARC, the Miami-Dade Police Department in Florida, the Federal Bureau of Investigation, and the U.S. Attorney's Office about law enforcement efforts to combat the theft and distribution of stolen ticket stock. To identify the technological interventions available to detect the use of stolen ticket stock, we also interviewed officials from ARINC and ARC, as well as officials at four airlines.
We also observed ARINC’s database in operation at two airline ticket counters at the Washington Reagan National Airport near Washington, D.C. In addition, representatives from two manufacturers of optical scanner and magnetic reader devices provided us with demonstrations of their products. We also discussed other initiatives to detect the use of stolen ticket stock and the use of electronic tickets as a means of reducing ticket stock thefts with ARC officials and officials at the four airlines. We performed our work from April 1998 through July 1999 in accordance with generally accepted government auditing standards. The number of travel agencies reporting ARC ticket stock stolen in burglaries declined from 62 in 1997 to 44 in 1998. ARC attributes this decline to (1) the more stringent security rules it established in April 1996, (2) its increased fraud prevention activities, (3) the increased awareness by travel agency personnel of the problem, and related to this, (4) the agents’ increased compliance with required measures for safeguarding ARC ticket stock. The locations of travel agency burglaries in 1997 and 1998 follow. According to IRS officials, for federal tax purposes, airlines normally report revenue when a passenger takes a flight. That is, if a travel agency reported the sale of a $1,100 airline ticket, the airline would normally account for the transaction in three separate book entries. Specifically, the airline would recognize the cash ($1,100) from the sale along with two offsetting entries representing the airline’s future liability to transport the passenger ($1,000) and to pay the federal excise tax due IRS ($100). When the ticket is used, according to IRS, the airline reverses these entries by eliminating its $1,000 liability and recognizing the $1,000 as revenue. Likewise, when the airline pays the excise tax, it eliminates its $100 liability to IRS. 
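The sequence of entries IRS describes for a routine sale can be sketched as a running ledger, using the report's $1,100 example ($1,000 fare plus $100 federal excise tax); the account names below are illustrative, not IRS or airline terminology.

```python
# Sketch of the journal entries IRS describes for a normal ticket sale,
# using the report's $1,100 example ($1,000 fare plus $100 excise tax).
# Account names are illustrative, not actual airline or IRS accounts.
ledger = {"cash": 0, "transport_liability": 0,
          "excise_tax_liability": 0, "revenue": 0}

def record_sale(fare: int, excise_tax: int) -> None:
    # Recognize the cash from the sale along with two offsetting
    # liabilities: transporting the passenger and paying the excise tax.
    ledger["cash"] += fare + excise_tax
    ledger["transport_liability"] += fare
    ledger["excise_tax_liability"] += excise_tax

def record_travel_and_tax_payment(fare: int, excise_tax: int) -> None:
    # When the ticket is used, reverse the transport liability and
    # recognize the fare as revenue; paying IRS clears the tax liability.
    ledger["transport_liability"] -= fare
    ledger["revenue"] += fare
    ledger["excise_tax_liability"] -= excise_tax
    ledger["cash"] -= excise_tax

record_sale(1000, 100)
record_travel_and_tax_payment(1000, 100)
# After the full cycle, the airline holds the $1,000 fare as revenue,
# and both liabilities have been eliminated.
```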
Furthermore, according to IRS, when an airline accepts a ticket for travel that was not reported as sold by a travel agency—as in the case of a ticket printed on ARC ticket stock stolen from a travel agency—the airline recognizes the transaction as an “unreported sale” and, using the earlier example, sets up a $1,100 account receivable to collect the funds from the travel agency. Since the travel has already occurred, according to IRS, the airline also recognizes $1,000 in (expected) income and records the $100 in federal excise tax due IRS. If the airline succeeds in collecting the $1,100 from the travel agency, the airline eliminates the $1,100 receivable it collected and pays the federal excise tax due IRS. Conversely, if the travel agency fails to pay, the airline writes off—as permitted—the $1,100 receivable as a bad debt, reduces its income by $1,000, and eliminates the $100 in excise tax that it could not collect. When an airline reaches a financial settlement with a travel agency, according to IRS, the settlement amount most likely would be treated as “liquidating damages” rather than “an amount paid for the taxable transportation of persons by air.” While any airline income derived from settlements with travel agencies is fully taxable, according to IRS, excise tax is not due on settlements treated as liquidating damages. Finally, according to IRS, if an airline purchases insurance to, among other things, cover losses from the use of stolen ticket stock, any insurance proceeds it receives normally would be accounted for as income and would be fully taxable. The same is true for any recoveries an airline receives from collection agencies. In addition to those named above, Steve Calvo, Tom Collis, Fran Featherston, Judy Pagano, Carol Herrnstadt Shulman, Pamela V. Williams, and Mario Zavala made key contributions to this report.
|
Pursuant to a congressional request, GAO provided information on issues associated with the theft of stock used to create airline tickets, focusing on: (1) the number and value of the airline ticket stock stolen annually; (2) financial implications associated with the use of stolen ticket stock; (3) issues that are potentially associated with the use of stolen ticket stock; and (4) technological interventions and other initiatives designed to detect the use of stolen ticket stock. GAO noted that: (1) definitive information on the amount and value of airline ticket stock stolen annually does not exist; (2) however, from 1989 through 1998, worldwide suppliers of ticket stock reported 11.3 million pieces of stolen ticket stock to Aeronautical Radio, Incorporated; (3) the Airlines Reporting Corporation reported the majority of these losses--8.2 million pieces; (4) the Corporation, for a variety of reasons, tracks only 27 percent of the stock it reported and, based on this information, identified over 447,000 pieces of its stock that were stolen in 1997 and 1998; (5) in the event that all of this stolen ticket stock is used, GAO estimates the potential value (loss) attributable to the Corporation's ticket stock could range from $116 million to $302 million, depending on the number of pieces of stock used to create each airline ticket; (6) the airlines bear the financial risk for most situations involving the theft of ticket stock; (7) for the airlines, however, the losses are relatively small compared with their total annual revenue and their losses from other airline-related fraud; (8) in contrast, the losses incurred by travel agencies that did not adequately safeguard the Airlines Reporting Corporation's ticket stock can result in serious financial hardships; (9) if all of the Corporation's ticket stock stolen in 1997 and 1998 were used for travel, which is unlikely, GAO estimates that the airline and travel agency industries could lose about $151 million each; 
(10) in practice, however, travel agencies would likely incur significantly smaller losses because airlines frequently settle for far less than the amounts the travel agencies owe; (11) U.S. tax officials believe that the financial consequences to the federal government from the use of stolen ticket stock are minor and of limited interest for auditing purposes compared with higher-risk tax issues; (12) the traveling public does not appear to be at any greater risk from individuals who use tickets created from stolen ticket stock than they are from individuals who travel on legitimate tickets; (13) federal law enforcement and intelligence officials and airline officials were unaware of any individual who had traveled on stolen ticket stock to conduct terrorist activities; (14) the airline industry's centralized database on fraudulent ticket stock is the principal means for detecting the use of stolen stock; and (15) while this database is an effective tool, many airlines do not subscribe to it because the time required to manually query it delays passenger processing.
|
Basic education is defined in this report as all program efforts aimed at improving early childhood development, primary education, and secondary education, as well as training in literacy, numeracy, and other basic skills for adults or out-of-school youth. Basic education also includes efforts that facilitate and support such learning activities, including building host countries’ institutional capacity to manage basic education systems and measure results, constructing and rehabilitating schools, training teachers, increasing parent and community involvement in schools, providing learning materials, and developing curricula. Education for All is a major goal of the international donor community. At Jomtien, Thailand, in March 1990, representatives of the global education community held the “World Conference on Education for All” and declared universal access to education as a fundamental right of all people. In April 2000, the “World Education Forum” met in Dakar, Senegal, where delegates from 181 nations adopted a framework for action committing their governments to achieve quality basic education for all—including ensuring that by 2015, all children—especially girls, children in difficult circumstances, and those from ethnic minorities—have access to and complete free primary education of good quality. The framework committed these nations to the attainment of six specific goals dealing with early childhood education, universal primary education, life-skills programs, adult literacy, gender disparities, and quality assurance. The United States supports this international commitment, as well as the UN’s Millennium Development Goal—to achieve universal completion of primary school by 2015. From fiscal years 2001 through 2006, USAID, State, DOD, and MCC allocated more than $2.2 billion to support U.S. international basic education-related efforts. See table 1 for these agencies’ funding allocations specifically for basic education-related programs.
During this same period, USDA and DOL allocated an estimated total of more than $1 billion to programs that included a basic education component supporting their broader mission goals. For example, funding for USDA’s Food for Education program includes basic education activities along with other components, such as providing maternal health centers. Similarly, DOL’s funding for its programs to combat child labor combines basic education-related efforts and other activities, such as job training for older children and income generation opportunities for parents. In addition, the Peace Corps could not identify funding levels specific to basic education because it tracks funding by overall country program rather than by individual program sector; volunteers sometimes implement projects in multiple program sectors. Furthermore, other than USAID, U.S. agencies do not have a standard, government-wide, formal definition of basic education or a requirement to report their funding of international basic education activities to a central U.S. government source. See table 2 for these agencies’ funding allocations for programs with international basic education-related components. See appendix II for the countries receiving basic education-related assistance by implementing U.S. agency in fiscal year 2006. From fiscal years 2001 through 2006, USAID funded the majority of U.S. international basic education programs, allocating more than $2.1 billion to implement programs in about 60 countries worldwide. USAID used funds designated by Congress for basic education as well as other appropriated funds, including supplemental appropriations and funding for MEPI activities, to fund basic education activities abroad.
By region, Asia and the Near East received the highest level of USAID’s allocated basic education funds at approximately $1 billion, followed by Africa at almost $750 million, Latin America and the Caribbean at around $272 million, and Europe and Eurasia at about $51 million. See figure 1 for a map of the 60 recipient countries of USAID’s basic education funding, ranked by total basic education allocations from fiscal years 2001 through 2006. Since fiscal year 2001, the United States has launched several major education initiatives that direct missions to focus on specific types of basic education activities in certain regions, such as Africa, Latin America and the Caribbean, and the Middle East, to address educational challenges in those regions. Figure 2 summarizes these initiatives. The State and USAID joint strategic plan for fiscal years 2004 to 2009 includes the broad goal of improving education globally, with a particular focus on the Muslim world, as well as support for programs to achieve the United Nations’ Millennium Declaration Goal of universal primary education by 2015. State and USAID have implemented basic education activities that align with these goals. Several other U.S. agencies support activities that directly or indirectly relate to increasing access to or improving the quality of international basic education. State and USAID have strategic goals specific to promoting improved education. Although State and USAID have supported assistance activities relating to education for decades, neither agency had agency-wide strategies to guide these activities until early 2000. Moreover, State’s September 2000 strategic plan only included references to improving education as part of the broader goal of promoting broad-based growth in developing and transitioning economies to raise standards of living, reduce poverty, and lessen disparities of wealth within and among countries. 
The State and USAID joint Strategic Plan for Fiscal Years 2004 to 2009 includes, for the first time for these agencies, education as a strategic goal. According to the strategic plan, State and USAID will promote improved education globally, with a particular focus on the Muslim world, and will support the UN Millennium Declaration’s goal of universal primary education by 2015. Working toward this UN goal, the plan calls for State and USAID to support programs that do the following: Promote equal access to quality basic education. The strategy says that State and USAID would assist and encourage countries to improve their education policies, institutions, and classroom practices; give families and communities a stronger role in educational decision making; and focus their efforts on reducing barriers to education for girls. Implement international education commitments. The strategy also states that both agencies will work with donor partners to implement the commitments made at the 2000 World Education Forum in Dakar, the G–8 Summits at Genoa and Kananaskis, and the UN Conference on Financing for Development in Monterrey. In addition, the agencies are to help developing countries build their capacity to achieve the global Education for All initiative. Consistent with the joint strategic plan’s education goals, State has implemented programs, mainly through MEPI, to target basic education in North Africa and the Middle East. As the largest provider of U.S. basic education assistance, USAID also supports activities that align with the joint strategic plan, as well as with its 2005 education strategy, which focuses on improving (1) access to education, (2) quality of education, and (3) host governments’ capacity to manage education efforts. In addition, USAID has allocated resources toward strategically important countries, as noted in both strategy documents.
State generally supports education programs that align with the agency’s broader foreign policy objectives, such as promoting democracy and reform in the Muslim world. Primarily through MEPI, the agency supports international basic education activities aimed at increasing access to basic education, especially for girls and women, and improving the quality of basic education through teacher training, curriculum development, and community involvement in North African and Middle Eastern countries and territories. For example, through MEPI, State’s Bureau of Near Eastern Affairs supports a “scholarships for success” program in Morocco that increases access to secondary schools for girls living in remote rural communities through the creation of girls’ dormitories (see fig. 3). As an initiative directed by the administration, MEPI allocates resources for basic education programs in North African and Middle Eastern countries and territories. Under MEPI, basic education funds are allocated for country-specific and regional programs based on information from U.S. embassies and other U.S. agencies with regional programs that can identify areas of need, and through conversations with host governments. Between fiscal years 2001 and 2006, the Bureau of Near Eastern Affairs allocated about $35 million in MEPI funding for 23 basic education-related projects in 11 North African and Middle Eastern countries and territories. In addition to MEPI, during the same period, State’s Bureau of Educational and Cultural Affairs funded one basic education project in Indonesia, allocating a total of $2.4 million in fiscal years 2004 and 2005 to fund multiyear scholarships for Indonesian teachers at the secondary and university levels to study education in the United States. In the eight countries we visited, we found that USAID implemented programs that targeted the agency’s emphasized populations of primary-level students and girls and that aligned with its three main strategic objectives.
USAID’s allocations to the top recipients of basic education funding from fiscal years 2001 through 2006 are consistent with the priority the United States places on strategic partner countries.

USAID Programs Support Education Strategy

Prior to 2005, USAID did not have an agency-wide education strategy, and its education programming was generally guided over time by several agency strategies, policies, and operational directives. In April 2005, USAID issued an education strategy that prioritizes the broad education objective of increasing equitable access to quality education, with more specific focus on primary education and girls’ education. The strategy directs USAID to focus on (1) increasing access to basic education, (2) improving the quality of basic education, and (3) building the institutional capacity of host countries’ basic education systems. This strategy also supports the broader State and USAID strategic goals of improving education globally with a particular emphasis on the Muslim world, as it emphasizes the importance of education in strategic countries, as well as implementing international education commitments, such as the Education for All by 2015 initiative. In the eight countries we visited, we found that USAID generally implemented programs that aligned with its three main strategic objectives and targeted the agency’s emphasized populations of primary-level students and girls. According to USAID, as a matter of policy, its efforts focus on increasing children’s access to quality primary education because the quality and accessibility of primary education play a critical role in determining whether children gain core skills, such as literacy and numeracy, and have a chance to obtain further education. In addition, USAID has a special focus on girls’ education.
Missions engaged in basic education are required to assess the extent of educational disadvantage faced by girls at the primary level in the host country and to take further steps where this disadvantage is found to be significant. Seven of the eight missions we visited implemented projects to increase access to and improve the quality of basic education for primary-school youth. However, USAID also recognizes the need for missions to have flexibility in planning and implementing programs, taking into account both the conditions of the particular host country and the activities of other donors in the country. For example, while the mission in Morocco continued to focus on girls’ education, it shifted its basic education assistance more toward middle schools because it determined that high dropout rates among primary-school students were often due to the lack of access to quality secondary schools, where those students would have continued their education, and because other donors were already investing significant resources in primary education in the country. Following are details about USAID’s programs to support its three strategic goals: (1) increasing access to basic education, (2) improving the quality of basic education, and (3) building the institutional capacity of host countries’ basic education systems. Access: To increase access to basic education, USAID supports a wide range of programs, such as distance learning, girls’ scholarships, and school construction, that increase the number of boys and girls who enter and remain in school, particularly among underserved populations such as girls, the poor, children in rural areas, and out-of-school youth. To increase access, the agency often uses distance learning tools, such as radio, television, and other information and communication technologies, to deliver quality educational content to populations not accommodated by the traditional school system.
Agency efforts to increase access to basic education also include, among other things, construction and rehabilitation of school facilities, girls’ scholarships, and adult literacy programs. In six of the eight countries we visited (Egypt, Honduras, South Africa, Mali, Morocco, and Zambia), we found that missions implemented programs in support of this strategic goal. For example, in Egypt, Honduras, South Africa, and Zambia, USAID used distance learning programs, such as prerecorded lessons, to deliver educational content to preprimary, primary, and secondary school youth, particularly girls, children from rural areas, and poor children. In Egypt, Mali, Morocco, South Africa, and Zambia, USAID implemented scholarship programs for girls, while the mission in Egypt also supported the construction of primary schools to increase access and enrollment of girls in underserved communities. See figure 4 for an example of a USAID program aimed at increasing education access. Quality: USAID also implements a wide array of programs to improve education quality. These programs are generally designed to improve teachers’ subject matter knowledge and pedagogical skills, ensure that the curriculum includes specific knowledge and skills relevant to students’ lives, and provide learners with access to appropriate workbooks and other learning materials that complement and reinforce teachers’ efforts. Typical forms of assistance include training teachers, along with technical assistance to strengthen the capacity of local teacher training institutions; promoting the adoption of teaching methods that involve students in the learning process; promoting improvements in curriculum content; helping host countries develop methods of student assessment; and providing learning materials, such as textbooks and portable libraries. All eight missions we visited implemented programs to improve quality, using a variety of the approaches described above.
See figures 5 to 7 for examples of USAID projects aimed at improving education quality. Capacity building: USAID implements a wide variety of basic education programs to build host countries’ institutional capacity to manage their basic education systems. Typical forms of assistance include training school principals in educational leadership and management; promoting active participation by parents and parent associations in supporting school improvement; developing effective policy analysis units within education ministries; supporting the adoption and use of appropriate data and educational management information systems, as well as measures to enhance accountability and transparency in the use of public education funds; and decentralizing educational decision making to local levels. All eight missions we visited implemented programs that either specifically focused on building the host countries’ educational capacity or contained a capacity-building component. For example, in Zambia, USAID implemented a project to decentralize administration of the country’s education management information systems. In Egypt, USAID implemented a project to support the country’s decentralization efforts by rewarding schools and surrounding communities that are active in assessing their needs and successful in planning and implementing measures to improve education quality. Because many USAID programs simultaneously support multiple objectives, USAID could not provide a breakout of funding for its international basic education efforts by strategic objective, such as access or quality, or by program activity, such as teacher training. According to USAID, quality and access are interlinked in important ways; for example, quality improvements can lead to reduced grade repetition, accelerating children’s progress through school and increasing access for subsequent students.
Missions decide whether to concentrate their efforts on increasing access or improving quality, and which program approaches to use, based on their assessment of how they can achieve the most valuable results in light of country conditions. For example, in Mali, a country in which only about 50 percent of primary school-aged children are enrolled in school, USAID decided to focus its strategy on improving the quality of basic education, on the rationale that the greatest impediment to achieving universal access is the poor quality of education.

USAID Resources Directed at Strategic Partners of U.S. Foreign Priorities

USAID’s resource allocations for basic education are consistent with USAID and State’s efforts to more closely align foreign policy and development goals. According to USAID’s April 2005 education strategy and USAID officials, the agency allocates resources based on the host country’s needs, commitment, and overall development progress, while acknowledging the importance of geo-strategic states, such as some predominantly Muslim countries. USAID and State’s joint strategic plan also states that their education programs will be particularly focused on Muslim countries following the September 11 attacks. For example, in Mali, a predominantly Muslim country, USAID implemented a girls’ scholarship program that focused on girls in traditional, religious communities; the agency also engaged local Muslim religious leaders in discussions on how the scholarship program would be structured and invited them to become members of the local management committee. We found that USAID has implemented programs to target strategic states; specifically, from fiscal years 2001 through 2006, many of the top 10 recipient countries of USAID basic education assistance were strategic partners in achieving U.S. foreign policy objectives, including fighting the war against terrorism and promoting regional stability and democracy.
Among these top 10 recipients were many predominantly Islamic countries, such as Afghanistan, Indonesia, Iraq, Jordan, and Pakistan, which did not receive any USAID basic education funding in fiscal year 2001, but received significant funding beginning in fiscal year 2002. These countries, along with Egypt and Ethiopia, all ranked among the top 10 recipients of basic education funding from fiscal years 2001 through 2006 and were all considered strategically important allies in the global war on terror, according to USAID officials and USAID and State operational plans. See appendix III for a list of recipient countries of USAID basic education funding from fiscal years 2001 through 2006 and selected educational indicators from the World Bank. USAID began basic education programs in the war-affected countries of Iraq and Afghanistan to support efforts to facilitate their transition to more stable, democratic, and productive states. In 2002, following the defeat of the Taliban, USAID started a basic education program in Afghanistan, which originally focused on four areas: textbook production and distribution, radio-based teacher training, accelerated learning for over-age and out-of-school students, and school construction and rehabilitation. USAID’s current efforts in Afghanistan focus on improving the quality of the country’s basic education system through teacher training. In May 2003, in the immediate aftermath of initial combat operations in Iraq, USAID program efforts supported the resumption of school through the rehabilitation of classrooms and the provision of educational materials. However, according to USAID officials, the mission’s efforts faced many challenges due to attacks on teachers and schools. While the USAID mission in Iraq has rehabilitated 2,962 primary and secondary schools since the conflict began in 2003, the mission does not know whether these schools are currently operating due to the hostile security environment. 
USAID’s basic education efforts in Iraq have also focused on improving the quality of Iraq’s basic education system through training primary and secondary school teachers, building the education ministry’s capacity to manage and reform its education system, and increasing access to basic education for out-of-school youth through an accelerated learning program. These basic education activities were funded through supplemental appropriations specifically for Iraq. USAID ended its basic education program in Iraq in 2005 due to a change in mission priorities. According to a USAID official, the mission’s current priorities are focused on community stabilization, local governance, economic governance, national capacity development, and private sector development. In addition to State and USAID, several other agencies implement activities that directly and indirectly support increasing access to and improving the quality of basic education in support of programs that address their broader mission goals. These agencies include USDA, DOD, and DOL, as well as the Peace Corps and MCC. USDA’s Foreign Agricultural Service funds and administers basic education-related activities through the provision of food assistance as part of the agency’s broader mission to create economic opportunity for American agriculture by expanding global markets and to support food security worldwide. The agency supports basic education by providing school meals or take-home rations to students overseas and by facilitating the sale of food commodities to support basic education programs in communities. USDA’s efforts, which target low-income, food-deficit countries, particularly focus on girls since they tend to have much lower school attendance rates than boys in many of USDA’s recipient countries. 
In fiscal year 2001, USDA’s Foreign Agricultural Service administered the Global Food for Education Initiative (GFEI), a pilot program with the overall goal of contributing to universal education by using school meals to attract primary-school children to school, keep them attending once enrolled, and improve learning. Through the program, USDA donated U.S. agricultural commodities and associated technical and financial assistance to the World Food Program, 13 private voluntary organizations, and one national government (the Dominican Republic, see fig. 8). The organizations then used the commodities in 48 school feeding projects in 38 developing countries. For example, in the Dominican Republic, USDA donated wheat and crude soybean oil, which were sold locally, with proceeds used to carry out community-based school feeding and educational improvement programs managed by local NGOs. In fiscal year 2003, the GFEI was continued under USDA’s McGovern-Dole International Food for Education and Child Nutrition Program (Food for Education). The Food for Education (FFE) program also provides nutrition programs for pregnant women, nursing mothers, infants, and preschool children to sustain and improve the health and learning capacity of children before they enter school. USDA allocates basic education resources to low-income, food-deficit countries that are committed to universal education. From fiscal years 2001 through 2006, USDA allocated $599.3 million to implement the GFEI and the FFE program in 42 countries worldwide.
DOD supports increased access to basic education through its construction of primary and secondary school buildings and refurbishment of existing school facilities (see fig. 9) in all of the Combatant Commanders’ areas of responsibility. According to one DOD command, it often uses the constructed school facilities as centers to manage and coordinate the Department’s natural disaster response activities. Recipient countries of DOD humanitarian assistance are identified through DOD guidance and with input from in-country U.S. agencies on host countries’ needs. From fiscal years 2001 through 2006, DOD allocated $16.2 million to fund 232 basic education projects in 50 countries worldwide. DOL’s Bureau of International Labor Affairs (ILAB) funds and administers international child labor projects with basic education components as part of its broader strategic goal to remove children from, or prevent them from entering, exploitative child labor and to provide affected children with education, training, or both. Through its international child labor projects, DOL supports basic education by developing formal and transitional education systems that encourage working children and those at risk to attend school; raising awareness of the importance of education for all children and mobilizing support for improved and expanded educational infrastructure; and strengthening national institutions and policies on education and child labor (see fig. 10).
ILAB uses two mechanisms to implement these projects: (1) the International Labor Organization’s International Program on the Elimination of Child Labor (ILO-IPEC), which removes children from, or prevents their entry into, exploitative child labor and provides affected children with education, training, or both; strengthens the ability of host countries to address child labor; and raises awareness of the hazards of child labor and the benefits of education; and (2) the Child Labor Education Initiative (EI), which funds projects that promote access to quality basic education for children at risk of, or engaged in, exploitative child labor. The Bureau allocates basic education resources to countries based on its assessment of where child labor needs are going unaddressed and where the agency will have the greatest impact. During fiscal years 2001 through 2006, the Bureau allocated $440.4 million to implement basic education activities in 77 countries worldwide. The Peace Corps supports basic education through the activities of its volunteers, who work at the local level with host country governments, NGOs, and communities on projects aimed at promoting sustainable development at the grassroots level and enhancing cross-cultural understanding. The Peace Corps provides volunteers to work in developing countries where they have been invited and determines which programs best address a host country’s needs by consulting with host country officials. Education is the Peace Corps’ largest sector. The volunteers’ basic education projects include training and mentoring teachers in K-12 schools, using radios to deliver educational content to HIV/AIDS orphans and vulnerable children, and strengthening preschool programs through teacher training and mentoring.
For example, in Zambia, Peace Corps volunteers assist the country’s Ministry of Education in implementing a primary school interactive curriculum, which is broadcast over the national radio to increase access to basic education in rural settings (see fig. 11). During fiscal year 2006, 2,674 Peace Corps volunteers provided educational assistance in 52 countries worldwide. In addition, the Peace Corps supports basic education activities through its Small Project Assistance (SPA) program, which provides hundreds of small grants to volunteers’ communities to increase the capabilities of local communities to conduct low-cost, grassroots, sustainable development projects. For example, in Morocco, Peace Corps volunteers used SPA funding to construct latrines to increase children’s attendance, particularly that of girls. This program operates under the terms of an interagency agreement between USAID and the Peace Corps. In fiscal year 2005, 57 Peace Corps posts approved about $766,000 to support 354 different SPA education projects. MCC supports international basic education as part of its larger mission to reduce poverty through economic growth in developing countries that create and maintain sound policy environments. MCC provides developing countries with monetary assistance, through compact agreements and threshold agreements, to support a variety of development projects, including basic education. For a country to be selected as eligible for an MCC assistance program, it must demonstrate a commitment to policies that promote political and economic freedom, investments in education and health, control of corruption, and respect for civil liberties and the rule of law by performing well on 16 different policy indicators.
For example, in fiscal year 2005, MCC allocated $12.9 million to Burkina Faso through a threshold agreement to fund a USAID-implemented pilot project with the objective of improving access to, and the quality of, primary education for girls in 10 provinces that have historically had the lowest girls’ primary education completion rates. The project entailed the construction of “girl-friendly” schools with canteens and community-managed child care centers; provision of textbooks, supplies, and take-home rations; teacher training; mentoring; literacy training for women; merit awards for teachers; and a societal awareness campaign on the benefits of educating girls. MCC also plans to provide funding for the implementation of basic education activities in Mali, Ghana, and El Salvador. We found that agencies did not always coordinate in the planning or delivery of international basic education-related activities. From fiscal years 2001 through 2006, there was no government-wide mechanism to facilitate interagency collaboration, and, as a result, we identified instances at the headquarters level where agencies missed opportunities to collaborate and maximize U.S. resources. Further, in the eight countries that we visited, we noted several instances where agencies did not collaborate or take advantage of opportunities to maximize U.S. resources in areas in which they had similar objectives of improving the quality of education. In addition, the level of U.S. coordination with host governments and other donors in the eight countries we visited also varied. Without effective coordination, donors cannot easily monitor or assess a host government’s progress toward achieving international goals, such as Education for All by 2015, one of State and USAID’s strategic goals.
We found that, between 2001 and 2006, there was no government-wide coordination mechanism to facilitate interagency planning and delivery of the U.S. basic education assistance activities we reviewed. While some agencies met periodically to discuss and plan specific basic education activities, usually those involving joint or multiagency agreements, these meetings often did not include all cognizant officials or agencies responsible for planning or delivering basic education assistance. As a result, interagency coordination at the headquarters level was mixed and resulted in some missed opportunities to collaborate on the planning of U.S. basic education assistance. The following are some examples: DOD guidance calls for Combatant Commands to coordinate Humanitarian Assistance Program projects with other agencies at the country level before they are submitted to the Defense Security Cooperation Agency (DSCA), which then forwards the program descriptions to State for review and concurrence. However, staff we spoke to within USAID’s Economic Growth, Agriculture, and Trade Bureau (EGAT), which manages USAID’s basic education activities, were not aware of DOD humanitarian assistance projects. USDA convenes annual meetings with USAID’s Food for Peace Office, State, and Office of Management and Budget officials to discuss and coordinate upcoming projects for its McGovern-Dole International Food for Education Program. However, staff from USAID’s EGAT do not attend these meetings, even though some of USDA’s school feeding activities coincide with USAID’s basic education activities. DOL officials provided several examples of efforts to coordinate programs with other agencies, including USAID and State. For example, DOL’s Office of Child Labor, Forced Labor, and Human Trafficking (OCFT) convenes annual meetings with State and USAID to discuss its upcoming programs, including those related to DOL’s Child Labor Education Initiative.
Until 2004, USAID had an informal focal point who attended these meetings. After this focal point retired in early 2004, DOL sent a letter to USAID in April 2004 requesting a formal point of contact; according to DOL officials, USAID never replied to this letter. Since then, although DOL has regularly requested the attendance of USAID desk officers and technical staff to brief them on its upcoming projects, those USAID staff did not always attend, and those who attended may not have been the most knowledgeable about existing basic education programs. Although one member of USAID’s EGAT attended the February 2007 coordination meeting, there is still no formal USAID focal point for these meetings. In addition, DOL copies State on letters to foreign governments regarding DOL programming in their countries. Peace Corps officials stated that the agency does not coordinate programming priorities with USAID in Washington because programming is determined by host governments, in collaboration with the Peace Corps, once the agency is invited to serve in a country. Beyond USAID’s implementation of the single MCC basic education program in Burkina Faso, coordination between MCC and USAID was characterized by officials of both agencies as minimal, largely because MCC is not organized around technical sectors. However, MCC officials said that they share proposals and lessons learned with other U.S. agencies. State’s coordination of basic education activities with USAID at the headquarters level occurred primarily through the MEPI program, in which USAID serves as an administrative partner and manages over one-third of MEPI’s basic education programs. This coordination included formal and informal meetings to discuss the results of joint State and USAID strategic reviews of existing bilateral development assistance in the Middle East and North Africa and the identification of reform areas that were not being addressed by other U.S. agencies.
We have previously reported on the importance of collaboration among executive agencies in maximizing performance. Officials at all of the agencies we reviewed agreed that coordination of basic education-related activities could be enhanced. USAID officials believe that annual meetings involving all of the U.S. agencies engaged in international basic education would produce better U.S. policy coherence; however, USAID does not have the authority to formally convene such a meeting. In June 2004, in response to a fiscal year 2005 congressional directive, USAID informed State that it would develop an agenda for such a meeting if State, as a cabinet-level agency, would convene it, but according to USAID, State has not yet convened an interagency meeting on international basic education. Although State’s Office of the Director of U.S. Foreign Assistance (DFA) has begun to address the issue of better coordinating all U.S. foreign assistance by bringing together core teams to discuss U.S. development priorities in each recipient country, it is unclear to what extent these efforts will be accepted and implemented by agencies whose foreign assistance programs are not under DFA’s direct authority. During our fieldwork, we found several examples of good coordination among U.S. agencies implementing basic education projects. Among these examples were the following: In South Africa, the Peace Corps provided USAID with a volunteer to support the implementation of a USAID distance learning project. The volunteer assisted in improving teacher training models and in utilizing program content, in addition to providing ongoing technical feedback to the project implementer on the function and efficiency of the project’s media delivery system. Additionally, DOD and USAID cooperated to provide signs bearing the U.S. and South African flags for display at project sites, including schools. In Mali, USAID allocated SPA funding for the implementation of community-based projects in communities where Peace Corps volunteers were working.
In addition, the Peace Corps provided USAID with one volunteer to assist in USAID’s implementation of a girls’ scholarship program in the northern region of the country. Also, the U.S. embassy purchased 750 radios for listening groups in the northern region, and 200 of the radios were distributed directly to a USAID distance-training program for teachers. In Morocco, the Peace Corps has used SPA funding to construct a library, school latrines, and residential student housing. In Honduras, a regional DOL program seeking to provide educational opportunities to children engaged in, or at risk of, exploitative labor incorporated an existing USAID distance-learning program into its set of 14 pilot projects. In the municipality of this particular pilot project, children, aged 13 to 16, were quitting school after the sixth grade in favor of working on the local coffee farms. The objectives of the local DOL implementer were to reduce the working hours of these children and provide them with an opportunity to complete their primary-level education. The USAID distance-learning program was particularly suited to these objectives, as it was capable of targeting children in seventh through ninth grade, was aligned with the national curriculum and certified by the Ministry of Education, came with predesigned materials, and could be tailored to fit participants’ scheduling needs. In the Dominican Republic, USAID and USDA, along with the local host government, coordinated to provide school lunches in order to increase primary school student enrollment. Originally begun under the GFEI in 2001, the program continued under USDA’s FFE program in 2004. In addition to the school lunches, activities under this program included repairs to existing schools, renovation of buildings and water systems, health and nutrition workshops, deworming, vitamin distribution to supplement nutrition, and animal husbandry activities to supplement incomes. 
In Zambia, the Peace Corps supplied over 20 volunteers to work with the USAID-funded implementer of a radio-based, primary-level, distance learning program. The volunteers focused on mentoring and training school committees in leadership and school management, with the hope that communities will become better equipped to support and maintain their own learning institutions. The volunteers also assisted the implementer in piloting new educational initiatives. Despite these examples of good coordination, we also observed several instances where agencies, particularly USAID and DOL, missed opportunities to collaborate and maximize their program efforts. In some of the countries we visited, we found that USAID and DOL implementers of projects to increase children’s access to basic education did not take advantage of opportunities to collaborate and leverage resources when coordination of activities would have been of mutual benefit. In several of these countries, DOL could have joined USAID’s efforts to affect policy reforms directed at rural youth by using USAID’s delivery mechanisms of radio and television programming as well as printed materials to raise public awareness of child labor issues. Likewise, USAID could have utilized the Student Tracking System developed by DOL to monitor enrollment and retention rates in its sponsored schools. Additional examples of coordination between USAID and other agencies follow. Unlike USAID, which had education teams in the countries we visited to coordinate and manage implementation of its education-related activities, DOL does not have a physical presence in-country and attempts to coordinate through other means. Specifically, DOL coordinates as follows: After holding their annual coordination meeting with USAID and State staff, DOL planners in Washington, D.C., communicate by cable activities planned for the fiscal year to State staff at overseas embassies. 
These cables list DOL’s planned projects, their prospective countries, estimated funding amounts, and a deadline for when the project Requests for Proposal will be made public. Although DOL’s fiscal years 2004 and 2005 cables do not mention coordination with USAID in-country, the fiscal year 2006 cable lists one USAID/EGAT staff member as an addressee and requests that the information be passed to the local USAID mission “where applicable.” DOL is represented in country by selected State embassy staff whom it informs of its upcoming projects through cables. The State representatives serving in these positions whom we interviewed appeared to have general knowledge of DOL’s basic education activities in-country but did not appear to have the detailed project knowledge that would be required to coordinate effectively with USAID. This means that DOL must rely on either these State embassy staff or its project implementers to coordinate with the local USAID mission. In its Solicitation for Grant Applications for basic education projects, DOL informs potential applicants of ongoing USAID efforts and expects applicants to implement programs that complement, and do not duplicate, existing efforts. Despite these efforts, coordination between local USAID missions and DOL project implementers varied across the countries we visited. For example, in Honduras, DOL’s implementer was collaborating with the USAID mission in country to adapt the mission’s distance-learning program to a child labor project. However, in Peru, the USAID mission lost its institutional knowledge of an existing DOL program upon the departure of its education team leader. The remaining USAID education team remained unaware of this project until the DOL implementer briefed the new USAID education contact 3 years into the project’s implementation. Additionally, in Peru, the USAID mission was not aware of a public DOL Request for Proposal to conduct new basic education activities in country.
In Morocco, USAID and the local DOL implementers were aware of each other’s programs but did not directly coordinate beyond minimal information exchanges. By contrast, in South Africa, a DOL implementer was unaware that USAID was also conducting basic education activities in-country. Similarly, in Zambia, the local USAID mission knew of a DOL EI program in country, but was unaware that the ILO-IPEC program also operating in country was DOL-funded. The turnover of agency and implementer staff in overseas locations may lead to challenges in coordination efforts. In Morocco, the USAID mission’s strategy stated that projects to create rural dormitories for girls may be implemented in partnership with Peace Corps volunteers who would assist with the community’s management of the dormitories and development of after-school programs. However, the Peace Corps and USAID senior staff we spoke with in country had not considered such an idea during the actual planning and implementation of the girls’ scholarship program. USAID and DOD almost missed an opportunity to coordinate their construction of school dormitories in Morocco. Prior to 1999, the local USAID mission did not know that DOD was implementing humanitarian assistance projects in Morocco. At the time, USAID’s basic education program in country had concluded that one reason rural girls were dropping out of school before sixth grade was that the middle schools were too far away from their homes. According to USAID officials, parents had safety concerns about sending their daughters to attend school so far away and were reluctant to make the financial sacrifice of having their daughter finish primary school if she could not also attend secondary school. Subsequently, USAID and DOD coordinated with local communities to build school dormitories for middle school girls in three towns.
According to the USAID officer responsible for coordinating this initiative, the coordination between USAID and DOD resulted in DOD building five dormitories. Coordination between the United States, host governments, and donors varied in the countries we visited. Coordination was stronger in countries, such as Egypt, Mali, Zambia, and Honduras, that possessed a combination of strong host government commitment to education reform, formal donor- led working groups specifically for education, and systems of mutual accountability, such as the World Bank’s Education for All-Fast Track Initiative. For example, in Egypt, the host government was working closely with international donors to develop a new National Strategic Plan for Education. Under the leadership of USAID, each donor had assumed responsibility for developing a portion of this plan. Additionally, the major education donors in Egypt met monthly to discuss division of responsibilities and upcoming efforts. We observed a similar situation in Mali, where the host government had allocated 30 percent of its budget toward education—60 percent of which went to basic education—and worked with donors to establish a framework through which the donors could invest in specific education sectors. These education donors in Mali held monthly meetings among themselves, as well as separate meetings with the host government, and collaborated on strategic planning, action plans, and common progress indicators, among other issues. At the time of our review, Mali, Zambia, and Honduras had also implemented, or were in the process of implementing, systems of mutual accountability associated with the World Bank’s Education for All-Fast Track Initiative. 
The Initiative provides for mutual accountability, where international donors provide coordinated and increased financial and technical support in a transparent and predictable manner, while host governments commit to primary education reform through the development of national education strategies in concert with the donors. Donors in Honduras met monthly and pooled their funding to provide direct budget support to the education sector to accelerate progress. According to donors, the pooled funding gave donors a means to ensure that the host government continued to implement the national education strategy. They stated that this is very important in countries where there is frequent political turnover. Although USAID usually does not give funds directly to government institutions, in Zambia, the USAID mission provides some funds to the Ministry of Education to support policy reform. The USAID mission also participates in high-level meetings and contributes to the decision-making process. Coordination between the United States, host governments, and donors was weaker in countries lacking a lead donor or host government committed to coordinating donor assistance. This included the Dominican Republic, Morocco, South Africa, and Peru. For example, in recent years donors have sought to strengthen local ownership of the education reform process by assigning host governments a key role in the donor coordination process, according to USAID. However, governments in several countries we visited lacked the capacity or will to hold such meetings. In Peru, for example, officials from bilateral donors and the host government stated that the concentration of donor efforts in rural areas working with regional administrators had isolated those projects from the national government, which tended to view project schools as “donor schools” unconnected to the larger education system. 
According to these officials, the disconnect between the central government and the bilateral programs inhibited the expansion of these programs to other areas and threatened their long-term sustainability. Similarly, in South Africa, the host government Ministry of Education had not called a donor meeting in almost a year and was not aware of all ongoing donor activities in basic education. In Morocco, one donor was unaware of the details of USAID’s basic education activities, and both agencies had independently developed their own matrices of other donors’ basic education projects, neither of which was updated or complete. By contrast, the host government in the Dominican Republic did call high-level donor meetings but discouraged the donors from meeting on their own. None of these countries had strong, donor-led coordination groups, with the exception of Peru, where donors had formed a formal coordination group, as well as an informal group of three donors, including the United States, focusing on decentralizing the host government’s education system. According to USAID, host government commitment, the development of sound education strategies, and effective donor coordination are essential to reforming basic education. Most donors we spoke to acknowledged that further improvements in coordination could result in more efficient delivery of assistance. Without good coordination, donors, including the United States, cannot easily monitor or assess host governments’ progress toward achieving Education for All by 2015—which is a strategic goal shared by State and USAID. While U.S. agencies we reviewed conduct basic education-related activities to achieve different goals, most assess and report on the results of their activities by collecting and using output measures—or the direct products and services delivered by a program, such as numbers of schools built or children enrolled.
While USAID can measure education access through outputs such as the numbers of students enrolled in primary school programs, it does not, in many instances, measure education quality, a key program outcome measure—or result of products and services provided, such as increased literacy rates. Our analysis showed that USAID can report on some outcomes such as primary school retention rates but faces challenges in collecting valid and reliable data on student learning in areas such as math and reading, which, according to USAID, provides the most direct outcome measure of increased educational quality. Furthermore, USAID cannot compare its program results between countries. To better assess its goal of improving education quality, USAID is developing a standardized test that could provide data on primary-level reading ability and would be comparable across countries. Other agencies measure progress in relation to their respective missions. In addition, State’s Office of the Director of Foreign Assistance plans to work toward developing methods to assess outcomes of all foreign assistance; however, these efforts are only in the early discussion phase. Without this information, agency officials cannot determine if programs are achieving their strategic goals. We have previously reported that both output and outcome measures are extremely valuable for determining success of federally funded programs. Table 3 shows the measures reported by U.S. agencies in their fiscal year 2006 Government Performance and Results Act (GPRA) performance and accountability reports. USAID works with its project implementers to establish project performance measures before an activity is approved. These measures vary according to the objectives of the specific activities. The implementers then collect information on the required measures and submit quarterly or annual reports detailing progress against those measures to technical officers at the local USAID mission.
Missions are then required to submit annual reports summarizing the progress of their activities, which often contain both specific output and outcome measures. Some of these measures are entered into the Annual Report Application (AR) system, which currently serves as the repository of USAID performance data from all USAID missions. Information in the AR system is used in USAID headquarters to support strategic planning, budget preparation, and performance reporting requirements. To report on its agency-wide progress, USAID reports on students enrolled in primary school, students completing primary school, and adult learners completing basic education. These output measures have also been used to determine which education programs have not met, met, or exceeded their output objectives. Some of the programs that have exceeded these output objectives have been terminated. For example, the joint State-USAID Congressional Budget Justification for the 2007 budget request showed that India and South Africa had exceeded their program goals for basic education. These countries were eliminated from the list of countries proposed to receive basic education allocations in the 2008 budget request. USAID, the primary provider of U.S. basic education assistance, is the only agency to track progress toward an agency-wide, education-specific goal—promoting increased access to quality basic education. However, USAID faces challenges collecting data on student learning, such as levels of reading comprehension, and cannot compare the results between countries. As a consequence, USAID is unable to report on the overall effect of its basic education activities on the quality of education, which can deny planners valuable information needed to prioritize and fund future programs. Prior GAO work on assessing performance measures for federally funded programs shows that both output and outcome measures are extremely valuable for determining program success.
USAID has begun to address this issue by developing systematic methods to compare education quality across countries and working with donors to identify common indicators for assessing student learning. In addition, USAID is considering the development and administration of new tests to assess learning outcomes in a select number of countries. According to USAID and UNESCO, testing of student achievement is a good measure of educational quality—particularly tests that assess learning in core subjects such as reading and basic mathematics. However, obtaining this type of data remains a challenge for various reasons. According to USAID, designing tools to assess student learning and, particularly, deciding on which methodology or standards to apply, can be time-consuming and expensive when done independently by USAID implementers and may also not be cost-effective given the objectives of a program. For example, a USAID official at one mission stated that a change in teacher practices resulting from a teacher training program would be significant in itself and that not all basic education interventions should be expected to result in improved student achievement. Poor host-country infrastructure, unfriendly geography, or both can also make systematic nationwide testing expensive and difficult. In countries where the USAID mission has the benefit of working with an existing national student examination, those exams may not test to existing international standards, and any changes to the national examination and its underlying curriculum can be politically sensitive. In addition, in some countries, such as the Dominican Republic, teachers’ unions can be resistant to the use of tests to evaluate student learning for fear that they will be held accountable for the results. Even if a national exam is successfully administered, the host government may not have the methodological expertise necessary to reliably compile and analyze the resulting statistics.
We examined 40 basic education programs in the eight countries we visited—including both USAID basic education programs and DOL programs to combat child labor through the provision of quality primary education—and found that about half of the 40 programs utilized outcome performance measures, or the results of products and services. These included, among other things, increased student performance, improved instructional methods, and increased community participation. Not all of these outcome measures were related to education quality. For example, DOL projects contained outcome measures specific to child labor, such as media coverage and local awareness of child labor issues. Most of the programs that utilized outcome measures set baselines and targets for these measures. All 12 of the Department of Labor programs we examined reported outcome measures, compared with approximately one-third of the 28 USAID programs. The remaining 19 USAID programs did not use outcome measures. See appendix V for more details on our analysis. According to USAID and UNESCO, testing of student achievement is a good measure of educational quality. USAID programs aimed at improving educational quality varied in their measurement of student achievement. Several lacked means to fully gauge student performance. For example, in South Africa, one teacher training program could not monitor student achievement in its preservice training component due to insufficient funds, although the program’s in-service component did contain student testing. In addition, a distance-learning program in one country province contained no means to assess teacher performance or student achievement, yet was planned to be expanded to a second province.
In Zambia, a teacher training program contained output indicators mandated by the Africa Education Initiative and the President’s Emergency Plan for AIDS Relief, such as the number of teachers trained, but these initiatives did not require an evaluation of teacher or student performance. The program independently added an additional measure to evaluate teachers on their implementation of the program materials and used student pass rates on the host country’s seventh grade graduation test as a substitute, or proxy, measure of student achievement. Such graduation tests are designed to identify students who will advance to the next phase of schooling but are not necessarily designed to provide data on trends in student learning. In Peru, Honduras, and the Dominican Republic, a regional Latin American teacher training program begun in 2002 did not require implementers to begin measuring impact on student performance until 2005. Other programs we examined, however, did have or were developing student assessment components, as follows: In Egypt, we observed perhaps the most extensive evaluation component for a program that was working closely with the host government’s Ministry of Education to develop tools for assessing student learning, teacher performance, and school management capacity nationwide. The student learning assessment tool specifically measured critical thinking capacity, problem solving skills, and subject matter knowledge in Arabic, science, and math. In Honduras, one program was developing primary school learning standards to strengthen the host government’s national student testing process. Additionally, according to USAID, one distance learning program is developing standardized testing to monitor variations in student achievement. In the Dominican Republic, a similar program was developing test instruments and analytical techniques to build the evaluation capacity of the host government’s educational system.
In Peru, one pilot program conducted student testing solely in its sponsored schools specifically to demonstrate the effectiveness of the program to the host government’s Ministry of Education. In the absence of an indicator to illustrate improved quality across countries, USAID uses primary school completion rates as a proxy measure in its agency-wide reporting. However, USAID acknowledges that completion rates do not directly correlate to educational quality. As described earlier, according to USAID and UNESCO, testing of student achievement is a good measure of educational quality. However, while national examinations may exist in certain countries, the curricula these tests are based on vary widely in their subject matter and academic standards. Additionally, very few developing countries incorporate existing international standards for student learning in their testing. These factors prevent meaningful comparisons of educational quality between countries, which could inform funding and programmatic decisions at the headquarters level. For fiscal year 2005, USAID’s Bureau for Policy and Program Coordination (PPC) began collecting data to help USAID identify an appropriate indicator to measure quality outcomes of its basic education programs. The information that USAID began collecting in its annual reporting system database included, to the extent available, results of host country national-level testing systems and USAID attempts to measure learning achievement. However, this information was never fully analyzed, and USAID’s database for this information will be terminated in fiscal year 2007 and replaced by a new joint State-USAID performance measures database called the Foreign Assistance Coordination and Tracking (FACT) system, which will be managed by the DFA.
According to DFA officials, the FACT system is meant to primarily contain numerical output indicators common across State and USAID missions and not include the mission-specific outcome indicators contained in USAID’s former annual reporting system. These indicators contained in the FACT system will be used to develop policy priorities, assess performance, and inform resource decisions. USAID, independent of the DFA process, began a new initiative in September 2006 to develop a better measure of educational quality across countries through the development of new testing instruments. These instruments are designed to provide data on primary-level reading comprehension comparable across countries. This project grew out of a World Bank Initiative in Peru that developed a Spanish-language reading comprehension test. USAID is attempting to build on the World Bank’s success by developing a simple screening instrument, which can provide general information on literacy within a given community, and an in-depth assessment instrument intended to provide cross-country comparisons of the degree of reading skill acquisition, determination of the grade at which a country’s education system is able to impart the capacity to read, and identification of the specific areas of weakness. According to the contract for the instruments, performance data provided by the new tests should permit comparison across countries and the tracking of changes in performance over time and should also be adaptable across languages and cultures to the degree possible. USAID plans to field test the instruments in English, Spanish, or French and is in negotiations with two host governments to begin pilot testing. USAID plans for the contract implementer to submit a report on the pilot tests’ results and their implications by September 30, 2007. 
According to one USAID official, it is expected that these new instruments, if successful, will allow USAID to better measure and compare educational quality across countries where it conducts basic education activities. USAID has also initiated discussion with other Education for All-Fast Track Initiative donors on how donors can assess the collective impact of their basic education assistance on learning outcomes. Additionally, in an effort to collect better data on education quality, USAID’s Education Office is considering the development and administration of new tests to assess learning outcomes in 10 countries over 12 months. The goal is to produce an assessment that will better demonstrate the impact of projects to improve educational quality, but that can be adapted by different missions facing different educational circumstances. The proposal recommends identifying two or three countries from each of the new foreign assistance categories, with representatives from Africa, Asia, and Latin America. The tests would cover literacy and mathematics and target fourth and eighth grade students, but would be adjustable for different grades and ages. The 12-month activity would cover initial development, and country applications would occur through mission buy-in into the activity. Although the primary purpose of this assessment would not be to directly compare different programs or countries with respect to what students know, the proposal estimates that, for cost-effectiveness, likely two-thirds of the test materials would be portable across countries, with the remaining items unique to local circumstances. While USAID, as noted earlier, is the only agency to track progress toward an education-specific goal, other agencies track progress related to their agency-specific missions or do not address their basic education activities in their agency-wide performance reporting because these activities are not directly related to their overall agency objectives.
For example, agencies track progress as follows: DOL and USDA report performance measures related to their particular agency objectives. For example, DOL primarily uses education activities as a mechanism for alleviating child labor and reports on children removed or prevented from exploitive work. USDA reports on the number of beneficiaries of its school lunch program. Both of these measures are tied to enrollment and attendance rates collected at the project level and are, therefore, related to educational access. DOL programs include project-level quality indicators, such as primary school completion rates. The MCC initially reported a single “rate of reform” measure based on multiple outcome-based health and education-related indicators, including total public expenditure on primary education and girls’ primary education completion rates. MCC now breaks out these individual indicators to compare performance among countries with threshold programs and compacts, as well as to determine the eligibility of countries for MCC assistance. DOD provides basic humanitarian aid and services to avert political and humanitarian crises, as well as to promote democratic development and regional stability. It collects information on how many projects it has funded and their costs, but does not address the educational impact of these projects. A DOD official stated that he would like to see the Humanitarian Assistance Program begin to measure its impact on countering terrorism, promoting goodwill, stabilizing the country, and increasing economic growth. Although the Peace Corps tracks the number and location of its volunteers, it does not assess the impact of its basic education activities because, according to Peace Corps officials, these activities are too small in scale to be suitable for such monitoring. In January 2006, the Secretary of State appointed a DFA and charged him with directing the transformation of the U.S.
government’s approach to foreign assistance and ensuring that foreign assistance is used as effectively as possible to meet broad foreign policy objectives. Specifically, the DFA: has authority over all State and USAID foreign assistance funding and programs, with continued participation in program planning, implementation, and oversight from the various bureaus and offices within State and USAID, as part of the integrated interagency planning, coordination, and implementation mechanisms; has created and directed, through a foreign assistance framework, consolidated policy, planning, budget, and implementation mechanisms and staff functions required to provide umbrella leadership to foreign assistance; plans to develop a coordinated U.S. government foreign assistance strategy, including multiyear, country-specific assistance strategies and annual country-specific assistance operational plans; and plans to provide guidance to foreign assistance delivered through other agencies and entities of the U.S. government, including MCC and the Office of the Global AIDS Coordinator. According to a DFA official, the DFA’s office spent its first year developing the foreign assistance framework, preparing the proposed 2008 consolidated State and USAID budget, and providing guidance for country teams to develop operational plans. The foreign assistance framework includes five objectives: (1) peace and security, (2) governing justly and democratically, (3) investing in people, (4) economic growth, and (5) humanitarian assistance. Basic education falls under the objective of investing in people. According to a State official, the new budget and planning process is intended to give the Secretary of State the ability to evaluate the effectiveness of foreign assistance to improve effectiveness, impact, and efficiency through better coordination, at every level. 
Looking forward, the DFA is examining ways to improve (1) coordination of foreign assistance, including basic education and (2) measurement of program outcomes. While the DFA has begun to address the issue of better coordinating all U.S. foreign assistance by bringing together core teams to discuss U.S. development priorities in each recipient country, it is unclear to what extent these efforts will be accepted and implemented by agencies whose foreign assistance programs are not under DFA’s direct authority. According to DFA officials, during the first phase of coordination efforts, USAID, State, and DOD (as an implementing partner of certain USAID and State programs) have been meeting to discuss coordination of assistance. The DFA plans to engage other agencies such as USDA and DOL in the coordination discussions. However, DFA officials stated that there is no requirement for other agencies to participate in these dialogues. DFA acknowledges the need for outcome measures to better describe the impact of basic education, as well as other foreign assistance areas. According to a DFA official, developing outcome indicators for all assistance programs is difficult because of the differing program objectives those programs may possess. For example, some programs may meet the political objectives of the United States, while others may meet purely development objectives. DFA plans to use as many outcome measures as possible generated by third parties, such as World Bank statistics and UNESCO literacy rates. Also, DFA plans for missions to submit “Foreign Assistance Reports” back to Washington, which would combine their FACT data with locally generated outcome measures to demonstrate the cumulative effects of their programs. However, this process and the outcome measures it might contain have not been developed, and DFA does not currently have a timetable for implementing these initiatives. 
Although an agency can use outputs, outcomes, or some combination of the two to reflect the agency’s intended performance, the Government Performance and Results Act (GPRA) is clearly outcome-oriented, and thus an agency’s performance plan should include outcome goals whenever possible. DFA officials acknowledged that the new performance reporting system, as it currently stands, will not report the outcome results of basic education programs to managers in headquarters. Without a government-wide mechanism to systematically coordinate all agency efforts in basic education at the headquarters level, agencies’ programs may not maximize the effectiveness of U.S. assistance. The new State DFA efforts to implement a country-wide program planning and budgeting process, which is designed to better manage the delivery of foreign assistance, may improve coordination of basic education programs at the country level, but this process is still evolving, and it remains to be determined what impact these efforts will have on future strategic planning of education-related assistance. Moreover, having reliable and systematic methods to determine if basic education programs are meeting their goals could help better inform U.S. agencies’ decisions regarding the planning and execution of basic education-related assistance. Although the DFA plans to work toward developing methods to assess outcomes of all foreign assistance, these efforts are only in the early discussion phase. To enhance efforts to coordinate and better assess the results of U.S.
international basic education-related activities, we are making three recommendations:

- To improve interagency coordination of basic education efforts at headquarters in Washington, we recommend that the Secretary of State work with the heads of executive branch agencies responsible for international basic education-related assistance to convene formal, periodic meetings at the headquarters level among cognizant officials.
- To improve interagency coordination in recipient countries, we recommend that the Secretary of State direct the relevant countries’ Ambassadors to establish a mechanism to formally coordinate U.S. agencies’ implementation of international basic education-related activities in the relevant country.
- To better assess the results of U.S. basic education assistance, we recommend that the Secretary of State, through the DFA, work with USAID and, to the extent practicable, with other U.S. agencies providing basic education-related assistance to develop a plan to identify indicators that would help agencies track improvements in access to quality education. Indicators could include output measures, such as the numbers of U.S. programs designed to improve curriculum and teacher training and to develop and validate student tests, and outcome measures, such as literacy and numeracy assessments of student achievement.

We provided a draft of this report to State, USAID, USDA, DOD, DOL, MCC, and the Peace Corps. We obtained written comments on the draft of this report from State, USAID, and USDA (see apps. VI, VII, and VIII). State generally concurred with our recommendations and noted that its Office of the Director of U.S. Foreign Assistance is in the process of developing mechanisms to ensure coordination of U.S. assistance programs with other federal agencies, implementers, and stakeholders. In addition, State’s Office of the Director of U.S.
Foreign Assistance is working with USAID, State, and others in the international community to develop appropriate measures for learning outcomes. We agree that these are positive steps toward improving the coordination of U.S.-supported basic education programs and the ability to measure whether basic education programs abroad are achieving their goals, and we encourage State to continue to work with the heads of executive agencies to this end. USAID concurred with our recommendations, agreeing that greater U.S. government coordination is needed and that more needs to be done to improve education outcomes in country and to better understand the impact of U.S. support to basic education. USDA concurred with our recommendations and indicated that it will work with the Department of State in the manner the report recommends. We also received technical comments on this draft from State, USAID, DOL, MCC, and the Peace Corps, which we incorporated where appropriate. We are sending copies of this report to appropriate Members of Congress; the Secretaries of the Departments of Agriculture, Defense, Labor, and State; the Administrator of the U.S. Agency for International Development; the Director of the Peace Corps; and the Chief Executive Officer of the Millennium Challenge Corporation. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4128 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. To describe U.S.
agencies’ basic education activities and how the activities are planned, we obtained and analyzed strategic, budget, and programmatic documents for fiscal years 2001 through 2006 from the Departments of Agriculture (USDA), Defense (DOD), Labor (DOL), and State (State), as well as the Millennium Challenge Corporation (MCC), the United States Agency for International Development (USAID), and the Peace Corps. The documentation included, when available, strategic plans at the mission, country, regional, and global levels. We also interviewed program officials and requested data from these agencies in Washington, D.C., to identify the types of basic education-related activities, the recipient countries of these activities, and the estimated funding levels of the programs. These included educational activities that corresponded to USAID’s definition of basic education, such as primary education, secondary education, early childhood development, and adult literacy. These activities also included those implemented under special or administration-directed initiatives related to basic education. We assessed the reliability of the funding data by reviewing existing information about the data and the system that produced them and interviewing agency officials knowledgeable about the data. USDA and DOL did not disaggregate funds specifically allocated to the basic education components of their larger programs. We found all agencies’ data sufficiently reliable for representing the nature and extent of their program funding and activities. We did not assess the reliability of the World Bank’s selected indicator data because they were used for background purposes only. To learn about the implementation of international basic education assistance overseas, we observed ongoing program activity in the following eight countries: Dominican Republic, Egypt, Honduras, Mali, Morocco, Peru, South Africa, and Zambia. 
We selected a nonprobability sample of foreign countries designed to ensure geographic diversity and representation of basic education programs from multiple U.S. agencies and international donors. The sample was also designed to include countries that implement special or administration-directed initiatives related to basic education. In the countries, we met with representatives from State, USAID, USDA, DOD, DOL, the MCC, and the Peace Corps; officials representing embassies and USAID missions in the countries visited; officials administering international basic education programs; and officials from foreign governments, nongovernmental organizations (NGOs), the United Nations (UN), and other international organizations. Within each country, we examined all U.S. agency basic education activities ongoing at the time of our visit and discussed these activities with relevant agency officials. To determine the mechanisms the United States uses to coordinate national and international basic education assistance, we analyzed agency coordination documents and interviewed relevant U.S. agency, host government, and international donor officials in our eight sample countries. Documentation we examined included e-mails, meeting minutes, memoranda of understanding, policy agendas, host government education sector strategies, and other supplemental documentation. We met with officials from State, USAID, USDA, DOL, DOD, the Peace Corps, and the MCC in Washington, D.C., to discuss interagency coordination at the headquarters level. In each of our eight sample countries, we discussed coordination of international basic education assistance with relevant officials from U.S. agencies, U.S. program implementers, host countries’ Ministries of Education, and international donors with basic education programs in-country. To evaluate how U.S.
agencies monitor and assess the results of their international basic education programs, we obtained and examined contractual and monitoring and evaluation documents for each of the basic education projects we visited. For each ongoing project, we interviewed officials from the implementing organizations, as well as any U.S. agency official(s) monitoring the implementer’s progress. In our interviews, we discussed project monitoring, data baselines, and progress indicators. We supplemented these interviews with a review of reporting documentation associated with 40 of the basic education projects we discussed with program implementers. This sample included all ongoing projects that we visited in our eight sample countries. The documentation that we reviewed included the contracts, cooperative agreements, statements of work (program descriptions), performance monitoring plans, and monitoring reports for the 40 projects. Furthermore, to describe USAID’s process for collecting and using performance measures, we interviewed USAID officials and analyzed agency documents. To describe the new planning process for foreign assistance and its impact on collecting indicator data, we interviewed State and USAID officials and analyzed relevant documentation. To determine the extent to which the projects had outcome measures, used baselines, and set targets, we identified and analyzed the performance measures in the programs’ documentation. We coded performance measures as outcomes if they were linked to program objectives and had clearly reported results. We also assessed whether the outcome measures we identified established clear baselines and set targets. To ensure accuracy in our coding, two coders independently reviewed the program documentation and met to reconcile any initial differences in their coding. In addition, another staff member independently reviewed the coding decisions. 
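As a rough illustration of the double-coding step described above, the following sketch computes initial intercoder agreement and flags the items the two coders would meet to reconcile. The function name, labels, and sample data are hypothetical, not drawn from our actual coding records.

```python
def agreement(coder_a, coder_b):
    """Return percent agreement between two coders and the indices
    of the items they must meet to reconcile."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    # Indices where the two coders assigned different categories.
    disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b))
                     if a != b]
    pct = 100.0 * (len(coder_a) - len(disagreements)) / len(coder_a)
    return pct, disagreements

# Hypothetical codes for ten performance measures, each classified
# as either an "outcome" or an "output" measure.
a = ["outcome", "output", "output", "outcome", "output",
     "outcome", "outcome", "output", "output", "outcome"]
b = ["outcome", "output", "outcome", "outcome", "output",
     "outcome", "outcome", "output", "outcome", "outcome"]
pct, to_reconcile = agreement(a, b)
print(f"{pct:.0f}% initial agreement; reconcile items {to_reconcile}")
```

In this hypothetical run the coders disagree on two of ten measures, which they would then reconcile before a third reviewer independently checks the final decisions.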
Although the findings from our site visits in each country and our review of ongoing basic education projects are not generalizable to the population of basic education programs, we determined that the selection of the countries and programs reviewed was appropriate for our design and objectives. We conducted our fieldwork in Washington, D.C., and in the Dominican Republic, Egypt, Honduras, Liberia, Mali, Morocco, Peru, South Africa, and Zambia from December 2005 to March 2007 in accordance with generally accepted government auditing standards. [Appendix table: list of the basic education projects reviewed, including ILO/IPEC child labor education programs in Central America, the Dominican Republic, Morocco, Uganda, and Zambia. Table legend: N/A = countries that are not eligible for MCC funding; Fast Track Initiative countries are marked.] We studied 40 programs from the eight countries visited during fieldwork. The programs had multiple performance measures and often included a mix of outcome and output measures. We identified measures using criteria that required them to be clearly identified as performance measures, have clearly reported results, and be clearly linked to program objectives.
See appendix I for more details about how the programs were selected for study and about the methodology we used to analyze their measures. Table 4 below shows the types of measures contained in the programs we examined. Table 5 shows the characteristics of the outcome measures being used by the programs, and Table 6 shows how these outcome measures were used. The following are our comments on the Department of State’s letter dated March 26, 2007. 1. State said that its Office of the Director of U.S. Foreign Assistance over the past year has undertaken a process to ensure the kind of coordination necessary for coherent U.S. government assistance programs in all areas, including basic education. Also, State said that its experience to date has demonstrated willingness by other federal agencies, such as the Department of Defense, the U.S. Trade Representative, and the Millennium Challenge Corporation, to work with the office within the Foreign Assistance framework. In addition, State expects fiscal year 2008 to be the first operational planning cycle with full participation by other agency implementers and stakeholders. At the time of our review, some of the other agency officials that we met with in Washington said that their respective agencies had not yet been invited to participate in such coordination efforts. Therefore, we believe that State should continue working to improve coordination, both at headquarters and in recipient countries, among all agencies involved in international basic education-related activities. 2. State said that USAID, State, and others in the international community are working together to try to develop appropriate measures for learning outcomes that would address the question of whether a quality education is being provided. Also, State noted that its Office of the Director of U.S.
Foreign Assistance is building on the long history and best practices that USAID and other agencies have accumulated from many years of performance management and thorough evaluation. Our report notes the efforts of State, USAID, and the international community in this regard and that these efforts have just begun. However, we maintain that a plan should be developed to better guide these efforts to help agencies track improvements in the access to quality education. The following are our comments on the Department of Agriculture’s letter dated March 22, 2007. 1. We deleted the statement that “USDA. . . can not disaggregate the amount of funds allocated specifically for basic education related activities.” from the report. Also, in the report we explain that USDA funding allocations include basic education components that support its broader mission goals and provide examples accordingly. 2. We acknowledge USDA’s coordination efforts with State, USAID, and the Office of Management and Budget as a good example of interagency coordination. In addition to the individual named above, Zina Merritt, Assistant Director; Virginia Chanley; Martin de Alteriis; Harriet Ganson; Emily Gupta; David Hancock; Victoria Lin; Grace Lui; Grant Mallie; Patricia Martin; Deborah Owolabi; and Anne Welch made key contributions to this report. The team benefited from the expert advice and assistance of Joseph Carney, Elizabeth Curda, Joyce Evans, Etana Finkler, Bruce Kutnick, Jena Sinkfield, and Cynthia Taylor.
Pub. L. No. 109-102, section 567, mandated that GAO analyze U.S. international basic education efforts overseas. In this report, GAO (1) describes U.S. agencies' basic education activities and how the agencies plan them; (2) examines U.S. coordination of basic education efforts among U.S. agencies, and with host governments and international donors; and (3) examines how U.S. agencies assess the results of their basic education programs. In conducting this work, GAO obtained and analyzed relevant agencies' documents and met with U.S. and foreign government officials and nongovernmental organizations, traveling to selected recipient countries. Several U.S. agencies--the Departments of Agriculture (USDA), Defense (DOD), Labor (DOL), and State, as well as the Millennium Challenge Corporation (MCC), U.S. Agency for International Development (USAID), and the Peace Corps--support basic education activities overseas. State and USAID have strategic goals specific to promoting improved education. Several other U.S. agencies support basic education-related activities as part of programs that address their broader mission goals. For example, DOL supports alternative school programs as a way to remove children from exploitative work, USDA provides school meals or take-home rations to students, and DOD constructs dormitories and schools to provide better access for children who have to travel long distances to attend classes. GAO found that agencies did not always coordinate in the planning or delivery of basic education-related activities. From 2001 to 2006, there was no government-wide mechanism to facilitate interagency collaboration and, as a result, GAO identified instances where agencies missed opportunities to collaborate and maximize U.S. resources. In addition, GAO found that the level of U.S. coordination with host governments and other donors in the eight visited countries varied. 
Without effective coordination, donors cannot easily monitor or assess the host government's progress toward achieving international goals, such as Education for All by 2015, one of State-USAID's strategic goals. While U.S. agencies GAO reviewed conduct basic education-related programs to achieve different goals, most collect and use output measures, such as the numbers of schools built or children enrolled, to assess and report on results. USAID is the only agency with an education-specific goal of increasing access to quality basic education. However, in many instances, USAID faces challenges in collecting valid and reliable data needed to measure improvements in education quality. Without this information, agency officials cannot fully determine if the programs are achieving their strategic goals.
DOD aircraft are used to perform a variety of missions. However, for the purpose of this report, we have grouped them into five basic categories: (1) various models of fighter/attack aircraft, such as the F/A-18 Hornet, provide air superiority or close air support of ground forces; (2) bombers, such as the B-1 Lancer, provide long- and short-range delivery of heavy munitions; (3) electronic command and control aircraft, such as the E-3 Sentry, provide airspace and battlefield reconnaissance, command, and control services; (4) tankers and cargo aircraft, such as the KC-135 Stratotanker and the C-5 Galaxy, respectively, provide air refueling services and the ability to carry troops and equipment anywhere in the world; and (5) helicopters, with their ability to hover as well as conduct long- and short-range operations, are used for a variety of missions, including transportation of troops and equipment, air assault and reconnaissance, and search and rescue operations. Our review included a total of 49 different aircraft models (over 5,600 individual aircraft in 2002) in these five categories. These aircraft were considered by the services to be their key active-duty operational aircraft. Table 1 lists these aircraft models, along with the military service using them, and their MC and FMC goals for fiscal year 2002. DOD Instruction 3110.5, dated September 1990, requires all military services to establish quantitative availability goals and corresponding condition status measurements for these aircraft and other mission-essential systems and equipment. The goals established must estimate the maximum aircraft performance that is achievable on the basis of the aircraft’s design characteristics and planned peacetime usage, and assuming full funding and optimal operation of the peacetime manpower and logistic support systems. Military personnel, civilian contractors, or both may perform the required maintenance under these systems.
The instruction prescribes a basic set of condition status measures, including FMC, partial MC, and MC, that each service must use to describe the capability of systems or equipment. FMC indicates that an aircraft has all of the mission-essential systems and equipment it needs to perform all of its missions installed and operating safely. Mission-essential systems are those required to perform primary functions such as fire control, bombing, communications, electronic countermeasures, or radar. Partial MC indicates that an aircraft has the operable mission-essential equipment it needs to perform at least one of its missions, but not all. For example, an aircraft expected to be able to carry troops into combat during wartime in all weather conditions, as well as to be able to fly humanitarian missions during peacetime, would be considered partial MC if some of its equipment were broken and it could fly only humanitarian missions in clear weather. MC consists of the sum of the partial MC and FMC measures; that is, the number of MC aircraft is equivalent to the sum of the aircraft rated partial MC and the aircraft rated FMC. This report focuses on MC and FMC goals because the Army, Navy/Marines, and parts of the Air Force do not establish separate partial MC goals. Many of DOD’s key aircraft have been unable to meet their MC and FMC goals since at least 1998. For example, during fiscal years 1998-2002, only 23-35 percent of the 49 aircraft models we reviewed were able to meet their MC goals, and 31-49 percent met their FMC goals. In most cases, the actual rates were at least 5 percentage points below the goals. Average MC and FMC rates varied by service and type of aircraft. For example, the Army and Air Force had the highest average MC rates, followed by the Marines and the Navy. These rates have increased slightly since fiscal year 2001 in all services except the Navy. Among aircraft types, the average MC rates varied from 60 to 80 percent. 
Average MC rates were the highest for helicopters, followed by cargo aircraft and tankers, fighter/attack aircraft, bombers, and electronic command/control aircraft. While the rates have fluctuated, MC and FMC goals have generally remained constant over time. Since 1998, only 11 of 49 aircraft models (22 percent) experienced a change to their goals, and 7 of these changes were to raise the goals. DOD’s key, high-demand aircraft have experienced widespread difficulties in meeting MC and FMC goals since at least 1998. (Appendix I provides a full listing of MC and FMC goals, rates, and other information by year for each aircraft model we reviewed.) For example, during fiscal years 1998-2002, the percentage of aircraft models meeting their MC goals never exceeded 35 percent. (See fig. 1.) During this period, the rates for the individual aircraft models were more than 5 percentage points below their MC goals in 62 percent of the cases. The percentage of aircraft models meeting FMC goals during the same period ranged from 31 to 49 percent, and 71 percent of the cases were more than 5 percentage points below the goals. At the service level, Army aircraft generally met their MC goals the most frequently, followed by the Marine Corps, Air Force, and Navy. (See fig. 2.) The same rank order held for FMC goals. As previously shown in table 1, the level at which the goals were set showed little consistency, varying widely even among the same type of aircraft. For example, MC goals for the bombers in our review ranged from 50 to 80 percent, and MC goals for the fighters, from 65 to 83 percent. Actual MC rates also varied between services and the various aircraft types. MC and FMC rates are based on the ratio of the number of hours an aircraft was actually available to the total number of hours it could have been available.
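As an illustration, the hours ratio just described can be sketched in code. The function name and sample hour figures here are hypothetical; the optional depot-maintenance adjustment reflects the Navy/Marines and Air Force practice of excluding scheduled depot time from the hours an aircraft could have been available, an adjustment the Army does not make.

```python
def capability_rates(mc_hours, fmc_hours, possessed_hours, depot_hours=0.0):
    """Compute MC and FMC rates as percentages of available time.

    possessed_hours is the total time the aircraft could have been
    available. The Navy/Marines and Air Force subtract time spent in
    scheduled depot maintenance (depot_hours); the Army passes 0.
    mc_hours already includes fmc_hours, since MC = partial MC + FMC.
    """
    denominator = possessed_hours - depot_hours
    mc_rate = 100.0 * mc_hours / denominator
    fmc_rate = 100.0 * fmc_hours / denominator
    return mc_rate, fmc_rate

# Illustrative figures only: 8,760 possessed hours in a year,
# 1,000 of them spent in scheduled depot maintenance.
mc, fmc = capability_rates(mc_hours=6000, fmc_hours=4500,
                           possessed_hours=8760, depot_hours=1000)
print(f"MC rate: {mc:.1f}%, FMC rate: {fmc:.1f}%")  # MC 77.3%, FMC 58.0%
```

Because the MC count includes both FMC and partial MC aircraft, an FMC rate computed this way can never exceed the corresponding MC rate, and omitting the depot adjustment (as the Army does) always yields an equal or lower rate for the same hours.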
The Navy/Marines and Air Force reduce the latter figure by the amount of time an aircraft was away for scheduled depot maintenance, while the Army does not make this adjustment. We computed the average rates by service and aircraft type from service data on the total number of hours each aircraft model was MC and FMC, and the total hours each aircraft model was available each year. The average annual MC and FMC rates for the services as a whole are shown in figures 3 and 4. The Army and the Air Force had the highest average MC rates, at 77-83 percent during fiscal years 1998-2002; followed by the Marines, at about 71-75 percent; and the Navy, at 61-67 percent. A similar pattern follows for the average FMC rates for the services. When grouped by type of aircraft, average annual MC rates were highest for helicopters (76-80 percent), cargo/tankers (75-79 percent), and fighter/attack aircraft (75-77 percent). Average annual MC rates for bombers (64-69 percent) and electronic command/control aircraft (60-67 percent) were somewhat lower. Average FMC rates showed similar rank orders. (See figs. 5 and 6.) MC and FMC goals have generally remained constant over time. Since 1998, only 11 of 49 aircraft models (22 percent) experienced a change to their MC goals, FMC goals, or both. Seven models had their goals raised, and three had their goals lowered. One model’s MC goal was changed but then returned to its initial level. Ten of the 11 changes were for aircraft operated by the Air Force. The remaining change was for a Marine Corps aircraft. (See app. I for additional details.) In fiscal year 2002, for example, the Air Force raised the MC goal for its E-8 Joint Stars electronic command and control aircraft from 73 to 75 percent. According to officials, the E-8 is a relatively new (3-year-old) aircraft that is slowly increasing its performance level as it matures and Air Force maintenance personnel understand the aircraft better.
The increase in the MC goal was based on an analysis of actual E-8 MC rates, which were showing an upward trend in performance. The Air Force is the only service that routinely conducts formal reviews of its goals. Air Force officials told us that they generally try to keep the goals high because it is difficult to stop the goals from dropping further once they begin to be lowered. Moreover, officials believed that contractors need to be held to high standards to keep spare parts inventories and other aspects of maintenance at high levels. In another case, the MC goal for the Marine Corps’ F/A-18D Hornet fighter was raised from 60 to 75 percent, and its FMC goal, from 46 to 58 percent at the beginning of fiscal year 2000. According to Navy documents, this increase was due to a change in the aircraft’s assigned mission. While most of the goals were either unchanged or increased, the Air Force’s Air Combat Command developed a set of interim goals in fiscal year 2000 for some of the fighters, bombers, and electronic command/control aircraft under its command. These interim goals were lower than its official MC goals. In 1999, the Command determined that problems with suppliers and manpower shortages were undercutting its ability to meet MC goals and lowering unit morale. To combat this problem, the Command developed the interim goals listed in table 2. In 2002, the Command returned to using the pre-2000 goals for all but six aircraft (A-10, E-3, F-15 C/D, F-15 E, RC-135, and U-2). According to Command officials, the lower goals applied only to their units. Goals for suppliers remained at official levels to keep spare parts inventories high. Neither the other services nor the Air Force’s other major commands responsible for aircraft operations have developed interim goals. According to DOD officials, difficulties in meeting MC and FMC goals are caused by a complex combination of interrelated logistical and operational factors, with no single dominant problem.
The complexity of aircraft design, the lack of availability and experience of maintenance personnel, aircraft age and usage patterns, shortages of spare parts, depot maintenance systems and other operational factors, and perceived funding shortages were all identified as causes of difficulties in meeting the goals. As indicated below, our work found that some indicated factors were valid causes, while the impact of others was less certain. Officials believe that the complexity of military aircraft affects their availability, and thus their ability to meet MC goals. Military aircraft are designed to handle a specific set of missions and provide a specific set of capabilities over a projected useful lifespan. According to officials, each aircraft can be inherently complex and maintenance intensive or, depending upon the missions and capabilities it was designed to provide, simple and easy to maintain. For example, the B-2 bomber had the lowest MC rates (32-44 percent) of any aircraft we reviewed. However, according to Air Combat Command officials, one reason for these low rates is the complex design of the aircraft. The B-2 is a very advanced aircraft with low observable (stealthy) characteristics using new composite materials, and Air Force personnel are still learning how to maintain the aircraft. In contrast, the B-52 bomber had some of the highest MC rates (76-84 percent) of all the aircraft we reviewed. According to Air Force officials, the B-52 is a relatively simple and flexible design intended for ease of maintenance and durability. Service officials also frequently linked shortages of the total number of maintenance personnel, as well as their experience level, to the failure to meet MC goals. Navy officials told us that the growing sophistication of their aircraft in general requires maintenance personnel to take longer to learn the complex computer and electronic skills needed to handle the aircraft.
However, high demand for these skills in the private sector makes it difficult to retain personnel with these maintenance skills, leading to turnover and increasing the difficulty in meeting the MC goals. Similarly, a recent study published in the Air Force Journal of Logistics found that the number and experience level of maintenance personnel correlated highly with the MC rates of F-16 aircraft. As the number of experienced personnel assigned to an aircraft increased, the MC rates increased as well. Army officials also cited shortages of experienced maintenance personnel as a cause of lower MC and FMC rates. However, they also stated that it may be possible to raise the rates by maximizing the time that maintenance personnel actually spend maintaining the aircraft. For example, one Army Audit Agency study in 1998 found that maintenance personnel at one unit were spending about 70 percent of their time on nonmaintenance activities such as administrative duties, training, and time attending to personal duties. Personnel management is an area that we have cited as a major management challenge and program risk for DOD. For years, DOD has been wrestling with shortages of key personnel because of retention problems. In 1999 we reported that the majority of factors cited as sources of dissatisfaction and reasons to leave the military were related to work circumstances, such as the lack of spare parts and materials needed to perform daily job requirements. The advancing age and usage patterns of aircraft were other factors often cited by service officials as reasons why aircraft did not meet MC goals. DOD’s inventory of aircraft is getting older. The Congressional Budget Office recently reported that from 1980 to 2000, the average age of active-duty Navy aircraft rose from 11 years to more than 16 years; Air Force aircraft, from 13 to more than 20 years; and Army helicopters, from 10 to over 17 years.
Logistics officials told us that aging influences on MC rates typically follow a cyclical pattern over the life of an aircraft. When aircraft are initially introduced, they go through a “shake down” period and have low MC rates as new equipment and supply systems stabilize and maintenance personnel learn to understand the aircraft. Eventually, MC rates begin to rise and then stabilize at a higher working level. However, as more and more flying time is accrued over the passing years, problems due to materials and parts fatigue, corrosion, and obsolescence increase, and MC rates begin to fall again. Modernization programs are then instituted to replace worn and obsolete equipment, and the pattern begins again. Although age may affect MC rates, we found no statistical evidence that age alone explains difficulties in meeting MC goals. For example, our analysis of average aircraft ages and 2002 MC rates found no indication that older aircraft have the lowest MC rates. (See table 3.) With an average age of 40 years, the B-52 is the second oldest aircraft in DOD’s inventory. However, its MC rate of 81 percent for 2002 and historical MC rates consistently in the upper 70s and low 80s rank it among the highest performers we reviewed. According to Air Force officials at the Air Combat Command, in addition to their simplicity, B-52s have a relatively low number of actual flight hours, averaging about 16,000 hours each despite their age. These officials believed that accrued flight hours are a more appropriate measure of wear and tear than chronological age. Moreover, according to these officials, the B-52 was originally scheduled to retire in the mid-1990s. However, because of its durability and flexibility, the Air Force decided to retain the aircraft until the fleet’s average accrued flight hours reach 32,000, projected to occur about 2040. Logistics officials also believe that MC rates are affected by usage patterns and whether the aircraft is operated under the conditions for which it was designed.
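A quick arithmetic check, using only the B-52 figures above and assuming roughly constant utilization (an assumption made here for illustration), is consistent with the retirement projection of about 2040:

```python
# Back-of-the-envelope check of the B-52 figures: average age about 40
# years, about 16,000 accrued flight hours per aircraft, and planned
# retirement when average flight hours reach 32,000. Constant annual
# utilization is assumed for illustration only.

avg_age_years = 40
avg_flight_hours = 16_000
retirement_hours = 32_000

hours_per_year = avg_flight_hours / avg_age_years              # implied utilization
years_remaining = (retirement_hours - avg_flight_hours) / hours_per_year

print(f"Implied utilization: {hours_per_year:.0f} flight hours per year")
print(f"Years until the 32,000-hour average: {years_remaining:.0f}")
```

At roughly 400 flight hours per year, the fleet would need about 40 more years to reach the 32,000-hour average, in line with the projection of about 2040.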
Officials told us that the large increase in deployments in recent years has caused many DOD aircraft to be operated at rates higher than expected during their design, thus accelerating aging problems. For example, according to the Air Force Journal of Logistics study, F-15 fighters sent to Saudi Arabia in 1997 were flown at over three times their normal rate. Shortages of spare parts have been recognized by us and others for years as a major contributor to lower-than-expected MC rates. As a result, we have also cited DOD inventory management as a major management challenge and program risk since 1990. Service officials continued to cite spare parts shortages as a frequent cause of difficulties in meeting MC goals. Spare parts shortages are caused by a number of problems, including underestimates of demand, and contracting and other problems associated with aging aircraft or small aircraft fleets. We have reported on DOD’s problems in estimating aircraft spare parts requirements for years. For example, in 1999 and again in 2001, we reported that shortages of spare parts caused by inaccurate forecasting of inventory requirements were degrading MC rates for key Air Force aircraft such as the B-1B bomber, C-5 cargo planes, and F-16 fighters. In 2001 we reported that key Navy aircraft were also having readiness problems because of spare parts shortages resulting from underestimates of demand. Officials continued to raise this issue as an underlying factor in spare parts shortages. In addition, some officials also believed that the higher operating tempos associated with increased deployments have caused parts to fail more quickly than expected, exacerbating weaknesses in forecasting inventory requirements. Air Force officials told us that aging aircraft, in particular, may experience parts shortages and delays in repairs because original manufacturers may no longer make required parts. To obtain a new part, officials must wait for it to be manufactured.
However, this may not be a high priority for the commercial supplier because of the relatively low profit potential. Alternatively, another company could make the part if the original manufacturer were willing to give up its proprietary rights. However, this can take longer and be more expensive than simply waiting for the original manufacturer. Moreover, officials also told us that spare parts inventories are sometimes reduced when aircraft are nearing the end of their projected life. For example, Air Force officials said that in the mid-1990s they began to shut down the spare parts supply for the B-52 because of its anticipated retirement. This resulted in a depletion of inventories, the canceling of contracts, and ultimately a drop in MC rates from 1997 to 2000. As a result of the decision to retain the B-52, the supply system is recovering and MC rates are moving up. Similarly, the size of the aircraft fleet can also influence spare parts inventories and MC rates. According to officials, manufacturers may see little profit in stocking large inventories of spare parts for a small fleet of specialized military aircraft. Small fleets of aircraft can also suffer from having their MC rates strongly influenced by the MC failures of just a few aircraft. Large fleets of aircraft also have an advantage in having more opportunities to remove serviceable parts from one aircraft and install them in another—termed “cannibalizing”—thus helping to insulate their MC rates from the impact of parts shortages. However, we recently reported that while cannibalization is a widespread practice among the services, it increases maintenance personnel workloads and lowers morale and retention. Air Force and Navy officials cited changes to their maintenance approaches as a significant cause of slower repair times and lowered MC rates. In the mid-1990s the Air Force changed from a three-level maintenance approach to a two-level approach. 
This change moved much of the intermediate maintenance functions, such as the replacement or emergency manufacture of parts, away from the air base level to centralized maintenance depots. According to officials at both the Air Combat Command and Air Mobility Command, these changes slowed the pace of repairs significantly. Repair expertise was taken away from the base level, and aircraft were shipped away from home base more often for repairs. Moreover, officials believed that many experienced maintenance people were lost as they refused to move to other locations associated with the reorganizations. In this regard, our 1996 review of depot closures noted that DOD’s outplacement program helped limit the number of involuntary separations and that jobs were often available for employees willing to relocate. The Army continues to use a three-level maintenance system, as does the Navy. However, Navy officials said they also changed their system in the mid-1990s by introducing the integrated maintenance concept. This approach, in contrast to the Air Force approach, increased the amount of aircraft modernization and other work performed at the base level during a time when funding for depot-level work was being reduced. However, officials believed this change overloaded the base-level maintenance systems and ultimately lowered reported MC rates. From fiscal year 1988 to fiscal 2001, DOD reduced the number of major depots from 38 to 19. During this same period, the maintenance workforce was reduced by about 60 percent (from 156,000 to 64,500). These reductions were the result of overall force structure reductions since the end of the Cold War, as well as DOD’s desire to reduce costs by relying more on the private sector for the performance of depot maintenance. 
We have raised concerns that DOD’s downsizing of its depot infrastructure and workforce was done without sound strategic planning and that investments in facilities, equipment, and personnel in recent years have not been sufficient to ensure the long-term viability of the depots. Other operational factors can also affect MC rates. For example, from 1997 to 2000, the Air Force’s B-1 bomber had a major power system problem that lowered MC rates by 12 points. To address the problem, the Air Combat Command instituted a system of frequent video teleconferences among the offices involved in the maintenance response to provide more intensive management of the response. This approach worked, as the MC rate climbed by 9 points by 2002. Management integration between the operations and logistics sides of the organization was also viewed as key. Good coordination between these two groups is essential because of the complex and multifaceted causes of MC problems. Finally, Air Force officials noted that some of the problems with Air Force MC rates could be explained by a change in reporting procedures. During the mid-1990s, the Air Force returned an aircraft to MC status after it was repaired but prior to the actual check flight to ensure that it was operating correctly. Now, the aircraft must pass the check flight before being classified as MC. Officials believe that this change would tend to lower MC rates slightly. Officials from all services cited underfunding of spare parts inventories, maintenance depots, and other aspects of the maintenance and supply systems as a key problem. For example, Army and Navy officials told us that they often use remanufactured parts instead of new parts to save money. In its Fiscal Year 2000 Performance Report, DOD reported that it had increased funding for spare parts and depot maintenance requirements.
For example, the report indicates that funding for depot maintenance increased from $5.58 billion to $7.01 billion from fiscal year 1997 to fiscal 1999 (the most recent year for which data were available). However, the report also acknowledges an unfunded requirement of about $1.18 billion in fiscal year 1999. Notwithstanding claims regarding the lack of funding for spare parts, we recently reported that when provided additional funds for spare parts, DOD was unable to confirm that those additional funds were used for that purpose. Pressures for more funding to maintain DOD’s aircraft are likely to grow in coming years as the aircraft inventory continues to age. The Congressional Budget Office estimates that spending for aircraft operations and maintenance increases by 1 to 3 percent for every additional year of age. Despite the importance of MC and FMC goals as measures of readiness and logistical funding needs, we found widespread uncertainty over how the services’ MC and FMC goals were established and who is responsible for establishing them, as well as basic questions about the adequacy of those goals as measures of aircraft availability. The services could not explain and document how the original MC and FMC goals were set for any of the aircraft in our review. Furthermore, some officials questioned which goals are the best to use in reviewing aircraft availability: MC goals, FMC goals, or perhaps a new type of goal. DOD’s instruction provides little or no guidance on these and other key issues. DOD officials told us that the instruction has not been updated to reflect the current environment of increased deployments and other changes since the end of the Cold War. MC and FMC goals are used throughout DOD as fundamental measures of readiness, as indicators of operational effectiveness, and to help determine the size of spare parts inventories and other logistical resources needed to maintain aircraft availability.
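The Congressional Budget Office aging estimate cited above compounds quickly over time. A minimal sketch, with a hypothetical baseline and horizon chosen only for illustration:

```python
# Sketch of compounding operations and maintenance (O&M) cost growth
# under the CBO's estimate of 1 to 3 percent per additional year of
# aircraft age. The $7 billion baseline and 10-year horizon are
# hypothetical, not figures from the report.

def project_om_spending(baseline, years_older, growth_rate):
    """Compound annual O&M spending as the fleet ages `years_older` years."""
    return baseline * (1 + growth_rate) ** years_older

baseline_billions = 7.0
for rate in (0.01, 0.03):  # CBO's low and high per-year-of-age estimates
    projected = project_om_spending(baseline_billions, 10, rate)
    print(f"{rate:.0%} per year of age: ${projected:.2f} billion after 10 years")
```

Even at the low end, a decade of further aging adds roughly 10 percent to annual costs; at the high end, roughly 34 percent.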
As a result, the level at which the goals are set can influence not only perceptions about operations and readiness, but also millions of dollars in spending for logistical operations. In addition to the requirement to maintain MC and FMC data set forth by DOD Instruction 3110.5, the services use MC and FMC measures as a component of overall unit readiness determinations under DOD’s Global Status of Resources and Training System. The System requires commanders to rate their unit’s readiness at levels 1 (highest) through 5 on the basis of a combination of their professional judgment and the readiness ratings in four specific areas: personnel, training, equipment on hand, and equipment condition. MC and FMC measures are used to determine the ratings for equipment condition. For example, the Army measures equipment condition (termed “serviceability” by the Army) for aircraft by using the FMC rate. An FMC rate of 75 percent or more is required for a level-1 readiness rating, the highest available. Congress also requires DOD to include Status of Resources and Training System information on the condition of equipment as well as specific information on equipment that is not mission capable in its quarterly readiness reports to Congress. These reports assist Congress in its general responsibilities for overseeing DOD readiness and operations. Similarly, according to DOD and service officials, MC and FMC goals are used as management tools within DOD units to diagnose problems and motivate personnel. For example, officials in the Air Combat Command told us that their use of lower interim goals beginning in fiscal year 2000 was an attempt to raise unit morale that had suffered as a result of their inability to meet the actual goals owing to shortages of personnel and spare parts. 
In this regard, DOD’s instruction specifically calls for the services to use the goals and condition status measurements, such as MC and FMC, to review maintenance and supply effectiveness and to have programs to identify and correct problems with systems and equipment. Service officials told us that the goals also affect DOD’s funding levels because the goals are used to help determine the size of spare parts inventories and other logistical resources needed. Higher goals require more money to maintain parts inventories and other resources needed to achieve the goals. For example, officials told us that in the early 1990s, a $100 million contract for logistics support for one Air Force aircraft contained an MC goal of 90 percent. During this period, the contractor kept supply bins full of parts and MC goals were met. However, in the mid-1990s a new contractor was brought in, and the MC goal was dropped to 85 percent. According to Air Force officials, their decision to lower the MC goal by 5 percentage points allowed the contractor to lower spare parts inventories and reduced the price of the maintenance contract by $10 million. However, MC rates also dropped and eventually fell below the new goal. The services have developed mathematical models to determine the size and cost of the spare parts inventories needed to support various levels of MC and FMC goals and other measures of aircraft availability. For example, the Navy uses a model called “Readiness Based Sparing” that takes a given FMC goal and determines the level of funding and spare parts inventories needed to reach that goal. Such models are useful in the case of spare parts inventories. However, we were not able to identify any models in widespread operational use that integrated the other influences on MC rates, such as maintenance personnel assigned, into an overall model able to predict the impact of changes in those resources on MC and FMC rates.
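The flavor of such spare-parts models can be conveyed with a single-part, single-site sketch: find the smallest stock level whose expected backorders keep approximate availability at or above a target. The Poisson pipeline model and every figure below are illustrative assumptions, not the Navy’s actual Readiness Based Sparing model:

```python
import math

# Single-part sparing sketch: demands outstanding in the repair pipeline
# are modeled as Poisson, and fleet availability is approximated as
# 1 - (expected backorders / fleet size). An illustrative simplification.

def expected_backorders(stock, pipeline_mean, kmax=200):
    """E[(X - stock)+] for X ~ Poisson(pipeline_mean)."""
    ebo, pmf = 0.0, math.exp(-pipeline_mean)   # P(X = 0)
    for k in range(1, kmax + 1):
        pmf *= pipeline_mean / k               # P(X = k) via recurrence
        if k > stock:
            ebo += (k - stock) * pmf
    return ebo

def spares_for_target(target_avail, pipeline_mean, fleet_size):
    """Smallest stock level meeting the availability target."""
    stock = 0
    while 1 - expected_backorders(stock, pipeline_mean) / fleet_size < target_avail:
        stock += 1
    return stock

# Hypothetical: 4 parts in repair on average, 20-aircraft fleet, 85% target
print(spares_for_target(0.85, pipeline_mean=4.0, fleet_size=20))
```

Raising the target availability raises the required stock level, which is the mechanism by which higher MC and FMC goals translate into larger inventories and higher costs.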
Army and Air Force officials told us that they had recently developed such integrated models, and they are currently in limited use to test their validity. Navy officials told us that they did not yet have an integrated model. The potential amount of funds affected by the level at which MC and FMC goals are set is large. The military services estimated that operations and maintenance spending for aircraft spares and repair parts exceeded $7 billion in fiscal year 2001. This figure does not include spending from other sources such as procurement and working capital funds. Precisely how existing MC and FMC goals were established is unknown. DOD officials said that a combined DOD and military service team establishes operational requirements and MC goals during the acquisition process. After approval, these requirements are recorded in the Operational Requirements Document or other documents associated with the process. According to officials, part of this process involves an engineering analysis of the expected operational availability of the aircraft and the underlying level of maintenance support elements needed. “Operational availability” is an engineering term referring to the probability that equipment is not down owing to failure. In comparison, MC and FMC goals represent the expected percentage of time that an aircraft will be able to perform at least one or all of its missions, respectively. Service officials reviewed the acquisition documents for many of the aircraft in our review, but were unable to explain and document how the actual MC and FMC goals were chosen. According to officials, many of these aircraft were acquired 20 to 30 years ago, under processes that have changed over the years, and with no clear documentation of the basis for the specific goal chosen. Moreover, there was often confusion over which organizations were responsible for setting the goals.
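The MC and FMC measures themselves are conventionally computed from aircraft status hours, consistent with the definitions above: MC time includes both full mission capable hours and partially mission capable hours. A minimal sketch with hypothetical figures:

```python
# Sketch of the conventional computation of MC and FMC rates from status
# hours. MC time = full mission capable (FMC) hours plus partially
# mission capable (PMC) hours; the denominator is the hours the unit
# possessed the aircraft. All figures are hypothetical.

def mc_rates(fmc_hours, pmc_hours, possessed_hours):
    """Return (MC rate, FMC rate) as percentages of possessed hours."""
    mc = 100 * (fmc_hours + pmc_hours) / possessed_hours
    fmc = 100 * fmc_hours / possessed_hours
    return mc, fmc

mc, fmc = mc_rates(fmc_hours=540, pmc_hours=90, possessed_hours=720)
print(f"MC rate {mc:.1f} percent, FMC rate {fmc:.1f} percent")
```

In this example the aircraft could perform at least one mission 87.5 percent of the time but all of its missions only 75 percent of the time, illustrating why the two measures can diverge.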
For example, Navy officials pointed to a 1996 Center for Naval Analyses study that attempted to determine how the MC and FMC goals for Navy aircraft were originally computed. According to the study, however, “no one knows the origin of the numbers or the method used to compute them. Now, the numbers are routed to knowledgeable people for revision, which are made without documenting the rationale for the changes.” In a July 17, 2002, letter to us, the Navy further explained that it believed that the MC goals were established in the early 1980s “to be in line with the reported status quo for the day” with “no analytical rigor applied at the time of their birth.” We requested a written explanation of how the goals were set because, despite repeated referrals to various offices over several months, no Navy official could explain how the goals were established or identify the responsible office. According to Navy officials, there was uncertainty between the program and policy offices as to who is responsible for establishing the goals and who should answer our questions. Similarly, Army officials could not explain how their goals were set, and two separate Army organizations believed the other was responsible for setting the goals. The Army’s written response to our request for an explanation of how the goals were set (dated July 31, 2002) was prepared by officials from the Army’s Training and Doctrine Command and forwarded to us by a letter from the Office of the Deputy Chief of Staff for Logistics. The Deputy Chief of Staff’s letter states that MC goals for Army aircraft are extracted from the System Readiness Objective contained in the Operational Requirements Document established during an aircraft’s acquisition, and that the Training and Doctrine Command is responsible for establishing the System Readiness Objectives. 
However, the Training and Doctrine Command’s letter states that it does not set System Readiness Objectives and that the Deputy Chief of Staff for Logistics is responsible for establishing readiness goals. Nonetheless, the Training and Doctrine Command researched the operational requirements documents for the Army aircraft in our review in an attempt to answer our question about how the MC and FMC goals were set. The Command’s letter identified the operational availability requirements for most of the aircraft but did not explain how these requirements were set or make any reference to the MC or FMC goals. Officials from the Office of the Deputy Chief of Staff for Logistics told us that the Army is considering changing the FMC goals for all its aircraft to 75 percent to match the requirement for the highest-level readiness rating for equipment serviceability under the Global Status of Resources and Training System’s criterion. They did not know how the 75-percent-readiness-rating criterion was chosen. Air Force officials also could not explain how the initial MC and FMC goals for their aircraft were established. Officials from the Air Combat Command—responsible for Air Force fighters, bombers, and electronic command/control aircraft in our review—told us that they could find no historical record of the process used to establish most of the goals. Similarly, officials from the Air Mobility Command—responsible for the cargo and tanker aircraft—stated that the Command was formed in 1992 out of elements from the Military Airlift and Strategic Air Commands and did not know how the previous Commands had established the goals. According to these officials, each of the major Commands that operate aircraft and other major weapon systems in the Air Force is responsible for establishing its own MC goals, and no one has published a standardized methodology to use. 
Moreover, some of the documentation related to the goals was lost when the Military Airlift and Strategic Air Commands were deactivated. Similar to the Navy, however, officials from both Commands believed that the goals were set on the basis of the historical performance of similar aircraft and/or subjective Command judgments. While Air Force officials could not explain how the initial goals were established, they told us that their annual reviews of the goals are based on a mix of historical trend analysis and requirements reviews. The Air Force is the only service that conducts formal reviews of its goals each year. According to officials from the Air Mobility and Air Combat Commands, until 1997-98, reviews of the goals in both Commands were based on an analysis of actual historical MC and FMC rates. For example, analysts at the Air Mobility Command compared the goals with the actual rates for the previous 2 years. Depending upon actual performance, the goal could then be changed, sometimes on the basis of subjective judgments. According to Air Combat Command officials, the MC goal for the B-2 bomber was set in fiscal year 2000 using an analysis of historical rates and command judgment. The first B-2 was delivered in 1993. In 1997-98, the two Air Force Commands began to develop so-called “requirements-based analyses” to review the standards. According to officials at the Air Combat Command, for example, it was recognized that the historical approach to reviewing the standards can perpetuate relatively low standards because it simply accepts the low funding levels and other problems that may lower MC rates without focusing on actual mission needs. The new approach attempts to factor in wartime operational requirements, peacetime flying hour requirements for pilot training, and other such requirements. A mix of both approaches is currently used by the commands to review the goals. 
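The historical-trend portion of these reviews can be sketched as a simple comparison of a goal with the trailing 2 years of actual rates. The comparison rule and the figures below are illustrative assumptions, not the Commands’ actual procedures:

```python
# Sketch of a historical-trend goal review: compare the current goal
# with the average of the previous 24 monthly MC rates. Hypothetical data.

def review_goal(goal, monthly_rates):
    """Return the trailing average rate and its gap relative to the goal."""
    avg = sum(monthly_rates) / len(monthly_rates)
    return avg, avg - goal  # negative gap: performance fell short of the goal

two_years = [78, 80, 77, 81, 79, 76] * 4  # 24 hypothetical monthly MC rates
avg, gap = review_goal(goal=85, monthly_rates=two_years)
print(f"2-year average {avg:.1f} vs goal 85 (gap {gap:+.1f} points)")
```

A persistent negative gap is what, under the historical approach, could lead a command to lower the goal; the requirements-based approach described above instead tries to anchor the goal in wartime and training mission needs.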
The services also differed in their treatment of other important aspects of managing the goals, such as whether to vary the goals on the basis of an aircraft’s deployment posture. The Navy was the only service to tier its goals on the basis of its traditional practice of cyclical deployment schedules on board its ships and aircraft carriers. Operational aircraft in the Navy follow a cyclical pattern of deploying to sea on aircraft carriers and other vessels for a set period of time, such as 6 months. Once the deployed units are replaced, they experience a stand-down period during which they recover from the rigors of deployment until it is time to begin preparing for the next movement. The Navy varies the intensity of its maintenance and its MC and FMC goals according to this pattern. Navy aircraft more than 90 days away from a deployment have goals that are 5 percentage points lower than aircraft within 90 days of a deployment, and aircraft actually deployed have goals that are 5 percentage points higher than those within 90 days of deploying. In comparison, aircraft in the Marine Corps and other services have a level approach to maintenance where the goals do not vary, and maintenance is kept at a relatively constant level. Navy officials believed that the cyclical approach to maintenance could lower overall MC rates over time compared with the level approach. This is because of the reduced maintenance attention when the aircraft are not deployed. Some officials questioned whether the MC and FMC goals are adequate measures of an aircraft’s availability. For example, officials from the Air Force’s Air Mobility Command stated that they focused on the MC goal and not the FMC goal because their primary readiness objective is the specific mission currently assigned, not every possible mission the aircraft was designed for. Moreover, the Air Combat Command did not even establish FMC goals. 
This Command was the only one we reviewed that did not set FMC goals for its aircraft. Air Combat Command officials told us that they could find no documentation to explain why the Command did not establish FMC goals. In contrast, Army officials stated that their units focus primarily on the FMC goal because it is directly connected to readiness ratings under the Status of Resources and Training System. Furthermore, Navy officials stated that the military is moving away from the MC and FMC goals in newer aircraft, such as the Joint Strike Fighter. This is because the MC and FMC goals provide only a limited historical perspective and do not address issues that are important to war-fighting commanders, such as how often an aircraft can fly missions over the course of a day and the probability that the aircraft will complete its mission. The Joint Strike Fighter, for example, is using a concept called “mission reliability” instead of MC and FMC goals. Mission reliability is the probability that the Joint Strike Fighter will complete its required operational mission without a failure. According to Navy officials, the predictive value of this new measure and the information it provides on flight frequency and reliability are very valuable to war-fighting commanders and better for mission-planning purposes than the MC and FMC measures. Officials said that the mission-reliability concept could be used throughout DOD’s inventory of aircraft. DOD Instruction 3110.5 provides only vague or no guidance on many of the key issues raised in this report. For example, the instruction requires each military service to establish availability goals for its mission-essential systems and equipment, and a corresponding set of condition status measures relative to those goals. The instruction specifically identifies MC, FMC, and other specific capabilities as measures that the services must maintain.
However, it does not identify the specific goals that must be established (MC, FMC, or any other) or the primary readiness objective to be served. In this regard, the instruction states that the services should assume planned peacetime usage in setting the goals. According to Air Force officials, peacetime usage can be more taxing than wartime usage because of the extra training and other requirements. Air Combat Command officials told us they believed the instruction was unclear about which goals, including the FMC goal, were required to be established. The instruction also provides little guidance on the methodology to be used in setting the goals. It states that the services should provide estimates of the maximum performance that is achievable, given the design characteristics of the aircraft, and that full funding and optimal operation of the logistics support system should be assumed. Service officials said they believe that actual levels of funding, personnel, spare parts inventories, and other key resources should be factored into the process of setting the goals, since full funding has not been provided for years. The instruction is silent on whether it is appropriate to use historical trends of similar aircraft in determining the goals, as opposed to a more analytical approach using actual requirements, for example. The instruction is also silent on whether the aircraft availability goals should vary on the basis of the aircraft’s deployment posture. Moreover, it includes no requirement for the services to identify the readiness and cost implications of setting the goals at different levels, to help clarify the pros and cons of available choices and the guiding principles used to decide on those choices. Similarly, the instruction provides little organizational structure for the goal-setting process in DOD.
For example, it does not require the services to identify one office as the coordinating organization for goal-setting and other related activities. Furthermore, it does not require the services to document the basis for the goals chosen or outline any of the basic historical documentation that should be maintained for goal-setting and other key activities during the process. According to DOD officials from the office responsible for the instruction, DOD Instruction 3110.5 dates back to the 1970s, when readiness concerns had reached a high point. The focus was on getting the services to set benchmark readiness goals, and the instruction gave them latitude to choose those goals, the methods for setting them, and the processes for managing them. The instruction was revised in 1990. However, officials told us that it has not been updated to reflect the current environment of frequent deployments and other changes since the end of the Cold War, and some now consider it a relic. We performed our work from February through November 2002 in accordance with generally accepted government auditing standards. Final publication of this report was delayed because the terrorist attacks of September 11, 2001, and DOD’s preparations for potential conflict in Iraq disrupted DOD’s report review and classification process. While many of DOD’s key aircraft are not meeting MC and FMC goals, it is difficult to determine how significant this problem is because of the uncertainty about, and lack of documentation of, the basis for the existing goals. Moreover, without knowing the basis for the existing goals, it is also difficult to know whether that basis is appropriate for the demands of the new defense strategy. DOD Instruction 3110.5 fails to clearly define the specific availability goals that all services must establish.
Without the perspective provided by clear, consistent, and up-to-date goals, the perceptions of actual performance are subject to continuing uncertainty and disagreement, and confidence in the funding requests based on those perceptions is undermined. Moreover, the lack of a standard methodology for the services to use in setting the goals removes a safeguard for objectivity from the process, risking the possibility that the methods used do not realistically reflect actual requirements. This risk is increased when there is uncertainty or disagreement over basic questions such as whether it is appropriate to base the goals on a historical analysis or an analysis of actual requirements, and whether full funding of logistical support systems should be assumed in an era of reduced funding. Furthermore, the absence of information on the readiness and cost implications of setting the goals at different levels results in a lack of understanding of the pros and cons of available choices and the guiding principles used to make those decisions. Ultimately, inappropriately set goals can unnecessarily raise or lower the cost of spare parts inventories and other logistical resources by millions of dollars. Also, DOD’s instruction requires the services neither to designate one office to coordinate the establishment and maintenance of aircraft availability goals, nor to document the basis for the goals chosen or other key issues in the process. Clear responsibilities and requirements in these areas are fundamental to the effective management of any performance system. Without the transparency provided by adequate documentation of the process, neither DOD nor the Congress can be reasonably assured that the services have selected the optimal goals on the basis of preferred principles. 
To ensure that aircraft availability goals and their performance measures are appropriate to the new defense strategy and based on a clear and defined process, we recommend that (1) DOD and the services determine whether different types of aircraft availability goals are needed, (2) as appropriate, DOD and the services validate the basis for the existing MC and FMC goals, and (3) the Secretary of Defense revise DOD Instruction 3110.5 to clearly define the specific aircraft availability goals required to be established by the military services and their accompanying performance measures; establish a standard methodology identifying objective principles of analysis to be used by all services in setting the goals, including an identification of the readiness and cost implications of setting the goals at different levels; and require each service to identify one office to act as a focal point for coordinating the development of the goals and for maintaining a documentary record of the basis for the goals chosen and other key decisions in the goal-setting process. In written comments on a draft of this report, DOD concurred or partially concurred with all our recommendations. The department agreed to determine whether different types of aircraft availability goals are needed, including the option of tailoring such goals to unique military service and mission requirements. DOD also agreed to validate the basis for the existing goals, including the DOD Instruction 3110.5 requirement that full funding of support systems be assumed in establishing availability goals. In addition, DOD indicated that it would explore alternative methodologies for setting goals, such as one based on unit deployment cycles currently in use by the Navy. DOD partially concurred with our recommendation for a series of revisions to DOD Instruction 3110.5. 
It agreed with our recommendation that the instruction be revised to require each service to designate a focal point for the development and historical documentation of the goal-setting process. However, DOD did not agree with the part of our recommendation calling for it to include the performance measures associated with the aircraft availability goals in the instruction. DOD believed that that requirement implied that those performance measures should be the sole or primary measure of the overall state of materiel readiness. That was not our intent. Our recommendation is meant to ensure that the goals and accompanying performance/status measures selected are clearly defined in the instruction. As pointed out in the report, this is not currently the case. We agree that determinations of overall materiel readiness require the consideration of a variety of factors, such as maintenance manning and supply fill rates, as well as metrics such as an aircraft’s availability. However, we believe that the instruction should continue its current requirement to include performance/condition status measures relative to those goals. Clearly identifying the goals that are sought and their performance measures in the instruction will help avoid further uncertainty and disagreement over the level of basic aircraft performance, and does not preclude the consideration of other metrics in broader assessments of overall readiness. For these reasons, we believe no change to our recommendation is needed. DOD also disagreed with the part of our recommendation calling for the Secretary of Defense to revise the instruction to establish a standard methodology identifying objective principles of analysis to be used in setting the goals. 
It believed that the services should establish the detailed analytical methodology because the types of goals and their basis may vary by service, and the services have a better understanding of the differences and complexities of their individual environments. We agree with the need for some leeway at the service level to handle individual differences between them. However, we continue to believe that all services should adhere to a standardized set of overarching principles of analysis in order to safeguard objectivity and transparency in the goal setting process. Such principles could be identified in coordination with the services during the department’s planned evaluation of the basis for the current goals and alternative methodologies. The services could then develop detailed methodologies consistent with these principles but tailored to their individual environments. Consequently, no change to our recommendation is required. The department’s comments are reprinted in appendix III. DOD also provided technical comments, which we incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (757) 552-8100 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix IV. Legend: MC = mission capable, FMC = fully mission capable, FY= fiscal year, EC/C = electronic command and control. 
Fiscal year 2002 rates are as of February for the Army, Navy, and Marine Corps, and March 31 for the Air Force. Aircraft ages are as of September 30, 2001, for the Navy/Marines; December 31, 2001, for the Air Force; and April 2002 for the Army. Aircraft costs/flying hour are as of January 2001 for the Army, and September 30, 2001, for the Air Force, Navy, and Marines. To identify Department of Defense (DOD) and service policies and practices regarding mission capable (MC) goals and rates, we obtained briefings; reviewed DOD and service regulations and prior reports by us and others; and interviewed officials at the Office of the Secretary of Defense; the Office of the Joint Chiefs of Staff; headquarters offices of the Army, Navy/Marine Corps, and Air Force; and aviation commands and other locations as appropriate. To determine whether key DOD aircraft were meeting established MC and FMC goals, we requested that each service identify its key active-duty operational aircraft. We excluded reserve units from the scope of our review, as well as active-duty training units and executive aircraft used to transport officials on official business. The resultant list included a total of 46 different models of aircraft from the four military services, which we categorized into five basic types: bombers, cargo/tanker aircraft, electronic command/control aircraft, fighter/attack aircraft, and helicopters. Three aircraft models (F/A-18A, F/A-18C, and EA-6B) were used by both the Navy and Marine Corps. For our review, we counted the Navy and Marine Corps versions of these aircraft as separate models, resulting in a total of 49 aircraft models for review. We requested MC and full mission capable (FMC) goal and rate data, aircraft age and cost, and other data for these aircraft back to 1991 to provide a historical perspective on goals and rates. The Army and Air Force provided comprehensive data from fiscal year 1991 to mid-fiscal year 2002. 
However, the Navy and Marine Corps could provide data separated by service only from fiscal year 1998 forward. These services changed their reporting system in 1998 and were unable to provide comparable data for prior years. As a result, we focused our report on the 5-year period beginning in fiscal year 1998. However, we included the full array of Army and Air Force data in appendix I. We used these data to conduct analyses of whether the aircraft were meeting their goals. We also provided each service with these databases for review, and they confirmed the results for accuracy. To identify the causes of difficulties in meeting MC and FMC goals, we reviewed prior reports by us and others and conducted a variety of comparative analyses of our data by service, aircraft type, model, age, cost, and fiscal year. We then held discussions with each service to gain their perspectives on the causes of observed difficulties in meeting the goals. To determine whether DOD has a clear and defined process for setting MC and FMC goals, we reviewed DOD Instruction 3110.5 and other regulations and conducted discussions with officials from the Office of the Secretary of Defense and service headquarters in Washington, D.C., and with officials from the headquarters of the Air Force’s Air Mobility and Air Combat Commands; the Naval Air Systems Command; and Army Training and Doctrine Command officials at Fort Rucker, Alabama. Because of the difficulty in obtaining clear information on this issue, we also wrote formal letters of inquiry to the Secretaries of the Army and Navy requesting clarification of how the goals were established. Their responses to those letters of inquiry were used in preparing our report. We performed our work from February through November 2002 in accordance with generally accepted government auditing standards. 
The final publication of this report was delayed by the impact on DOD’s report review and classification process of the terrorist attacks on September 11, 2001, and DOD’s preparations for potential conflict in Iraq. In addition to those named above, Bernice Benta, Katherine Chenault, and R.K. Wild made key contributions to this report.
The attacks on September 11, 2001, show that threats to U.S. security can now come from any number of terrorist groups, at any number of locations, and in wholly unexpected ways. As a result, the Department of Defense (DOD) is shifting to a new defense strategy focused on dealing with uncertainty by acting quickly across a wide range of combat conditions. One key ingredient of the new strategy is the availability of aircraft to carry out their missions. Key measures of availability include the percentage of time an aircraft can perform at least one or all of its assigned missions, termed the "mission capable" (MC) and "full mission capable" (FMC) rates, respectively. GAO examined whether key DOD aircraft have been able to meet MC and FMC goals in recent years, and DOD's process for setting aircraft availability goals. Less than one-half of the 49 key active-duty aircraft models that GAO reviewed met their MC or FMC goals during fiscal years 1998-2002. The levels of mission capability varied by military service and type of aircraft, and the levels at which the goals were set also varied widely, even among the same type of aircraft. However, the MC and FMC goals for each model changed little over time. Since 1998, only 11 of 49 aircraft models (22 percent) experienced a change to their goals. Seven of the changes were to raise the goals to higher levels. Difficulties in meeting the goals are caused by a complex combination of logistical and operational factors. Despite their importance, DOD does not have a clear and defined process for setting aircraft availability goals. The goal-setting process is largely undefined and undocumented, and there is widespread uncertainty among the military services over how the goals were established, who is responsible for setting them, and the continuing adequacy of MC and FMC goals as measures of aircraft availability. 
Uncertainty and the lack of documentation in setting the goals ultimately obscures basic perceptions of readiness and operational effectiveness, undermines congressional confidence in the basis for DOD's funding requests, and brings into question the appropriateness of those goals to the new defense strategy. DOD guidance does not define the availability goals that the services must establish or require any objective methodology for setting them. Nor does it require the services to identify one office as the coordinating agent for goal setting or to document the basis for the goals chosen. DOD officials told GAO that the guidance has not been updated since 1990 to reflect the new security environment of increased deployments and other changes since the end of the Cold War.
The federal government contracts for a variety of services, from elevator maintenance to program management support, and often has a need to continue these services beyond the lifespan of an individual contract. However, in certain situations, it may become evident that a base contract and any option years will expire before a subsequent contract to meet the same need can be awarded. In these cases, because of time constraints, contracting officers generally use one of two options: (1) extend the existing contract for up to 6 months or (2) award a short-term stand-alone contract to the incumbent contractor on a sole-source basis to avoid a lapse in services. While these contracting options have been informally referred to as bridge contracts by some in the acquisition community, the Federal Acquisition Regulation (FAR) neither formally defines bridge contracts nor requires that they be tracked. For the purposes of this report, we established the following definitions: Bridge contract. An extension to an existing contract beyond the period of performance (including option years), or a new, short-term contract awarded on a sole-source basis to an incumbent contractor to avoid a lapse in service caused by a delay in awarding a follow-on contract. Predecessor contract. The contract in place prior to the award of a bridge contract. Follow-on contract. A longer-term contract that follows a bridge contract for the same or similar services. This contract can be competitively awarded or awarded on a sole-source basis. Contract extensions and the award of stand-alone bridge contracts are established in different ways. If a contracting officer needs a bridge contract and opts to extend an existing, predecessor contract, the contracting officer may use a number of different authorities to do so. 
If the predecessor contract included the “option to extend services” clause, the contracting officer could use this clause to extend the contract for up to 6 months under the FAR. If the contracting officer determines that a new short-term sole-source contract should be awarded to avoid a gap in services, the FAR generally requires that the contract award be supported by a written justification known as a justification and approval document (J&A). The J&A must include sufficient facts and rationale to justify the use of a sole-source contract and include, among other things, the following information: the nature or description of the action being approved; a description of the supplies or services required to meet the agency’s need, including the estimated value of the contract; the statutory authority being cited to justify a noncompetitive contract—for example, urgency or the availability of only one source; a demonstration that the proposed contractor’s unique qualifications or the nature of the acquisition requires use of the authority cited; and a determination by the contracting officer that the anticipated cost to the government will be fair and reasonable. While OMB has stated that noncompetitive contracts can play an important role in helping agencies address needs that arise during emergencies, we and others have noted that competition is the cornerstone of a sound acquisition process, and OMB has issued guidelines for federal agencies to increase competition and reduce their spending on sole-source contracts. Further, the FAR prescribes policies and procedures to promote full and open competition. There are few, if any, federal contracting reviews or reports focused solely on bridge contracts. However, we and others have identified such contracts in prior reviews and, in some cases, reported on challenges related to their use. 
For instance, in an August 2011 report on acquisition planning, we reported that a prior GAO bid protest decision found that the Department of Homeland Security’s U.S. Customs and Border Protection had not properly justified an $11.5 million bridge contract and had failed to engage in reasonable advance acquisition planning. In a March 2012 report on competition, we found that 18 of the 111 J&As we reviewed were for bridge contracts with a total value of over $9 billion. We found that these bridge contracts were caused by delays in the acquisition planning process, unexpected expansion of requirements, and bid protests. In March 2014, we issued a report on noncompetitive contracts awarded on the basis of urgency. We found that 12 of the 34 contracts we reviewed were bridge contracts. The average period of performance for these 12 contracts was 11 months, with a total contract value of over $466 million. Additionally, in a March 2010 report on competition for services contracts, the Institute for Defense Analyses reported that nearly one in four sole-source contracts reviewed were bridge contracts. That report noted that bridge contracts represented a potentially large cost to DOD due to process inefficiencies such as the cost of administering the bridge contracts, the strain on the limited DOD contracting workforce because bridge contracts must be justified and awarded while the follow-on contract is being planned, and the loss of benefits associated with competition during the period that the bridge contracts are in place. The agencies we reviewed had limited or no insight into their use of bridge contracts. None of the agencies have agency-level policies to manage and track their use of bridge contracts, nor do their acquisition regulations define bridge contracts. HHS officials told us that their agency has no overarching policy because the agency does not have a standard definition for bridge contracts. 
Officials at DOD said that, at the department level, the agency did not have any policies because bridge contracts had not previously been raised as a specific concern at the department. DOJ officials indicated they see defining bridge contracts as a government-wide issue, and officials from one of their components told us that the concept of defining bridge contracts was a new one to them. HHS officials also stated that the agency has some visibility into high-dollar bridge contracts through the FAR-required reviews of J&As. Two of the eight components—the Navy and DLA—established policies in 2012 and 2013, respectively, regarding the use of bridge contracts. Both components’ policies were established to reduce reliance on bridge contracts and note that bridge contracts can be an impediment to competition. DLA’s policy further states that bridge contracts may be indicative of a lack of adequate preparation for follow-on acquisitions. DLA officials we spoke with told us that there was concern at DLA regarding the impact bridge contracts could have on competition, since they effectively delay competition by extending existing contracts or awarding sole-source contracts to incumbent contractors. Officials said that they hope the policy will increase competition at DLA by focusing management attention on the use of bridge contracts and tracking their use. In both cases, these components’ policies go beyond the standard J&A requirements for sole-source contracts to specifically address bridge contracts. Features of these components’ policies on bridge contracts are included in table 1. As the table shows, DLA’s definition of bridge contracts explicitly includes contract extensions, whereas the Navy’s has additional guidance as to when contract extensions are considered bridges. A DLA official told us that they included contract extensions in their definition because extensions still enable officials to bridge a gap in service without competition. 
The Navy report to the Office of the Deputy Assistant Secretary of the Navy, Acquisition and Procurement includes contract numbers, periods of performance for the predecessor and bridge contracts, dollar values, and the rationale supporting the use of a bridge contract, among other information. The DLA report to the Acquisition Operations Division includes contract numbers, periods of performance for the bridge contract, dollar values, the number of bridge contracts awarded for the requirement, and other information. According to Navy officials, the department is monitoring the contract values of bridge contracts awarded. For example, officials told us that in fiscal year 2014 the Navy made bridge contract awards in excess of $1.6 billion. Navy officials told us that while it is too early to quantify the effects, the implementation of the policy has brought about a cultural shift away from more frequent use of bridge contracts and helped significantly curb prolonged use of bridge contracts. According to a DLA official responsible for compiling bridge contract information, DLA made $1.3 billion in bridge contract awards in fiscal year 2014. DLA officials also told us that they were seeing reductions in the use of bridge contracts based on an internal review process. Increased attention to bridge contracts, according to a DLA official, sends a message to program-level activities that DLA wants to reduce its use of bridge contracts, and requiring approval appears to be an effective deterrent to awarding bridge contracts when the program or contracting office lacks a good reason to do so, such as when the need stems from poor acquisition planning. In addition, one activity within the Army—the Health Care Acquisition Activity (HCAA), which was not included as part of our review—issued a policy memorandum in November 2008 that established a definition and an approval and tracking mechanism for bridge contracts. 
Similar to the policies at the Navy and DLA, HCAA’s policy was established due to concern over increasing reliance on bridge contracts at the activity. In particular, the policy stated that there was concern that bridge contracts, which prevent competition, were being awarded to expand the scope of the original requirement, thereby increasing costs. The policy and compliance branch at HCAA developed a tracking system to account for the number of bridge contracts awarded. According to HCAA officials, issuing the policy memorandum and requiring officials to report their use of bridge contracts has enhanced the activity’s ability to track bridge contract use and prevented the award of bridge contracts that increase the scope of work established by the predecessor contract. Federal internal control standards state that agencies should identify, analyze, and monitor risks associated with achieving their objectives, and that information needs to be recorded and communicated to management so that agency objectives can be achieved. One common procurement objective at federal agencies is to maximize competition. However, without a definition for bridge contracts, and strategies for tracking and managing their use, agencies are not able to fully identify and monitor the risks related to these contracts and therefore may be missing opportunities to increase competition. As we noted earlier, the FAR does not define bridge contracts. Staff from OMB’s Office of Federal Procurement Policy (OFPP), one of the entities responsible for initiating revisions to the FAR, acknowledged that the use of bridge contracts may introduce risks related to a lack of competition, such as the risk of higher contract prices. 
Similarly, contracting, program, and policy officials we spoke with stated that while bridge contracts are an important “tool in their toolbox” for ensuring continuity of services, some officials indicated that their prolonged use poses a risk to competition and that use of bridge contracts should be avoided when possible. DOD, DOJ, and HHS awarded bridge contracts to procure a diverse array of services, ranging from professional and administrative support to housekeeping. While most of the 73 contracts we reviewed had periods of performance of six months or less, when we reviewed 29 of these contracts in depth, we found that more than half of them actually had periods of performance far greater than initially apparent. Some spanned several years. Overall, roughly one-third of the 29 contracts had periods of performance that exceeded two years. The increased periods of performance also corresponded to increased contract values. In terms of pricing, contracting officers generally based the prices of the bridge contracts we reviewed on historical prices, and our price analysis found some instances of increased prices between the predecessor and bridge contracts. However, even after lengthy bridge contracts, we found that competition occurred in most cases. For 23 of the 26 cases where follow-on contracts were in place, they had been competitively awarded. In some cases, we were able to quantify savings from the competition of the follow-on contracts based on our price analysis. Competition is generally associated with achieving more favorable prices; our prior work and that of others have cited potential savings from competition. DOD, DOJ, and HHS awarded bridge contracts for a wide range of services. Figure 1 shows a breakdown of the types of services procured through the 73 bridge contracts included in our review. 
Over a quarter of the 73 bridge contracts we reviewed were awarded to ensure the continued provision of professional and administrative services, such as the employment of graphic artists and public affairs officers to assist in Navy recruiting efforts, as well as the organization of an NIH-sponsored coalition to adopt nationwide medical imaging standards. Another 23 percent of the bridge contracts we reviewed were awarded for information technology services, including base-wide multimedia and broadcast services for the Army; text mining software used by NIH officials to categorize and report on research findings; and technology used to track evidence at DEA. Fifteen percent of the bridge contracts were awarded by BOP to provide residential reentry services for eligible inmates, which includes employment, housing, and other opportunities to assist federal offenders’ transition back into their communities. Bridge contracts were also awarded for a variety of other services, such as utilities; housekeeping (which runs the gamut from janitorial services to pest control); research and development; and maintenance and repair of equipment or facilities. Most of the 73 bridge contracts had periods of performance of less than six months. However, when we conducted our more in-depth review of 29 of these contracts, we found that more than half involved one or more bridges that spanned much longer periods of time. Specifically, we found that 20 of the 29 contracts had additional bridges that were not apparent in our review of the initial documentation, and that more than half of the 29 contracts had periods of performance greater than six months. For example, during our initial review of J&A documentation for an NIH bridge contract for utility services at a research facility, we found no record of an additional four-month bridge contract. 
Through our interviews with contracting officials, however, we learned that another bridge contract had been awarded prior to the bridge contract we had identified. In another example, our initial review of a J&A for a residential reentry services contract at BOP indicated that contracting officials granted approval for a seven-month bridge contract, but upon further review, we found that there were five separate bridge contracts awarded over a 27-month period between the predecessor and follow-on contracts. Figure 2 depicts the multiple bridges and indicates the seven-month bridge that we had initially identified. In another example, our initial review of the J&A documentation for an Army bridge contract to procure computer support services indicated that contracting officials had granted approval for a bridge contract that was not to exceed 12 months. However, we later learned from speaking with officials and reviewing additional contract file documentation that the actual period of performance spanned 42 months, as shown in figure 3. The longer periods of performance observed in our in-depth review corresponded with an increased value of the contracts from what was apparent in our initial review. Most of the 73 contracts included in our high-level review had relatively small dollar values of less than $1 million, while 10 percent of the contracts had values greater than $10 million, with the highest valued at $79 million. Our in-depth review, however, revealed the value of the majority of the 29 bridge contracts included in that review to be greater than initially apparent. For example, the J&A for a bridge contract to provide computer support services at the Army, awarded to an Alaska Native Corporation and included as a part of our high-level review, had an estimate of $20 million. However, based on our in-depth review, the total reported value of the two bridge contracts awarded to bridge the gap in services was over $28 million. 
In another example, for a BOP contract for inmate reentry services, the J&A we initially reviewed estimated the bridge contract value to be about $454,000, but our in-depth review revealed that the value of the five stand-alone bridge contracts awarded for this requirement exceeded $1.2 million. In all, the value of the stand-alone bridge contracts awarded on the contracts we reviewed in depth was over $225 million. The fact that the full length of a bridge contract, or of multiple bridge contracts for the same requirement, is not readily apparent from the review of an individual J&A presents a challenge for those agency officials responsible for approving the use of bridge contracts. Approving officials signing off on individual J&As would not have insight into the total number of bridge contracts that may be put in place for a requirement. Without a definition and a policy for bridge contracts, J&A documentation generally provides information on the individual contract covered by the J&A, and on the anticipated period of performance and estimated contract value at the time of award, rather than a full picture of the cumulative time and cost associated with bridging a gap in services for a requirement. Overall, the average period of performance for the 73 contracts we reviewed at a high level was 8 months, and the average period of performance for the 29 contracts we reviewed in depth was 21 months. Figure 4 illustrates that the actual periods of performance for these 29 bridge contracts ranged from two weeks to over five-and-a-half years; about one-third of the contracts had periods of performance that exceeded two years. For 20 bridge contracts included in our in-depth review, contracting officials used the option to extend services clause to bridge, at least in part, the gap between the predecessor and the follow-on contract. 
This clause allows contracting officials to award more than one extension as long as the total extension does not exceed six months, but we found that in 5 of the 29 cases, three of which were in the Army, contracting officials did not comply with the clause, extending the contract beyond the six-month limit. For instance, in the example displayed in figure 3, Army contracting officials extended a bridge contract on two occasions, with each extension lasting six months, adding a year to the initial one-year period of performance. For both extensions, contracting officials cited the option to extend services clause. Additionally, we learned from contract file review documentation that contracting officials had attempted to extend this bridge contract a third time, but the local office of small business programs denied this request because the incumbent contractor no longer qualified as a small business. Because of the recurring nature of this issue at one location within the Army, we plan to report on the issue separately. While Navy bridge contracts spanned lengths of time similar to those of other agencies, we found that the Navy contract files had much more robust documentation and generally identified the reasons for the use of bridge contracts in each J&A. Some of the Navy’s J&A documentation included a full account of the length and cost of the bridge contract. For example, the J&A we reviewed for a nine-month bridge contract for electromagnetic spectrum management support included the periods of performance for the predecessor contract and one prior bridge contract, and provided a detailed account of the reasons for the delays. In addition, the Navy submitted a follow-on J&A to account for a four-month extension to the bridge contract. The initial J&As listed the value of the bridge contract at almost $4 million. 
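The six-month cap in the option to extend services clause applies to the cumulative length of the extensions, not to each extension individually, which is how two six-month extensions on a single contract can exceed it. A minimal sketch of that compliance check, assuming the standard FAR clause (52.217-8, Option to Extend Services); the extension durations below are illustrative, not drawn from the contract files:

```python
# Illustrative check of the FAR 52.217-8 six-month cumulative cap on
# extensions awarded under the option to extend services clause.

def within_extension_cap(extension_months, cap_months=6):
    """Return (total, compliant): the cumulative months of extension
    and whether that total stays within the cap."""
    total = sum(extension_months)
    return total, total <= cap_months

# Hypothetical durations: a contract extended twice, six months each
# time -- the pattern GAO observed on the Army bridge contract.
total, ok = within_extension_cap([6, 6])
print(total, ok)  # prints: 12 False
```

Each extension in isolation fits under the cap, so an approval process that looks only at individual actions, rather than the cumulative total, can miss the noncompliance.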
Our in-depth review showed that the combined value of all bridge contracts awarded for this requirement was $4.6 million, which was roughly similar to the estimate provided in the J&As. J&A documentation we reviewed from other components in our review generally did not detail information on the total cost of the bridge contract in the individual J&A. Further, in some cases, the combined value of the total bridge was more than had been conveyed in an individual J&A. For example, a J&A for a contract to provide scientific, logistical, and administrative support to NIH indicated that a contract extension for six months was estimated to cost $1.5 million. However, our in-depth review of this contract, as well as its predecessor and follow-on contracts, showed that the combined value of all bridge contracts awarded for this requirement was approximately $5 million. The FAR requires that contracting officers establish that the prices paid for contracts are fair and reasonable and expresses a preference for comparison of prices obtained through competition. Because competition is absent with the award of a bridge contract, contracting officers’ fair and reasonable price determinations become imperative. We were able to collect information on how a contracting officer determined price reasonableness for 73 bridge contracts. Most contracting officials noted that they compared the proposed prices to the historical prices paid for the same or similar services, or used more than one method to establish price reasonableness (see figure 5). To determine the extent to which the price paid by the government changed when a bridge contract was awarded to the incumbent contractor for the same services acquired under a previous contract, we conducted a price analysis for 10 of the 29 bridge contracts included in our in-depth review. 
We compared the rates of selected individual Contract Line Item Numbers (CLIN) for 5 of the 10 bridge contracts to those of their predecessor and competitive follow-on contracts. For 4 of the 10 bridge contracts, which provided residential reentry services to federal inmates, we compared the daily rate paid per inmate, and for the one remaining bridge contract included in this analysis, we compared the hourly price paid for three labor categories to those of the predecessor and follow-on contract. For the remaining 19 contracts, we were unable to establish a direct comparison of CLINs or labor categories due to changes to the scope of the requirement or pricing type of the predecessor, bridge, or follow-on contract. Although our analysis was by necessity limited to those CLINs or labor categories that could be traced across the predecessor, bridge, and follow-on contracts, it provided insights into pricing trends for similar services over time. We found that for 5 of the 10 contracts, the price paid for services on the initial stand-alone bridge contract or contract extension increased from that of the predecessor contract. For example, the monthly rate for administrative and information technology support services increased by nearly $47,000, or 6.4 percent, under a Navy bridge contract, awarded when the predecessor contract expired. However, when the contract was further extended, the price paid decreased by nearly $105,000, or 13.5 percent. Similarly, the CLIN for monthly materials and travel, under a stand-alone Army bridge contract for research and development testing and evaluation services, increased by approximately 5 percent, or $67,400, when compared to the rate of its predecessor contract. When that bridge contract was first extended, the price increased by another 16.6 percent, or $265,000; the price then remained unchanged under a subsequent extension.
Of the remaining 5 contracts, in 4 cases the price paid remained the same, and for the remaining contract the price decreased. Follow-on contracts were competitively awarded for 23 of 26 contracts included in our in-depth review. The 3 remaining follow-on contracts were awarded on a sole-source basis. As noted above, competition generally leads to more favorable pricing. Because bridge contracts are by definition noncompetitive, the fact that the vast majority of follow-on contracts were ultimately competed underscores the importance of awarding those follow-on contracts as quickly as possible. The government has opportunities for savings when the contract awarded following a bridge is competitively awarded. For 7 of the 10 contracts where we conducted a price analysis, savings were achieved upon the award of the follow-on contract. Examples include:
An Air Force contract for logistic support services that resulted in a monthly rate reduction of approximately $22,400, or 34 percent;
A daily rate reduction of $10.00 per inmate, or 12.5 percent, for residential reentry services at BOP; and
A Navy contract providing administrative and professional support services, for which the rate was reduced by 15.6 percent, or approximately $16 per man hour.
As shown in Figure 6, the hourly rate changed for three labor categories for an Army computer support services contract. While the rate increased from the predecessor contract to the first bridge, it decreased from the first to second bridge, and decreased again from the second bridge to the competitive follow-on contract. Most significant is the rate reduction for the Database Management Specialist; the award of the follow-on contract resulted in a decreased hourly rate of nearly $21.00, or 28 percent. The contracting official responsible for this contract told us that by awarding the follow-on contract competitively, the incumbent contractor had to re-evaluate what price the market demands for these services.
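The rate comparisons above are simple percent-change computations. The sketch below illustrates the arithmetic; the dollar figures are illustrative, back-calculated from the percentages cited in this report rather than taken from the contract files:

```python
def pct_change(old_rate, new_rate):
    """Percent change from the old rate to the new rate."""
    return (new_rate - old_rate) / old_rate * 100

# Illustrative: an hourly reduction of nearly $21.00 described as about
# 28 percent implies a bridge-contract rate of roughly $75/hour (inferred,
# not taken from the contract file).
bridge_rate = 21.00 / 0.28            # ~75.00/hour
follow_on_rate = bridge_rate - 21.00  # ~54.00/hour
print(round(pct_change(bridge_rate, follow_on_rate)))  # → -28
```

The same function applies to the monthly-rate examples; for instance, a $47,000 increase described as 6.4 percent implies a predecessor monthly rate of roughly $734,000.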
Competition is generally associated with more favorable prices, and we and others have cited potential savings from competition in prior work. For example, a 2013 report by the Department of Veterans Affairs' Office of the Inspector General estimated that the Veterans Affairs' Technology Acquisition Center could have saved 20 percent, or approximately $57.9 million, in acquisition costs if task orders for information technology services had been competed. A variety of reasons caused delays that resulted in the use of bridge contracts, but late completion of documentation needed to solicit follow-on contracts was the most frequent reason that we identified across our sample of 73 contracts. Contracting officials told us that acquisition workforce problems—such as inexperienced staff and frequent turnover of contracting and program office staff—also led to the use of bridge contracts and influenced other delays, such as late completion of acquisition planning documentation and challenges during source selection. The majority of agency officials that we interviewed identified bid protests as a common reason for the use of bridge contracts, and we found that bid protests had caused delays in eight of the 29 contracts included in our in-depth review—roughly a quarter—and created substantial delays in awarding follow-on contracts. Based on our reviews of contract documentation and information provided by agency officials, we found that the most commonly cited reasons for the use of a bridge contract across the 73 contracts were related to acquisition planning issues—in particular the late completion of key acquisition planning documentation, such as statements of work, that is needed to begin a solicitation.
Acquisition planning activities generally begin when the program office identifies a need, involve research and preparation of acquisition documents by both the program office and the contracting office, and conclude when the contracting office issues a solicitation. Our prior work has identified challenges that agencies faced in relation to acquisition planning on contracts for services, such as defining their needs and providing guidance to program offices on timeframes for pre-solicitation activities like defining requirements in a statement of work document. Other frequently identified reasons included delays in source selection, acquisition workforce challenges, and bid protests, among others. Figure 7 illustrates the number of instances each reason was cited for the contracts included in our sample. For most of the contracts, there were multiple reasons driving the use of bridge contracts. Our findings regarding the reasons behind the use of bridge contracts echo the findings of the Institute for Defense Analyses' March 2010 report on competitiveness in contracts for services. That report noted that bridge contracts occur when a delay in the acquisition process prevents the award of a competitive follow-on contract until after the contract in place is due to expire. The report further explained that these delays arise from various sources, including the requiring agency or program office, the contracting office, and other sources such as bid protests. Our in-depth review of 29 contracts further underscored that acquisition planning issues frequently led to the use of bridge contracts and provided additional insights into the nature of these issues. For example, the majority of the contracting officials that we interviewed cited the late submission of key acquisition planning documentation from program officials as one of the most common reasons why bridge contracts are needed.
For 18 of the contracts, contracting officials told us that the statement of work, in particular, was submitted late by the program office, required multiple rounds of revisions before it was ready to be published, or both, contributing to the need for a bridge contract. Acquisition planning challenges stemming from the coordination of program and contracting offices have been highlighted in some of our past work. For example, in a July 2010 report on competition, we found that several contracting officials from different agencies expressed concern that program offices sometimes do not allow them enough time to execute a sufficiently robust acquisition planning process that could increase opportunities for competition. They told us that program offices are insufficiently aware of the amount of time needed to properly define requirements or conduct adequate market research. A contract awarded by DEA highlights some of the acquisition planning problems that we found across the 29 contracts. In this example, DEA contracting officials told us that the program office was late in submitting the statement of work. According to those officials, a contract extension was awarded for six months to accommodate this delay. During this time, the contracting office issued a solicitation for this requirement and received multiple proposals for the follow-on contract, but the source selection board realized during the proposal evaluation phase that the statement of work did not accurately reflect the agency's needs. Upon realizing that a completely new statement of work was required, DEA decided to cancel the solicitation and awarded a six-month bridge contract, which was later extended by three months to accommodate the additional time it needed to award a follow-on contract. Acquisition planning shortfalls caused substantial delays for some contracts we reviewed at other components as well.
For example, at BOP, we found a series of 17 stand-alone bridge contracts—most about three months in length—to provide natural gas service at a penitentiary, put in place after the predecessor contract expired in January 2011 following a four-month contract extension. Contracting officials told us that the program office was extremely late in submitting the necessary paperwork to award a follow-on contract, and did not submit the required acquisition planning documents until February 2014, over three years after the predecessor contract expired. As of the date of this review, contracting officials have yet to award a follow-on contract and attribute these delays to personnel shortages within the contracting office. Specifically, contracting officials told us that they are short-staffed, explaining that only two contracting officers handle 119 different requirements; as a result, this requirement often gets placed on the back burner, resulting in the need for bridge contracts to prevent a gap in critical services. As the previous example highlights, challenges related to the acquisition workforce can exacerbate delays, and thus contribute to the award of bridge contracts. We found that acquisition workforce challenges—in particular, inexperienced and overwhelmed staff, as well as staff turnover—led to the use of bridge contracts and influenced other delays, such as the late completion of acquisition planning documentation and challenges during source selection. Contracting officials from multiple agencies told us that late statements of work were often a symptom of a lack of knowledgeable and seasoned staff in program offices. For example, contracting and program officials from the Air Force and Army told us that workforce challenges were responsible for inefficiencies or missteps that introduced delays into the acquisition process.
Contracting officials at the Air Force told us that in one instance, inexperienced contracting personnel failed to exercise the second annual option for a logistics management contract and the contract expired. The contractor continued to provide services without a contract in place for over five weeks before the mistake was realized. As a result of this and other problems, the last two years of the contract could not be used, and a series of noncompetitive bridge contracts totaling 41 months was used until a competitive follow-on contract was awarded. The same Air Force officials also told us that the majority of their contracting workforce had fewer than five years of experience, which contributed to significant delays in awarding follow-on contracts. Similarly, three Army contracting officials told us that their divisions did not have enough experienced contracting officers or available attorneys to run source-selection boards in order to select vendors for follow-on contracts. One of those officials added that her overwhelmed contracting office also struggled to award new contracts in a timely manner. For example, that official told us that a bridge contract that had been in place for 37 months would likely be extended yet again even though it was possible to award a follow-on contract, because it was unclear if anyone in her division would have enough time to dedicate to that requirement before the current bridge contract expired. Contracting and program officials from this Army component concurred that workforce challenges in both the program and contracting offices were the primary reason why they awarded multiple bridge contracts that lasted more than three years. We have found and reported on government-wide acquisition workforce challenges for many years, including DOD's efforts to rebuild the capacity of its acquisition workforce.
Contracting and program officials from all three agencies cited staff turnover as another driver of bridge contracts. Specifically, officials told us that turnover contributed to delays for 10 of the 29 bridge contracts in our in-depth sample. In one example, a program official at DEA told us that awarding a follow-on contract for counseling services was delayed in part because there were three different contracting specialists working on the requirement while it was being recompeted. This official also stated that there may have been a larger staffing issue in the contracting office during this time that contributed to the solicitation being issued later than expected after the statement of work had been finalized. Through our analysis of contract documentation and information provided by agency officials, we also found that a lack of institutional knowledge within the contracting office was apparent for seven of the 73 contracts in our sample. This acquisition workforce problem was generally the result of staff turnover coupled with a lack of contract documentation. For example, after reviewing a contract file for software within DEA laboratories and interviewing contracting and program officials, we were unable to determine the specific reason why a bridge contract was needed. After reviewing the contract documentation following our visit, the DEA was also unable to identify the specific reason for delay that led to a bridge contract. Similarly, DLA could not provide specific reasons beyond the need for continued services for six contracts in our sample. As a point of comparison, the Navy had greater institutional knowledge despite staff turnover, owing to the high level of detail provided in its J&As and contract documentation. Navy officials we spoke with as part of our in-depth review were generally more aware of the facts and circumstances for the bridge contracts they awarded when compared to their counterparts at other components.
The contracting officials we spoke with stated that the Navy's policy on bridge contracts has curtailed their use, especially since contracting officials have faced pressure from their superiors to avoid bridge contracts. When bridge contracts are needed, Navy officials said they know a high degree of scrutiny by management will ensue. The majority of agency officials that we interviewed identified bid protests as a common reason for the use of bridge contracts. While contract documentation cited bid protests as reasons for delay in five of the contracts in our high-level review, when we reviewed the contracts in-depth, we found that bid protests caused delays in eight of the 29 contracts—roughly a quarter—and that the protests introduced substantial delays to the acquisition process. For example, NIH received nine protests from the incumbent contractor and other unsuccessful bidders on a requirement for utility maintenance services. In this instance, contracting officers awarded a series of short-term bridge contracts for roughly six years to continue to meet the requirement. Similarly, a BOP contract for residential reentry services received multiple protests that resulted in three stand-alone bridge contracts. The total period of performance for that bridge contract requirement was ultimately 27 months. In seven of the eight instances of bid protests that we identified, the incumbent contractor protested the award of a follow-on contract to a new vendor or the terms of the solicitation. However, only two of those protests were sustained and resulted in the incumbent receiving the follow-on contract. We also found that as a result of these protests, incumbent vendors continued providing services—in a noncompetitive environment—well after the predecessor contracts expired. The relationship between bridge contracts and bid protests was discussed in a recent U.S.
Court of Federal Claims decision. In this decision, the Court discussed BOP’s procurement of residential reentry services. In June 2012, BOP issued a Request for Proposals for residential reentry services. During the acquisition process for the follow- on contract, the incumbent’s contract for the residential reentry services expired. To avoid a gap in services while completing the acquisition process, BOP awarded—to the incumbent contractor—a total of three stand-alone bridge contracts with a total period of performance of 21 months. During the period of performance of the last bridge, BOP awarded a follow-on contract to a different vendor. The incumbent contractor filed a protest with GAO in April 2015. Rather than enter into a fourth bridge contract with the incumbent contractor, BOP decided to transfer the inmates of the facility being serviced by the incumbent contractor to other facilities. Based on BOP’s decision to transfer the inmates, the incumbent filed another protest, this time with the U.S. Court of Federal Claims. The Court denied the incumbent’s protest on May 29, 2015. Contracting officials asserted that budget uncertainty and sequestration contributed to delays in the award of follow-on contracts for four of the 29 contracts that we reviewed in-depth. For example, officials responsible for two Navy contracts—one for information technology and administrative support and the other for information technology and information management—told us that budget uncertainties, including furloughs within their office during the government shutdown in October 2013, contributed to delays in the award of follow-on contracts. They also told us that one program office was unable to commit funding to a full-term contract early enough in the acquisition process to award the follow-on contract in a timely manner. 
Similarly, BOP officials told us that sequestration cuts resulted in the award of an additional short-term bridge contract for residential reentry services during the shutdown. However, that particular bridge contract was bookended by two extensions to the predecessor and four other bridge contracts that were caused by bid protests and source selection challenges. Overall, the impact of budget uncertainties and sequestration was not immediately clear or quantifiable for any of the contracts in our sample. While bridge contracts can be a useful tool in certain circumstances to avoid a gap in services, they are typically envisioned to be used for short periods of time. When these noncompetitive contracts are used frequently or for prolonged periods of time, the government is at risk of paying more than it should for goods and services. Because we found that almost all of the bridge contracts in our review were ultimately followed by competitive contracts—which can lead to savings for the taxpayer—the importance of awarding these contracts in a timely manner is heightened. By defining bridge contracts and implementing a policy related to their use, the Navy and DLA have taken important steps to enhance these components’ management of bridge contracts. However, bridge contracts have been identified not only across the three agencies and eight components included in our review, but at other agencies as well, as evidenced by our past work and that of others. Therefore, the importance of defining and tracking bridge contracts is not limited to those agencies included in our review. A uniform, government-wide definition and strategies for tracking and managing the use of bridge contracts would help ensure all agencies have better insights into their use of these contracts and provide agencies with the information necessary to manage their use. Otherwise, agencies are left without a complete picture or understanding of how long a bridge contract has been in place. 
Without such information, it is difficult for agencies to take steps to reduce their reliance on noncompetitive bridge contracts or remediate internal deficiencies—such as issues related to acquisition planning or challenges with the acquisition workforce—that may lead to delays in the award of follow-on contracts. To gain visibility into and enable efficient management of the use of bridge contracts in federal agencies, we recommend that the Administrator of OFPP take the following two actions:
1. Take appropriate steps to develop a standardized definition for bridge contracts and incorporate it as appropriate into relevant FAR sections; and
2. As an interim measure, until the FAR is amended, provide guidance to agencies that includes a definition of bridge contracts, with consideration of contract extensions as well as stand-alone bridge contracts, and suggestions for agencies to track and manage their use of these contracts, such as identifying a contract as a bridge in a J&A when it meets the definition, and listing the history of previous extensions and stand-alone bridge contracts back to the predecessor contract in the J&A.
We provided a draft of this report to OMB, DOD, HHS, and DOJ for review and comment. DOD and DOJ provided technical comments, which we incorporated as appropriate. HHS had no comments. In an email response, OMB's OFPP concurred with our recommendation to provide guidance to agencies on bridge contracts. With regard to our recommendation to develop a definition of bridge contracts and incorporate it in the FAR, OFPP stated its intention to work with members of the FAR Council to explore the value of doing so. Specifically, OFPP stated it agreed with our conclusion that heightened management attention on bridge contracts can help to remediate weaknesses that may sometimes cause protracted reliance on incumbent contractors after contract expiration.
The response further stated that, for this reason, OFPP generally concurs with the recommendation to issue guidance and increase agency attention on these vehicles. It noted that while there is a legitimate role for bridge contracts in helping to avoid lapses in service that can cause mission harm, agencies bear a responsibility, as a part of effective risk management, to ensure this authority is being used only to the extent necessary and in accordance with FAR requirements that are designed to promote competition, including limitations on extensions and execution of justifications and approvals when competition is not used. OFPP stated that it intends to work with the members of the FAR Council and the Chief Acquisition Officers Council to review relevant FAR coverage and discuss the value of developing a regulatory definition for a bridge contract or making other refinements to address non-competitive work with incumbent contractors beyond the period of contract performance. We appreciate that OFPP will be taking steps to explore the option of adding a definition into the FAR, and we continue to believe that a uniform, government-wide definition for bridge contracts is imperative to providing agencies with the information necessary to monitor these contracts and to ensure they are being used as intended. We are sending copies of this report to the Director of OMB, the Secretaries of Defense and Health and Human Services, the Attorney General, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or mackinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Our report examines (1) the insights of selected agencies into their use of bridge contracts; (2) key characteristics of selected bridge contracts; and (3) the reasons why bridge contracts are being used. Since bridge contracts are not defined by the Federal Acquisition Regulation (FAR), we, in consultation with our general counsel, developed a definition for bridge contracts based on our prior reviews and knowledge of bridge contracts and the Institute for Defense Analyses report on competition for service contracts—which defined bridge contracts. For the purposes of this report, we established the following definitions:
Bridge contract. An extension to an existing contract beyond the period of performance (including option years), or a new, short-term contract awarded on a sole-source basis to an incumbent contractor to avoid a lapse in service caused by a delay in awarding a follow-on contract.
Predecessor contract. The contract in place prior to the award of a bridge contract.
Follow-on contract. A longer-term contract that follows a bridge contract for the same or similar services. This contract can be competitively awarded or awarded on a sole-source basis.
Since bridge contracts are not identified in any federal database, to select agencies and components for our review, we developed a customized search methodology using data from the Federal Procurement Data System-Next Generation (FPDS-NG) to identify potential bridge contracts. Details on this customized methodology are outlined in a separate section below. Using the results of the customized methodology, we selected three agencies (the Departments of Defense (DOD), Health and Human Services (HHS), and Justice (DOJ)) and eight components within those agencies for review.
The selected components were as follows:
DOD: Air Force, Army, Navy, and Defense Logistics Agency (DLA)
HHS: National Institutes of Health (NIH) and Indian Health Service (IHS)
DOJ: Drug Enforcement Administration (DEA) and Federal Bureau of Prisons (BOP)
To gain insights into the selected agencies' use of bridge contracts, we collected and analyzed policies and procedures on bridge contracts in place at the selected agencies and components. We interviewed acquisition and contracting officials about their knowledge of the use of bridge contracts and any management controls, such as tracking or approval processes, in place in relation to bridge contracts. Because of its role in providing direction for government-wide procurement policies, regulations and procedures, and to promote economy, efficiency, and effectiveness in government acquisitions, we also interviewed staff at the Office of Management and Budget's Office of Federal Procurement Policy (OFPP) to discuss their views on the benefits and challenges of the use of bridge contracts. We also used federal internal control standards as criteria for assessing agencies' insights into the use of bridge contracts. To identify key characteristics of selected bridge contracts and assess the reasons why bridge contracts are being used, we selected 73 bridge contracts across the eight components to be included in our high-level review, and a subset of 29 of those contracts to be included in our more in-depth review. The selection process for the contracts is described in detail below. For our high-level review, we collected and analyzed contract documentation for the 73 bridge contracts, such as justification and approval (J&A) documents, contract modifications, price negotiation memorandums, and other key file documentation used to support the award of a stand-alone bridge contract or contract extension.
We analyzed this information to identify key characteristics of the bridge contracts, such as the period of performance and the authority used to extend the existing contract or award the stand-alone contract. In addition, we reviewed information in FPDS-NG on these 73 contracts to identify the types of services procured and the contract value. To identify the reasons for the award of the bridge contract and the methods used to determine price reasonableness across the 73 contracts, we analyzed the contract file documentation and, in situations where the contract file documentation did not include information on the reason for award or the methods used to determine price reasonableness, we followed up with agency officials. To gain additional knowledge about the facts and circumstances surrounding the award of bridge contracts, we conducted an in-depth review of the subset of 29 contracts. For the in-depth review, we conducted site visits to six locations selected based on the location of contract files, collected and analyzed contract documentation from the predecessor contract, bridge contract(s), and, if awarded at the time of our review, the follow-on contract, and conducted interviews with contracting and program officials for each contract. We analyzed the contract documentation and the interviews to develop a more in-depth understanding of certain characteristics of bridge contracts, such as the length of time between the end of the predecessor contract and the award of the follow-on contract, the extent to which follow-on contracts were competed, and the change in prices between the predecessor, bridge, and follow-on contracts. To determine the extent to which the price paid by the government changed when a bridge contract was awarded to the incumbent contractor for the same services acquired under a previous contract, we conducted a price analysis for 10 of the 29 bridge contracts included in our in-depth review. 
We compared the rates of individual Contract Line Item Numbers (CLIN) for 5 of these bridge contracts, their predecessor, and competitive follow-on. For 4 of these bridge contracts, which provide residential reentry services to federal inmates, we compared the daily rate paid per inmate, and for the remaining bridge contract included in our analysis, we compared the hourly price paid for three labor categories, commonly referred to as labor rates. The remaining 19 contracts included in our in-depth review were excluded from our price analysis as we were unable to compare these contracts due to changes to the scope of the requirement or pricing type of the predecessor, bridge, or follow-on contract. Although our analysis was by necessity limited to those CLINs or labor categories that could be traced across the predecessor, bridge, and follow-on contracts, it provided insights into pricing trends for similar services over time. We also analyzed the contract documentation and our interviews with contracting and program officials to develop a more in-depth understanding of the reasons for the award of bridge contracts. Since bridge contracts are not identified in FPDS-NG or any other federal database, to select agencies and components for our review, we developed a customized search methodology using data from FPDS-NG to identify potential bridge contracts. We initially searched for the term “bridge contracts” in the description field of FPDS-NG for contracts awarded in fiscal year 2013. We excluded contracts for physical bridges (i.e., structures that carry a pathway or roadway over a gap or barrier). This search yielded a total of 11 bridge contracts. Given the small number of contracts that this search yielded, we developed a customized search methodology using FPDS-NG data fields so as to increase our chances of obtaining a larger data set of potential bridge contracts. 
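The rate comparisons described above amount to tracing a unit rate across the predecessor, bridge, and follow-on contracts and computing the change at each step. A minimal sketch follows; the rates and labor category are hypothetical, not figures from the contracts we reviewed.

```python
# Illustrative sketch of the rate comparison described above. The hourly
# labor rates below are hypothetical; a real analysis would use the
# negotiated rates traced across the three contracts.

def percent_change(old, new):
    """Percent change from an earlier rate to a later rate."""
    return (new - old) / old * 100

# Hypothetical hourly rates for one labor category under each contract.
rates = {"predecessor": 80.00, "bridge": 88.00, "follow_on": 76.00}

bridge_vs_predecessor = percent_change(rates["predecessor"], rates["bridge"])
follow_on_vs_bridge = percent_change(rates["bridge"], rates["follow_on"])

print(f"Bridge vs. predecessor: {bridge_vs_predecessor:+.1f}%")
print(f"Follow-on vs. bridge:   {follow_on_vs_bridge:+.1f}%")
```

In this hypothetical, the sole-source bridge costs more than the predecessor, and the competed follow-on brings the rate back down, which is the pricing pattern such a comparison is designed to detect.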
Our customized search was based on our definition of bridge contracts and included searches for both extensions to existing contracts and stand-alone bridge contracts:

Extensions. To find extensions to existing contracts, we searched FPDS-NG for sole-source and competitive contracts awarded between fiscal years 2010 and 2013 where the current completion date was later than the initial completion date. We excluded annual contract options, contract closeouts, and terminations from this search.

Stand-alone bridge contracts. To find potential stand-alone bridge contracts, we searched for contracts awarded in fiscal year 2013 that met the following characteristics:

- Sequentially awarded contracts (within 90 days) by the same component and contracting organization, at the same location, to the same contractor, for the same services.
- The second of the sequentially awarded contracts was sole-source and had a period of performance of 12 months or less.

We selected the years 2010-2013 so as to increase the likelihood that a follow-on contract had been awarded subsequent to the bridge contracts, and could therefore be included in our review. Using this methodology, we arrived at the selection of the three agencies and eight components identified earlier in the appendix. We selected the agencies and components with consideration of the fact that they were among those with the highest number of potential bridge contracts, and with consideration of ongoing work we had at those entities. We developed a nongeneralizable sample of 73 bridge contracts for services. We focused on service contracts since agency officials and our prior work indicated that bridge contracts were predominantly for services. We used two processes for identifying the 73 contracts included in our review: (1) 52 contracts were identified through our customized search of FPDS-NG, and (2) 21 contracts were initially identified by selected components and verified by us as bridge contracts.
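As a rough illustration, the stand-alone screening criteria can be expressed as a filter over pairs of contract records. The field names and records below are hypothetical stand-ins for FPDS-NG data elements, and "within 90 days" is interpreted here as the gap between the first contract's end date and the second contract's award date.

```python
from datetime import date, timedelta

# Illustrative sketch of the stand-alone bridge contract screen described
# above. Field names and records are hypothetical; the actual analysis
# used FPDS-NG data fields.

def is_potential_standalone_bridge(first, second):
    """Flag a pair of sequentially awarded contracts matching the screen."""
    sequential = (second["award_date"] - first["end_date"]) <= timedelta(days=90)
    same_parties = (first["contracting_office"] == second["contracting_office"]
                    and first["contractor"] == second["contractor"]
                    and first["service_code"] == second["service_code"])
    short_sole_source = (second["extent_competed"] == "sole-source"
                         and second["period_months"] <= 12)
    return sequential and same_parties and short_sole_source

first = {"contracting_office": "A", "contractor": "X", "service_code": "R499",
         "end_date": date(2013, 1, 31)}
second = {"contracting_office": "A", "contractor": "X", "service_code": "R499",
          "award_date": date(2013, 3, 1), "extent_competed": "sole-source",
          "period_months": 6}

print(is_potential_standalone_bridge(first, second))  # True for this pair
```

A contract pair flagged by such a filter is only a potential bridge; as described below, file documentation was still needed to verify each match.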
Using the results of the FPDS-NG customized search methodology previously described, we selected contracting offices within the components reviewed based on the number of potential bridge contracts and the location of the contracting offices. We compiled a list of approximately 600 potential bridge contracts from these contracting offices. In selecting these potential bridge contracts, we aimed to ensure that there was a mix of contract extensions and stand-alone bridge contracts. We excluded contracts that had contract values below the simplified acquisition threshold of $150,000, as these contracts are generally exempt from the competition requirements of the FAR. We provided the lists of contracts to each component in our review and asked them to provide contract award and extension documentation, such as J&A documents and contract modifications, to verify whether the contracts met our definition of a bridge contract. We excluded some potential bridge contracts with certain features for the purposes of this report. At the end of this process, we had identified 52 contracts as bridge contracts to be included in our review. In addition, we selected 21 contracts—12 from DLA and 9 from NIH—from agency lists of bridge contracts that these two components had provided us at the beginning of the review. These contracts were either awarded, in the process of being awarded, or extended in fiscal year 2013 or fiscal year 2014. With the addition of these 21 contracts, our sample for our high-level review totaled 73 bridge contracts. See table 2 for a breakdown of the contracts in our sample. To gain additional knowledge as to the facts and circumstances surrounding the award of bridge contracts, we selected a subset of 29 of the 73 contracts from 6 of the 8 components for a more in-depth review.
These 29 contracts were selected based on several factors, specifically contract value, obtaining a mix of contract extensions and stand-alone bridge contracts, and the location of the contract files. The sample of contracts included in our review is not generalizable to a larger universe, but is designed to provide illustrative examples of the characteristics and rationale for the use of bridge contracts at the selected agencies and components, and supplement the information obtained from our interviews and review of agency policies and procedures. We conducted this performance audit from June 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Janet McKelvey, Assistant Director; Guisseli Reyes-Turnell, Analyst-in-Charge; Peter W. Anderson, Emily Bond, Andrew Burton, Virginia Chanley, Julia Kennon, John Krump, Erin Stockdale, Roxanna Sun, and Holly Williams made key contributions to this report.
When an existing contract is set to expire but the follow-on contract is not ready to be awarded, the government can extend the existing contract or award a short-term sole-source contract to avoid a gap in service. These have been referred to as “bridge contracts.” While bridge contracts can be necessary tools, they are awarded without competition, which puts the government at risk of paying too much. GAO was asked to review federal agencies' use of bridge contracts. This report examines (1) insights selected agencies have into their use of bridge contracts; (2) key characteristics of bridge contracts; and (3) the reasons bridge contracts are used. Because bridge contracts are not defined in the FAR, GAO constructed a definition based on its prior work and that of other federal agencies. GAO reviewed policies and procedures at three agencies that were among those with the highest number of potential bridge contracts. GAO analyzed a nongeneralizable sample of 73 contracts for services, based on a customized search of the federal procurement data system and contract information provided by agencies. For a more in-depth review, GAO selected a subset of 29 contracts based on contract value and other factors. The agencies included in GAO's review—the Departments of Defense (DOD), Health and Human Services, and Justice—had limited or no insight into their use of bridge contracts, as bridge contracts were not defined or addressed in department-level guidance or in the Federal Acquisition Regulation (FAR). However, GAO found that two DOD components, the Navy and Defense Logistics Agency, have instituted definitions, policies, and procedures to manage and track their use. The components took these steps due to concerns that bridge contracts were being used too frequently and reducing competition. Federal internal control standards stipulate that management should identify, analyze, and monitor risks associated with achieving objectives, such as maximizing competition. 
Staff from the Office of Federal Procurement Policy (OFPP), which provides direction for government-wide procurement policies so as to promote efficiency and effectiveness in government acquisitions, acknowledge that the use of bridge contracts may introduce risks related to a lack of competition. Without a definition of bridge contracts and guidance for tracking and managing their use, agencies are not able to fully identify and monitor these risks and increase opportunities for competition. The 73 bridge contracts GAO analyzed varied widely in characteristics such as the type of service and length of contract. Almost half of the contracts were used to procure either professional management services or information technology services. Although bridge contracts are typically envisioned as short-term, GAO found that some bridge contracts spanned multiple years, potentially undetected by approving officials. For example, of the 29 contracts GAO reviewed in-depth, 6 were longer than 3 years. As the figure below illustrates, an Army bridge contract for computer support services was initially planned as a 12-month bridge, but because of subsequent bridges, ultimately spanned 42 months. Even after lengthy bridge contract scenarios, most follow-on contracts were awarded competitively. Of the 26 cases in GAO's review where follow-on contracts were awarded, 23 were awarded competitively, in some instances leading to savings. The fact that competition, which can save the government money, occurred in almost all cases highlights the importance of better management controls over the use of bridge contracts. Acquisition planning delays, such as revisions to statements of work and delays in source selection, as well as an inexperienced and overwhelmed acquisition workforce, bid protests, and budget uncertainties contributed to the use of bridge contracts in the cases GAO studied. Often, more than one of these factors led to the use of a bridge contract.
GAO recommends that OFPP take steps to amend the FAR to incorporate a definition of bridge contracts, and, in the interim, provide guidance for agencies to track and manage their use. OFPP agreed with the recommendation to provide guidance to agencies and plans to explore the value of adding a definition to the FAR.
Carbon offsets can be used by entities that are subject to legal requirements to limit their emissions, such as utilities or manufacturing facilities. Offset programs designed for this purpose are called compliance programs. One such program is the Clean Development Mechanism (CDM), an offset program established by the Kyoto Protocol. The CDM allows nations with binding emissions targets under the Kyoto Protocol— including those participating in the EU ETS—to purchase offsets from projects in developing nations without binding targets. The CDM is the world’s largest offset market, valued at $2.7 billion in 2009, and has registered over 2,700 offset projects in 70 countries. Our prior work found that the CDM provided developed nations with flexibility in meeting their emissions targets but that the program’s effects on emissions were uncertain, in part because the CDM’s screening process could not fully ensure offset quality. There are also “voluntary” carbon offset programs, where purchasers do not face legal requirements to limit emissions but may buy offsets for various reasons. For example, companies may purchase offsets to demonstrate their environmental stewardship, while individuals may purchase offsets to compensate for emissions resulting from their personal travel or consumption of fossil fuels. Because the federal government has not adopted binding limits on greenhouse gas emissions, domestic purchases of carbon offsets generally fall within the voluntary portion of the market. Voluntary programs in the United States include private sector programs, such as the Climate Action Reserve (CAR) and the Voluntary Carbon Standard (VCS), as well as Climate Leaders, an industry-government partnership overseen by EPA. Voluntary offset programs represent a relatively small share of the offset market—in 2009, the total value of the voluntary offset market was approximately $338 million, around one-eighth of the CDM market. Our prior work on U.S. 
voluntary markets suggests that many quality assurance mechanisms exist but the extent of their use is uncertain. Table 1 lists the compliance and voluntary programs we reviewed. While the project review process can vary by program, it often involves the following basic steps: (1) preparing application documents, (2) establishing that the project meets eligibility criteria, (3) approving the project and registering it in a database, (4) monitoring emissions reductions over time, (5) verifying the amount of emissions reductions produced over a certain time period, and (6) issuing offsets. Existing programs generally have an administrative body to oversee offset projects and ensure they meet established quality criteria. Other key participants include project developers, who identify and perform actions that reduce, avoid, or sequester emissions, and third-party verifiers, who ensure that projects adhere to relevant quality assurance mechanisms. Figure 1 illustrates the CDM’s project cycle. Experts and stakeholders identified five key challenges to assessing the quality of offsets in existing programs. First, many experts and stakeholders agreed that the primary challenge is assessing whether the offset project results in additional emissions reductions. Second, emissions reductions from some types of offset projects, particularly soil and forestry projects, can be difficult to measure. Third, carbon stored through soil and forestry projects may not be permanent. Fourth, in some cases it can be difficult to verify that offset projects complied with program rules and that emissions reductions occurred as expected. Fifth, the types of projects that are the most difficult to assess—forestry, international, and certain agriculture projects—may make up the majority of offsets in a future U.S. program, posing challenges for policymakers designing an offset program. 
According to many of the experts and stakeholders we interviewed, the primary challenge to assessing offset quality is determining whether offsets generate “additional” emissions reductions—reductions that would not have occurred without the incentives provided by the offset program. In theory, offsets allow regulated entities to emit more while maintaining the emissions levels established by a cap-and-trade program or other program to limit emissions. However, if the offsets represent emissions reductions that would have occurred anyway, net emissions may exceed the cap and compromise the environmental integrity of the program. We previously identified additionality as a challenge to offsets in 2008 and 2009. Although each program we examined took steps to ensure the additionality of offsets, evidence suggests that non-additional offsets have nonetheless been awarded under some existing programs. For example, the CCX, a voluntary program, awarded offsets to farmers who had practiced the credited activity for years. Several studies on the CDM also suggest that a substantial number of non-additional projects have received offsets, although some experts reported that the CDM has improved the quality of its offsets significantly in recent years. Experts and stakeholders cited a number of reasons why assessing additionality can be challenging, including the following: Difficulty of setting a baseline. Assessing additionality involves comparing a project’s expected reductions against a projected baseline of what would have occurred in the absence of the program. While this is not a challenge unique to offset programs—many policy decisions involve assessing alternative policies against a hypothetical baseline—it may involve a number of assumptions that are uncertain. For example, some programs approve offsets for forest management practices, such as lengthening harvest cycles to allow forests to store carbon for longer periods.
An offset program could establish a baseline for these projects by assessing historical data about how forest owners respond to changes in timber prices and other economic variables. However, it may be difficult to account for the variety of decisions a forest owner may make that affect the amount of carbon stored—for example, not all forest owners may want to maximize the amount of timber produced. Assumptions regarding this and other factors that affect the amount of carbon stored can have a significant impact on the number of offsets awarded, according to some studies. For example, one study suggested that the number of offsets awarded for a hypothetical forest management project could vary by an order of magnitude, depending on the approach used to set baselines. Asymmetric information. To evaluate the additionality of a project, program administrators must often rely on information provided by applicants, and in some cases, this information may be difficult to evaluate. One additionality test used by the CDM requires wind power developers, for example, to establish that a project either is not financially feasible without the revenues from offsets or is not the most economically attractive option. This can involve a complex analysis including assumptions about the internal rate of return for the project, the cost of financing, the relative costs of fuels, and the lifetime of the project. Research suggests that it can be difficult to verify these assumptions, especially since applicants know more details about the project than program administrators or verifiers, and may present data selectively to support claims of additionality. Multiple incentives. According to literature we reviewed, in some cases there may be reasons to pursue an activity that are unrelated to the offset program. 
For example, energy efficiency and renewable energy projects may be profitable on their own, making it difficult to gauge how offset revenue affects these projects’ financial viability. Similarly, conservation tillage is an agricultural practice that can earn offsets because it stores more carbon in soil than regular tillage, but farmers may also practice it for other reasons, such as to help soils retain moisture. One study suggests that conservation tillage increased by 3.5 percentage points between 1998 and 2004 as a share of total planted acres. If conservation tillage offsets are accepted under a future offset program, it may be difficult to determine what portion of future increases is attributable to the offset program. In addition, some land use practices may be eligible for other federal subsidies or policy incentives outside of the offset program, potentially complicating additionality assessments. Misaligned incentives. Some experts suggested that an offset program may create disincentives for policies that reduce emissions. For example, under an offset program that allows international projects, U.S. firms might pay for energy efficiency upgrades to coal-fired power plants in other nations. According to our previous work, this may create disincentives for these nations to implement their own energy efficiency standards or similar policies, since doing so would cut off the revenue stream created by the offset program. For example, some wind and hydroelectric power projects established in China were reviewed and subsequently rejected by the CDM’s administrative board amid concerns that China intentionally lowered its wind power subsidies so that these projects would qualify for CDM funding. In addition, our review of the literature suggests that in some cases an offset program may unintentionally provide incentives for firms to maintain or increase emissions so that they may later generate offsets by decreasing them. 
This potential problem is illustrated by the CDM’s experience with industrial gas projects involving the waste gas HFC-23, a byproduct of refrigerant production. Because destroying HFC-23 can be worth several times the value of the refrigerant, plants may have had an incentive to increase or maintain production in order to earn offsets for destroying the resulting emissions. As we have previously reported, it can be difficult to accurately measure emissions from some types of offset projects, particularly soil and forestry projects. An offset program needs accurate measurements of emissions to ensure that it awards an appropriate number of offsets. According to our review of the literature, the most straightforward way to measure emissions is through direct monitoring. For example, a project can run methane collected from a landfill or coal mine through a meter to measure the quantity collected and destroyed. Similarly, power plants can install monitors to measure their carbon dioxide emissions. However, direct monitoring is not feasible or cost-effective for all types of offset projects, and does not capture the effect that some projects have on emissions elsewhere. Types of offset projects with measurement challenges include the following: Land-use offsets. Land-use offset projects seek to absorb greenhouse gases or reduce emissions by affecting various natural processes. For example, trees absorb carbon dioxide from the atmosphere as they grow, and soils store carbon. However, the precise amounts stored or emitted due to an offset project may be uncertain because some of the underlying natural processes are complex and not fully understood. The amount of carbon absorbed by agricultural soils, for example, depends on the local climate, soil type, vegetation, and past land management practices. 
While precise methods for measuring carbon in soil samples are well established, the level of carbon will vary across a parcel of land, and changes due to the project may be small compared with the total level of carbon in the soil. Accurate estimates can therefore require extensive sampling, which may be prohibitively costly for some offset projects. Carbon storage projects also require ongoing monitoring to assess whether the stored carbon is re-released. According to literature we reviewed, estimates of emissions from land-use offset projects can be more uncertain than those of other projects. For example, the uncertainty of a meter that measures methane captured from a landfill may be less than plus or minus 1 percent, whereas uncertainties of the amount of carbon stored in agricultural soils range from plus or minus 6 percent to plus or minus 100 percent. Dispersed projects. Offset projects that include many small sources can also be challenging to measure. For example, estimating emissions reductions from a project that distributes energy-efficient light bulbs would require assessing light bulb use among recipients and estimating the associated energy savings. According to our review of literature, one option is to collect information from a sample of recipients; however, this can cost more and may involve sampling errors or other errors compared with projects where emissions are directly monitored using a meter at a single point. Projects prone to leakage. The net effect of some types of offset projects may be challenging to measure because of the potential for emissions to increase elsewhere as a result of the project. This is known as leakage. For example, avoiding wood harvest in one area may simply displace harvesting and its emissions to another location. 
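A leakage discount of the kind discussed here can be expressed as a simple adjustment: net reductions are the gross on-site reductions less the share displaced elsewhere. The sketch below is illustrative only; the tonnage and leakage rates are hypothetical, and actual programs estimate leakage rates by project type and region.

```python
# Illustrative sketch of a leakage discount: net reductions equal gross
# on-site reductions minus emissions displaced elsewhere. All figures
# below are hypothetical.

def net_reductions(gross_tons, leakage_rate):
    """Net emissions reductions after discounting for leakage."""
    if not 0 <= leakage_rate <= 1:
        raise ValueError("leakage rate must be between 0 and 1")
    return gross_tons * (1 - leakage_rate)

gross = 10_000  # tons CO2-equivalent avoided at the project site

# Leakage estimates for some land-use projects span nearly the full range.
for rate in (0.0, 0.4, 0.9):
    print(f"leakage {rate:.0%}: {net_reductions(gross, rate):,.0f} net tons")
```

Because estimated leakage rates for land-use projects range from near zero to near 100 percent, as discussed below, the choice of discount can swing the credited reductions from almost all of the gross figure to almost none of it.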
Some studies that assessed different project types in different regions suggest that leakage may be significant, although there is considerable uncertainty about the extent of leakage and the factors that cause it. Estimates suggest that between none or almost all of the emissions reductions from some types of land-use offset projects could be negated by increased emissions elsewhere. Other types of projects may also be at risk. For example, energy-efficiency projects may save resources that are ultimately spent on activities that increase energy use elsewhere. Some experts suggested that measurement costs can affect the viability of certain types of projects. The measurement stringency or degree of accuracy required in a program can affect the costs of offset projects and make some types of projects unviable. Some stakeholders reported that a program will need to balance the benefits of accurate measurements with the costs. Such a balance will shift over time as new techniques and approaches are developed. As we have previously reported, projects that store, or “sequester,” carbon carry the risk that the stored carbon will be re-released into the atmosphere, known as a reversal. The risk of reversal is most commonly associated with projects involving forestry and agricultural soil sequestration. In these types of projects, reversals can occur as a result of human activity, such as logging or changes in tilling practices, or from natural events such as fires, storms, or insect infestations. Addressing the risk of reversal is important because a reversal can negate the environmental benefit of the project. Carbon dioxide can remain in the atmosphere for a long time—up to thousands of years, according to the Intergovernmental Panel on Climate Change. 
In the context of an offset program, this means that a project in which trees are planted in one year but destroyed 30 years later would convey a minimal environmental benefit compared to a project that captured and permanently destroyed methane emitted from a landfill. According to our review of literature and interviews with experts, verification is an important aspect of an offset program because participants may have limited incentives to report information accurately or to evaluate quality. Verification involves confirming that the project complied with program rules and that estimates of emissions reductions are reasonable. In most programs, a third-party auditor conducts the verification, which can involve checking that emissions reduction calculations are correct and making site visits to verify information with independent measurements and observations. The verifier may also review the assumptions underlying the assessment of additionality. According to our review of literature, verification may be challenging because sellers of carbon offsets may have little incentive to report information accurately to program administrators, and buyers may have little incentive to investigate the quality of offsets. Unlike buyers of other commodities, like oil or corn, buyers of offsets may not care about the quality of the offsets they buy and may be primarily interested in lowering their compliance costs by purchasing lower-cost offsets. This is partly because under some designs, buyers may not be liable for the quality of offsets they purchase after those offsets have been issued by a program. On the basis of our review of the literature and interviews with experts, we identified several challenges to verifying offset projects, including the following: Projects in developing countries and those involving complex measurement techniques can be difficult to verify.
Some experts and stakeholders suggested that offset projects in developing countries can be difficult to verify because of varying legal frameworks, lack of available documentation, or other reasons. For example, some verifiers reported that it is sometimes difficult to verify whether project developers have legal ownership of land used in a project. These challenges can vary considerably depending on the country hosting the project. Some verifiers noted that projects involving forestry and agricultural soils—in the United States or in other nations—can be more challenging to verify, since they often involve complex measurement methods. To verify emissions reduction claims in such projects, a verifier must assess the reasonableness of the model or estimation technique used, as well as the data used in the model. Incentives and conflicts of interest may complicate verification. Many experts and some stakeholders reported that misaligned incentives and conflicts of interest may affect the quality of verifications. In most cases, third-party verifiers are selected and paid by project developers. This may give verifiers an incentive to further the goals of the developer—earning offsets at low cost—over the goal of ensuring the quality of offsets. Specifying verification criteria can be difficult. Some stakeholders suggested that the verification criteria used in some programs have been unclear or subject to interpretation. This can make verifications difficult, as verifiers must make subjective judgments as to the reasonableness of assumptions and may interpret program guidelines differently than program administrators intend. For example, according to CDM documentation, about 7 percent of projects authorized by third-party verifiers in 2009 were subsequently rejected by the board that ultimately approves CDM projects. 
According to one study, this is partly because the CDM rules for additionality were unclear or ambiguous, which led to different interpretations between third-party verifiers and the CDM board. In addition, the CDM’s guidelines do not establish a level of confidence required in a verification, known as a materiality threshold. Two verifiers we interviewed suggested that without such a threshold, verifiers may spend considerable effort investigating potential errors that would have a negligible or no impact on emissions reduction estimates. Competence and supply of verifiers may be inadequate. Some stakeholders we interviewed suggested that there has been a limited supply of qualified verifiers. Following spot checks of some verifiers, the CDM suspended four verification firms from 2008 to 2010, in part because of concerns over the skills and experience of staff. Two stakeholders said that the shortage of verifiers is especially acute in developing countries or for more technically demanding project types such as avoided deforestation. The CDM has taken various steps to improve its verification system, and these challenges may be alleviated in the future as verifiers and program administrators gain experience with the verification process. These challenges have raised verification costs, according to our review of literature and stakeholders we interviewed. One stakeholder said that verification can be the single largest cost of developing an offset project. According to information collected by the CDM, costs range from $13,000 to $54,000 to initially register a project and $7,900 to $32,000 to periodically verify emissions reductions in that program. According to two stakeholders involved in verifying CDM projects, these issues have driven up verification costs in the CDM and contributed to a growing backlog of projects. Verification costs could cause some otherwise high-quality offset projects not to be undertaken because they are not financially viable.
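The periodic verification costs cited above imply a minimum project size below which verification alone can consume a project's offset revenue. The back-of-the-envelope sketch below uses the CDM cost range quoted in the text; the offset price is a hypothetical assumption, since actual prices vary by market and year.

```python
# Illustrative break-even sketch: how many offsets a project must generate
# per verification period just to cover the verification cost. The cost
# range is from the CDM figures cited above; the price is hypothetical.

def breakeven_offsets(verification_cost, price_per_offset):
    """Offsets needed per period to cover verification cost alone."""
    return verification_cost / price_per_offset

price = 10.0  # hypothetical dollars per offset (ton CO2-equivalent)

for cost in (7_900, 32_000):
    needed = breakeven_offsets(cost, price)
    print(f"${cost:,} verification -> {needed:,.0f} offsets per period")
```

Under these assumptions, a project generating only a few hundred offsets per period would spend most or all of its revenue on verification, which is consistent with the concern that high-quality but small projects may not be financially viable.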
Experts and stakeholders generally agreed that for some types of offset projects, quality is relatively easy to assess. In particular, many suggested that projects that have one emissions source and involve the metered destruction of greenhouse gases—such as methane flaring from landfills and coal mines—generally produce high-quality offsets. These projects take place at a single location; permit easy, reliable, and continuous monitoring of emissions; and are not at risk of re-releasing emissions. However, offsets from such projects were forecast to be a small portion of total offsets in recent legislative proposals. Further, EPA’s review of recent draft legislation suggests that the potential emissions reductions from these activities may be limited, and therefore may do little to reduce the cost of a future U.S. program to limit emissions. For example, EPA’s analysis of the American Clean Energy and Security Act estimated that allowing landfill, coal mine, and natural gas system methane projects as offsets would decrease the cost of emissions by only 2 percent relative to a program without these projects.

According to our review of the literature, the types of projects that are particularly challenging to assess—including forestry, international, and some agricultural offsets—may account for the majority of offsets. In 2009, CBO estimated that most offsets under proposed U.S. legislation would result from forestry and agricultural practices, with most domestic offsets coming from the forestry sector. CBO also estimated that international offsets would comprise slightly over half of all offsets from 2012 to 2050. Efforts to reduce deforestation in developing countries could be a particularly significant source of offsets, given that up to 20 percent of global greenhouse gas emissions results from tropical deforestation.
However, forestry offsets pose key challenges for measurement, leakage, and permanence, and have therefore had a relatively limited role in existing offset programs thus far.

According to our review of the literature and interviews with experts, policymakers have several options to choose from in addressing challenges with offset quality, but many of these options could increase the cost of offsets and may involve other trade-offs. Nonetheless, addressing these challenges may be valuable since offsets, in principle, could substantially lower the cost of a program to limit greenhouse gases relative to the cost of a program without offsets. The extent of these savings will depend partly on the quality assurance mechanisms used to address offset quality.

On the basis of our review of relevant literature and interviews with experts, we identified several options that address challenges associated with additionality, measurement, permanence, or verification. We also identified steps that could address multiple offset quality challenges at the same time. Finally, we identified four overarching principles that experts generally agreed could enhance offset quality.

On the basis of our review of relevant literature and interviews with experts and stakeholders, we identified several options to address specific challenges to offset quality. Many of these options involve trade-offs—most notably, more stringent quality assurance can increase the cost of offsets. These options are not mutually exclusive, and some experts suggested that a program will likely need to employ a combination of options depending on the type of offsets allowed under the program.

There are several options to assess additionality, although many experts we interviewed stated that it may be practically impossible to ensure that all offsets are additional at the project level.
Still, all of the programs we examined included additionality as a criterion for offset approval, and all took certain straightforward steps to increase the likelihood that issued offsets are additional. For example, all of the programs we reviewed seek to accept only those projects that achieve emissions reductions beyond what is already required by law or regulation, and all require that projects be initiated after a certain date (e.g., the start date of the program). The assumption behind both of these requirements is that projects that cannot meet them were likely motivated by something other than the incentives of the offset program.

All the programs we examined also take one of two approaches to more thoroughly assess the additionality of offsets—a standardized approach or a project-by-project approach. With a standardized approach, a program establishes a standard way of assessing additionality for each type of offset project and uses it for all projects of that type. One way to do this is for a program to review comparable projects and establish a performance level or set of technologies that would be considered additional. For example, a performance level for international electricity projects might reflect the most efficient method of producing electricity that is in use in a given region. Projects that exceed that performance level would then be considered additional. Alternatively, a program could identify technologies or practices that are generally additional. For example, after reviewing current livestock manure waste management practices in the United States, CAR decided that any project that installed a system to capture and destroy methane gas from manure treatment or storage facilities could be considered additional and defined a baseline methodology for all such projects. Therefore, to demonstrate additionality under CAR, a project developer simply has to show that an approved methane collection system has been installed.
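The performance-benchmark logic described above can be sketched in a few lines. The benchmark intensity, project figures, and crediting rule below are hypothetical, chosen only to illustrate the mechanism; they are not drawn from any actual program's standard.

```python
# Minimal sketch of a standardized (performance-benchmark) additionality
# test, as described above for international electricity projects. A project
# whose emissions intensity beats the regional benchmark is deemed
# additional, and offsets are credited against the benchmark baseline.
# The 0.80 tCO2e/MWh benchmark and project figures are hypothetical.

def assess_project(project_intensity, output_mwh, benchmark_intensity=0.80):
    """Return whether the project is additional under the standard and, if
    so, the offsets credited (tCO2e) relative to the benchmark baseline."""
    if project_intensity >= benchmark_intensity:
        return {"additional": False, "offsets_tco2e": 0.0}
    offsets = (benchmark_intensity - project_intensity) * output_mwh
    return {"additional": True, "offsets_tco2e": offsets}

efficient_plant = assess_project(project_intensity=0.50, output_mwh=100_000)
typical_plant = assess_project(project_intensity=0.85, output_mwh=100_000)

print(efficient_plant)  # additional, ~30,000 tCO2e credited
print(typical_plant)    # not additional under the standard
```

The appeal of this approach, as the text notes, is that once the benchmark is set, the test is objective and cheap to apply; the difficulty lies in collecting the data to set the benchmark well.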
In contrast, with a project-by-project approach, additionality can be assessed differently for each project—even projects of the same type—so as to consider the unique circumstances of each project. For example, CDM program documents show that livestock methane capture projects generally have to (1) conduct either an investment analysis to show that methane capture was not attractive without revenue from the sale of offsets, or demonstrate that offsets allow the project to overcome some prohibitive barriers; (2) demonstrate that methane capture is not already common practice in that area; and (3) define an appropriate baseline from which offsets would be awarded. Table 2 compares these two approaches.

The choice of approaches to address additionality involves three basic trade-offs, according to our review of relevant literature and interviews with experts and stakeholders:

1. Stringency versus cost. Regardless of the approach that is used, a more rigorous assessment of additionality can be more costly to implement and exclude some projects that could have produced additional offsets, according to some experts. Two experts we interviewed estimated that relatively lenient offset standards could mean that nearly half of issued offsets are not additional. On the other hand, these experts estimated that stringent offset standards could greatly reduce non-additional offsets but exclude a significant number of potentially additional offsets from the program.

2. Up-front costs versus lower overall administrative costs. Some experts and stakeholders suggested that a standardized approach may reduce administrative costs overall but may also involve higher up-front investments than a project-by-project approach.
For example, the verification to register a project can cost a project developer between $13,000 and $54,000 and can take over 250 days in the CDM’s project-specific process, while the same step involves minimal cost and approximately 4 to 12 weeks under CAR’s standardized approach. However, developing a standard can involve up-front costs for collecting and evaluating information to assess business-as-usual activities, and for soliciting and considering public comments on proposed standards. Although a project-by-project approach may be more expensive to operate over time, an expert suggested that it can be established more quickly and at lower initial cost. This is because the program administrator would not need to establish specific standards for assessing additionality for each type of offset project, although general offset criteria for all projects would still be needed.

3. Flexibility versus objectivity. While standardized approaches are more objective to implement than project-by-project approaches, they are less flexible, according to some experts and stakeholders. Some stakeholders were concerned about subjective and inconsistent decisions that have occurred in some programs that use a project-by-project approach, and these concerns would likely be reduced under a standardized approach. However, once a standardized method is established, it may allow little flexibility in assessing whether a given offset project meets the standard. This lack of flexibility might mean that some projects with the potential to generate additional offsets will be excluded, and some non-additional projects will be included.

Recognizing these trade-offs and that the suitability of a given approach may depend on the type of offset project, many experts recommended a hybrid approach that would use elements of both project-by-project and standardized approaches, and that would be tailored to each offset project type.
For example, a standardized approach may work well for project types where sufficient data on relevant industry practices are available, while a project-by-project approach may be better suited to less common project types.

According to literature we reviewed, one option to address the potential for measurement error is to require project developers to incorporate measurement uncertainty into their emissions reductions calculations, reducing the number of offsets claimed to those that can be measured with a specified degree of certainty. For example, CAR adjusts the number of offsets that can be credited to a forestry project when measurement uncertainty exceeds a certain threshold. Projects measured with high uncertainty receive fewer offsets than comparable projects measured with less certainty. Such deductions can represent a significant share of potential offsets for some types of projects—up to 15 percent for some forestry projects.

Additional options exist for addressing measurement challenges due to the risk of emissions leakage, according to the literature we reviewed. At the project level, some leakage may be addressed by expanding the area of emissions monitoring—for example, for certain project types, VCS tracks local “leakage belts” surrounding the project area. However, this option does not address any emissions that shift beyond a localized region. An alternative is to expand the scale of emissions monitoring to the national or international level—for example, monitoring emissions in the forestry sector or other sectors where leakage is likely to occur. In such a system, adjustments could be made if the emissions in a given sector were higher than expected, given estimated reductions from offsets. However, it may be difficult to isolate the effect of leakage from other factors that affect emissions.
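A minimal sketch of how the two adjustments just described might be applied follows. The 5 percent uncertainty threshold, the deduction schedule capped at 15 percent, and all project figures are hypothetical assumptions for illustration; they are not CAR's or VCS's actual rules.

```python
# Illustrative sketch of two adjustments described above: a deduction for
# measurement uncertainty above a threshold (in the spirit of CAR forestry
# crediting) and a deduction for emissions detected in a surrounding
# "leakage belt" (in the spirit of VCS land-use monitoring). The threshold,
# the 15% deduction cap, and all figures are hypothetical.

def credited_offsets(measured_reduction_tco2e, uncertainty_pct,
                     leakage_belt_emissions_tco2e=0.0,
                     uncertainty_threshold_pct=5.0,
                     max_deduction_pct=15.0):
    """Deduct one percentage point of credit per point of uncertainty above
    the threshold (capped), then subtract leakage-belt emissions."""
    excess = max(0.0, uncertainty_pct - uncertainty_threshold_pct)
    deduction_pct = min(excess, max_deduction_pct)
    after_uncertainty = measured_reduction_tco2e * (1 - deduction_pct / 100)
    return max(0.0, after_uncertainty - leakage_belt_emissions_tco2e)

# A precisely measured project keeps its full reduction...
print(credited_offsets(10_000, uncertainty_pct=4.0))
# ...while an uncertain project with local leakage is credited less
# (15% deduction, then 1,000 tCO2e of leakage subtracted).
print(credited_offsets(10_000, uncertainty_pct=20.0,
                       leakage_belt_emissions_tco2e=1_000))
```

The design point this illustrates is the one in the text: conservative deductions shift the cost of uncertainty onto the project developer, who then has an incentive to measure more precisely.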
While some experts characterized leakage as a particularly difficult challenge, literature we reviewed suggests that assessing the potential for leakage may help policymakers adjust emissions measurements appropriately. For example, leakage may often be driven by the need to meet agricultural and timber demands. Assessing the circumstances of the markets, regions, and countries targeted by an emissions reduction program may help provide information on how much leakage can be expected, enabling program administrators to adjust policies as needed.

Addressing the risk of offset reversals—which occur when carbon stored in trees or soil is subsequently re-released into the atmosphere—is critical to achieving expected reductions under a program to limit emissions, according to literature we reviewed. Developing a policy to address reversals involves deciding how long a project must continue to store carbon, and how to compensate for lost reductions in the event that stored carbon is re-released into the atmosphere.

Under existing offset programs, carbon must be stored for a certain period of time, although these “permanence” requirements vary significantly. In the voluntary offset program CAR, for example, a forestry project must store carbon for 100 years after offsets are issued or pay back the offset credits. In contrast, CCX required a commitment of 15 years. Given that carbon dioxide can remain in the atmosphere for anywhere between 30 years and several centuries, a longer time commitment may help improve the likelihood that offset projects convey their intended environmental benefit. On the other hand, some stakeholders suggested that extended time commitments could reduce participation from landowners and renters, who may be unwilling to commit to 100-year time frames. A CAR official we interviewed noted, however, that CAR had received nearly 140 applications for forestry projects, each of which would be subject to the 100-year commitment.
The CDM takes a different approach by issuing temporary credits for forestry activities, which can be used for compliance purposes only for a certain amount of time. Once a credit expires, the owner must replace it. New temporary credits can be used to replace the expiring credits if the project owner is able to demonstrate that the carbon remains stored. According to literature we reviewed, temporary crediting avoids the need for ongoing monitoring to ensure permanence, and three experts characterized it as the best option to address reversals. However, others expressed skepticism that temporary credits would be attractive to buyers in the context of a mandatory program to limit emissions. One expert, for example, suggested that temporary credits would create ongoing compliance liabilities that offset buyers would be unwilling to carry. According to one study we reviewed, alternative forms of temporary crediting could address these issues—for example, allowing the private market, rather than the administrator of the program, to set contract length to meet the different needs of market participants.

On the basis of our review of the literature and experts we interviewed, we identified several other options that, together or independently, could help ensure that carbon is stored for the specified time or otherwise accounted for:

Hold seller or buyer liable. Policymakers could assign liability to either project developers (sellers) or offset buyers. In the event of a reversal, the liable party would either have to replace the offsets or face sanctions for noncompliance. The advantage of holding the seller liable, according to experts and literature we reviewed, is that the landowner has a greater incentive to avoid reversals. Flexibility is another potential advantage to this option, according to one expert—a landowner that wanted to use the land for other purposes could simply replace the offsets.
However, literature we reviewed suggests the transfer of liability may have to be established through a contract or other mechanism, since land ownership can shift over time. Under the buyer liability option, the responsibility for an offset reversal shifts along with the ownership of the offset. According to some literature we reviewed, this option may give buyers a greater incentive to pursue quality offsets, and liability may be easier to enforce. However, one stakeholder we interviewed suggested that such an approach would significantly dampen program participation because potential offset buyers would be unwilling to take on this level of risk. An unexpected forest fire, for example, could create a significant and immediate financial liability for an offset owner.

Insurance. In the case of buyer or seller liability, private insurance markets may help address the risk of offset reversals. For example, offset owners could insure themselves through private insurance or bonds issued by a bank, and if a reversal occurs, the insurer pays for the cost of replacing the offsets. According to one expert, one advantage of this option is that some private insurance companies may be better equipped to assess risk than the federal government. However, another expert noted that, because offsets are a relatively new commodity, there may not yet be sufficient information to identify risks. This expert therefore recommended against using this option until sufficient data exist to allow a private market system to work at reasonable cost.

Programwide buffer pools. A program could establish a “buffer” pool by setting aside a portion of all offsets from new projects to cover possible future reversals. For example, the VCS requires land-use projects to undergo a risk assessment for non-permanence, which encompasses risks of natural disaster, technical failure, and political instability, among others.
On the basis of this assessment, a percentage of the credits is withheld and put into a buffer pool for use in the event of reversal. According to literature we reviewed, a programwide buffer pool can serve as a type of insurance against unanticipated reversals. However, determining the appropriate size of the buffer pool may be difficult, according to some experts. A smaller buffer pool may not provide enough protection against reversals, whereas a large buffer pool may require applicants to withhold a larger share of their offsets, potentially dampening participation in the program.

There are three basic ways to verify offset projects. First, offset projects can be verified by independent third-party organizations. Nearly all of the programs we examined use this approach. Verifiers are generally chosen and paid by project developers, presenting a potential conflict of interest. Because of this, the programs we reviewed have various requirements governing the relationship between the verifier and the developer. For example, all require conflict of interest reviews, and some impose additional restrictions. In RGGI, for example, verifiers may not have any other direct or indirect financial relationship with project developers. Under some programs, such as the CDM, third-party verifiers may also be liable for failing to adequately verify that emissions reductions have occurred as a result of the offset project. According to many stakeholders, these and other requirements generally prevent potential conflicts of interest from affecting the quality of third-party verifications, although two experts suggested that such policies may not be sufficient.

Second, some experts suggested that a program could itself verify offset projects, either directly or by contracting with third parties.
This could eliminate many potential conflicts of interest by removing the relationship between project developer and verifier, although this is not done in any of the programs we examined. Some stakeholders suggested that having the program select verifiers could be problematic because it could add a layer of bureaucracy and could reduce market competition, among other reasons.

Third, one expert and one project developer suggested that project developers could certify their own information if a program had strong compliance and enforcement provisions to encourage developers to report truthfully. For example, the government could conduct random spot checks or audit a sample of projects. This would eliminate verifications but could increase the risk of fraud, abuse, and mistakes.

In addition to choosing who will verify offset projects, programs face additional challenges related to verification. Experts and stakeholders identified the following options to address these:

Oversight can help align incentives and improve verification. Some experts and stakeholders stressed the need for rigorous oversight to ensure verifications are effective and meet specified goals. This could take the form of accreditation processes to select third-party verifiers and ongoing monitoring of verifications, including spot checks.

Clearly defined guidelines and expectations can facilitate verifications. Some experts and many stakeholders indicated that clear guidelines and expectations are important for effective verification. More specific guidance and more objective criteria can reduce the chance that verifiers and program administrators will interpret information differently.

Standards and training can help improve the competence and supply of verifiers. A program can help ensure that verifiers are competent by establishing standards or a minimum set of qualifications.
For example, the CDM specifies that verifiers must have a certain level of verification experience before they can serve as team leaders. Some stakeholders also reported that training can be useful, although one suggested that the private sector can develop necessary training if standards are clear enough.

On the basis of our review of the literature and experts we interviewed, we identified several other options that—used in combination or separately—may help address multiple challenges to offset quality at the same time. Many of these options involve addressing the quality of the program on aggregate, rather than attempting to ensure the quality of each offset at the project level. This may be necessary because, according to a CBO study, complete quality assurance of every project would be prohibitively costly, particularly for forestry and other challenging types of offsets.

According to our review of the literature, one way to mitigate the negative impacts of non-additional offsets, leakage, and other quality problems is to simply limit the use of offsets in a cap-and-trade program or other program to limit emissions. With this option, the emissions reduction program would ensure that only a fixed percentage of the emissions permits could be affected by any problems with offset quality. All existing emissions reduction programs we reviewed use this option. In the EU ETS, regulated entities are able to use CDM credits for 12 percent of their emissions cap, on average, through 2012. In contrast, a draft Senate bill would have allowed a greater number of offsets into the program—approximately 42 percent of the emissions cap during the first year of the program. These percentages are based on the total emissions cap, not the required emissions reduction. As a result, such limits could mean that regulated entities could use offsets for all of their required emissions reductions, assuming a sufficient supply of offsets was available.
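To make that arithmetic concrete, the sketch below uses hypothetical figures: a 100-million-ton baseline, an 80-million-ton cap, and the 42 percent-of-cap limit discussed above. Apart from that percentage, none of the numbers are drawn from an actual program.

```python
# Worked example (hypothetical numbers) of the distinction drawn above: a
# limit expressed as a share of the total emissions cap can exceed the
# required reduction, so offsets could substitute for all on-site abatement.

def max_offsets(cap, limit_pct_of_cap):
    """Offsets allowed when the limit is a percentage of the total cap."""
    return cap * limit_pct_of_cap / 100

baseline_emissions = 100  # million tons CO2e (hypothetical)
cap = 80                  # a cap requiring a 20-million-ton reduction
required_reduction = baseline_emissions - cap

allowed = max_offsets(cap, limit_pct_of_cap=42)
print(allowed)                        # 33.6 million tons of offsets allowed
print(allowed >= required_reduction)  # True: offsets alone could cover the
                                      # entire required reduction
```

A limit expressed as a share of required reductions instead (for example, half of the 20-million-ton reduction) would cap offsets at 10 million tons, guaranteeing that at least half the reduction occurs at regulated sources.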
RGGI’s approach, on the other hand, limits offsets to no more than 50 percent of required reductions under the cap, which may avoid a scenario in which emissions reductions would be wholly dependent on offsets. Restricting the number of offsets allowed would likely increase the cost of meeting the emissions cap in an emissions reduction program. On the other hand, one expert suggested that while offsets may lower the cost of compliance, such savings are irrelevant if offsets do not represent actual emissions reductions.

Policymakers could also choose to limit the types of projects eligible for offsets, excluding the types most likely to pose quality problems. While existing offset programs we reviewed allow a wide variety of project types, they all also impose some limits on the type of projects they accept (see table 3). In some cases, programs impose limits because of concerns over the likely quality of offsets from certain types of projects. For example, soil sequestration projects, including conservation tillage, are not permitted in the CDM because of difficulties in accurately measuring the amount of carbon that is ultimately absorbed into the soil.

Many experts and stakeholders suggested that project types should only be eligible if they meet key quality criteria. Experts and stakeholders generally agreed on the characteristics of projects that presented relatively few quality assurance challenges:

Projects that represent a single, localized source of emissions are less likely to necessitate resource-intensive sampling and complicated measurement models than projects that cover large areas of land or those with multiple emissions sources.

Projects with emissions that can be measured directly through a meter allow for relatively easy monitoring and verification and are generally not subject to leakage or reversals.
Projects that do not receive subsidies or generate revenue on their own may be less challenging to assess for additionality, since the offset is often the only financial incentive for these activities.

Projects implemented in the United States may be easier to verify than international projects, given that verifiers may be less familiar with the legal, political, and institutional infrastructures of other nations.

Rather than limiting an offset program to only these types of projects, however, some experts cited reasons that the government should allow some flexibility around offset types. First, the supply of offsets from easy-to-monitor, low-risk projects—such as projects to capture fugitive gases from landfills or coal mines—may be limited. Second, some types of offsets that present quality assurance challenges—such as those in the forestry sector—also present large opportunities for emissions reductions. Third, imposing tighter limits on international projects relative to domestic projects could exclude many legitimate reduction opportunities, according to some experts.

Many experts and stakeholders recommended developing a list of acceptable project types carefully over time. Some of them cautioned against codifying a list of acceptable project types in legislation, instead suggesting that the implementing agency choose acceptable project types using guidance from scientific and financial experts. One expert recommended that the agency initially focus on a set of project types that are most likely to produce quality offsets using the experience of existing programs and standards, and gradually build on that list as more information is collected.

According to our previous work, one way to compensate for offset quality problems is to discount the value of offset credits. This could be done in one of several ways, each of which has advantages and disadvantages, according to literature and experts we interviewed:

Discount all offset projects.
Challenges in quantifying offsets range from assessing additionality and setting emissions baselines to measuring and verifying emissions reductions. While ideally an offset program would have measures to address these issues, our previous work suggests that even a rigorous approval process can still allow a substantial number of offsets that do not meet quality criteria. An offset program could seek to compensate for this by estimating the percentage of offsets that do not meet quality standards in the program overall and then discounting all offsets by that percentage. For example, five offset credits could be set as equal to four emissions permits in a cap-and-trade program. The burden of the discount would be borne by offset buyers, who would then need to purchase more offset credits, or by offset suppliers, who would have to perform more emissions-reducing activities. On one hand, some experts characterized this as a relatively simple approach that may help limit the adverse effects of non-additionality or other offset quality issues. However, others suggested that determining the appropriate discount would be difficult and somewhat arbitrary, and some expressed concern that discounting would reduce the chance that additional projects would be viable.

Discount certain project types. This option could be used to prioritize certain types of projects over others, such as projects whose reductions are relatively easy to measure or verify. These projects would receive smaller discounts—or no discount—relative to lower-priority projects. For example, some proposals suggest applying a greater discount to forestry or international projects. However, some experts cautioned that such an approach can impede economic efficiency by reducing the overall supply of offsets or by making certain types of offsets more expensive.

Apply a discount before credits are issued.
Under this option, used by several existing programs, discounts are incorporated into a project’s measurement methodologies before credits are issued, as a way to target projects for which measurement error, leakage, or additionality is a high risk. In general, experts and stakeholders supported this form of discounting when it is possible, but some noted that leakage and additionality can be especially hard to quantify and may be better addressed through other quality assurance options.

On the basis of interviews with experts and our review of literature, we identified four broad principles that could help guide offset program design under any approach to quality assurance:

Identify key goals and priorities for the program. Identifying key goals and priorities can help guide the numerous decisions that will need to be made in designing and administering the program. In many cases, policy mechanisms designed to increase the quality of offsets may also increase their cost. As a result, some experts suggested that policymakers should define an acceptable level of uncertainty—or an acceptable level of cost—on which to base the choice of quality assurance measures. Establishing these parameters may help policymakers determine whether specific types of projects can be reliably verified within the acceptable ranges of uncertainty, taking into account existing methods and technologies.

Align incentives with goals. The design of the offset program creates incentives that may or may not serve program goals. Assessing the incentives created by various program designs can inform design decisions and may help improve outcomes. For example, evaluating whether the incentives offered by the offset program overlap with other incentive programs could help policymakers determine if program adjustments—such as offset discounts or limits on project types—are needed.

Promote transparency. A program might cover projects from a wide range of economic sectors and countries.
Clear and transparent processes and publicly available information can enable concerned third parties to be involved in project oversight, potentially improving the quality of offsets. In addition, maintaining transparency in the development of procedures and standards can help build trust in the program and reduce uncertainty for investors.

Incorporate evaluation and continuous improvement into the program. Carbon markets are relatively new and less mature than other commodity markets, and program administrators will therefore need to be able to respond to an evolving marketplace. This may include adapting to unforeseen consequences of program policies as well as incorporating new technologies and innovations that emerge over time. Experts and literature thus recommended that a program develop a process for ongoing evaluation and assessment of program policies and outcomes. For example, a program could establish an ongoing process to update the methods used to establish baselines so that they accurately reflect current conditions and technologies. According to one expert, a program could also evaluate the effectiveness of its additionality procedures by assessing whether projects that had been screened out by program policies were ultimately implemented.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix III.
This report examines (1) the key challenges in assessing the quality of different types of offset projects, and (2) options for addressing key challenges associated with offset quality if the United States adopted a program to limit greenhouse gas emissions. To address these objectives, we reviewed existing information, assessed approaches in seven offset programs, and conducted semistructured interviews with knowledgeable persons in two broad groups: experts (researchers, economists, and academic experts involved with designing or assessing offset programs) and stakeholders (individuals who directly participate in or administer offset programs). Specifically, we assessed approaches that seven offset programs use to address offset quality. We selected programs based on their representation in relevant literature and assessed two compliance programs—the Clean Development Mechanism (CDM) and the Regional Greenhouse Gas Initiative (RGGI)—and five voluntary programs—Climate Action Reserve (CAR), Chicago Climate Exchange (CCX), Climate Leaders, Gold Standard, and Voluntary Carbon Standard (VCS). We identified and interviewed 19 stakeholders from these programs to better understand quality issues from multiple perspectives. Stakeholders we interviewed included (1) program officials, (2) verifiers, and (3) offset project developers. To select a sample of verifiers, we identified seven verification firms that worked with at least three of the seven offset programs and interviewed representatives from each. To select a sample of project developers, we chose the three U.S.-based and three internationally based offset developers that had the most projects registered with the three largest offset programs in each market. Appendix II lists the stakeholders we interviewed and their affiliations.
We also selected a nonprobability sample of 13 experts—a group that included economists, academic researchers, and specialists in ecology and law—based on their knowledge and experience in relevant areas, recommendations from knowledgeable persons including agency officials and other interviewees, and the relevance and extent of their publications. To ensure coverage and a range of perspectives, we selected experts who had information about key offset types, such as the agriculture and forestry sectors; came from scientific, technical, or economic backgrounds; and provided perspectives on both developing offset standards and assessing the quality of offsets. We verified our list of experts with other experts who had served on previous GAO panels focused on market-based mechanisms to address climate change to ensure that we had sufficient expertise. Appendix II lists the experts we interviewed, which included agency and international officials and researchers. We conducted a content analysis to assess experts' responses and grouped the top responses into overall themes. Not all of the experts provided their views on all issues, and we do not report the entire range of expert responses in this report. Findings from our nonprobability sample of experts and stakeholders cannot be generalized to those we did not speak to. The views expressed by experts do not necessarily represent the views of GAO. To characterize expert and stakeholder views, we identified specific meanings for the modifiers we use to quantify views, as follows: "Many" represents 6 to 10 experts and 7 to 15 stakeholders; "Some" represents 3 to 5 experts and 3 to 6 stakeholders. To understand the scope of current and possible U.S. government work in carbon offsets quality assurance, we interviewed officials responsible for offset-related work at agencies identified as having important roles in either existing programs or current legislation.
These agencies were the Energy Information Administration, the Environmental Protection Agency, the Department of Agriculture, and the United States Agency for International Development. To understand issues related to quality assurance in the Clean Development Mechanism (CDM), we met with officials of the United Nations Framework Convention on Climate Change (UNFCCC), which administers the CDM. We also met with officials of the German Federal Environment Ministry to learn about quality issues in the context of the implementation of the CDM at the national level. GAO provided a summary of the contents of this report to UNFCCC and EPA officials prior to its issuance. We conducted our work from April 2010 to February 2011 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. We interviewed officials from the following organizations:

American National Standards Institute (ANSI)
Clean Development Mechanism (UNFCCC Secretariat and German Federal Environment Ministry)
Climate Leaders (EPA)
Voluntary Carbon Standard
World Bank, Carbon Finance Unit
Det Norske Veritas (DNV)
Environmental Services, Inc.
ERM Certification and Verification Services
First Environment, Inc.

In addition to the contact named above, Michael Hix (Assistant Director), Quindi Franco, Cindy Gilbert, Cody Goebel, Tim Guinane, Richard Johnson, Erik Kjeldgaard, Jessica Lemke, Susan Offutt, and Ben Shouse made key contributions to this report.
Carbon offsets are reductions in greenhouse gas emissions in one place to compensate for emissions elsewhere. Examples of offset projects include planting trees, developing renewable energy sources, or capturing emissions from landfills. Recent congressional proposals would have limited emissions from utilities, industries, or other "regulated entities," and allowed these entities to buy offsets. Research suggests that offsets can significantly lower the cost of a program to limit emissions because buying offsets may cost regulated entities less than making the reductions themselves. Some existing international and U.S. regional programs allow offsets to be used for compliance with emissions limits. A number of voluntary offset programs also exist, where buyers do not face legal requirements but may buy offsets for other reasons. Prior GAO work found that it can be difficult to ensure offset quality—that offsets achieve intended reductions. One quality criterion is that reductions must be "additional" to what would have occurred without the offset program. This report provides information on (1) key challenges in assessing the quality of different types of offsets and (2) options for addressing key challenges associated with offset quality if the U.S. adopted a program to limit emissions. GAO reviewed relevant literature and interviewed selected experts and such stakeholders as project developers, verifiers, and program officials. This report contains no recommendations. According to experts, stakeholders, and available information, key challenges in assessing the quality of offset projects include the following: (1) Additionality. According to many experts and stakeholders GAO interviewed, additionality is the primary challenge to offset quality. Assessing additionality is difficult because it involves determining what emissions would have been without the incentives provided by the offset program.
Studies suggest that existing programs have awarded offsets that were not additional. (2) Measuring and managing soil and forestry offsets. For projects that store carbon in soils and forests, it is challenging to estimate the amount of carbon stored and to manage the risk that carbon may later be released by, for example, fires or changes in land management. Some studies have estimated that projects involving soils and forestry could constitute the majority of offsets under a U.S. program. (3) Verification. Experts and stakeholders said that verifying offsets in existing markets has presented several challenges. In particular, project developers and offset buyers may have few incentives to report information accurately or to investigate offset quality. According to experts, stakeholders, and available information, policymakers have several options to choose from in addressing challenges with offset quality. These approaches often involve fundamental trade-offs, such as increasing the cost of offsets. Nevertheless, some research indicates that including offsets in a program to limit emissions could provide substantial cost savings that would not be provided by a program without offsets. (1) Additionality. One way to assess additionality is project-by-project approval, a lengthy process that considers the individual circumstances of each project. Another approach is to group projects into categories and apply a standard to the entire group—for example, award offsets to all electricity generators with emissions below a certain level. While such standards may be less subjective and less costly to administer, they may also require a considerable up-front investment to collect data for various project types. (2) Measuring and managing soil and forestry offsets. To address these challenges, a program could, for example, adjust the amount of offsets awarded based on measurement uncertainty, or establish a "buffer pool" of offsets to compensate for any re-released carbon.
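The uncertainty discount and buffer-pool options for soil and forestry projects amount to simple arithmetic on the measured reductions. The sketch below is a hypothetical illustration only; the function name and the 15 percent discount and 10 percent buffer rates are assumptions chosen for the example, not parameters of any actual offset program.

```python
# Illustrative sketch (not from the report): applying a measurement-
# uncertainty discount and a buffer-pool set-aside before issuing
# credits for a soil or forestry project. All rates are hypothetical.

def credits_issued(measured_tons, uncertainty_discount, buffer_rate):
    """Return (tradable credits, tons set aside in the buffer pool)."""
    # Discount the measured reductions to account for measurement error.
    discounted = measured_tons * (1 - uncertainty_discount)
    # Withhold a share of the remaining credits in a shared buffer pool
    # to cover carbon later re-released by, e.g., fire or land-use change.
    buffered = discounted * buffer_rate
    return discounted - buffered, buffered

# 10,000 measured tons, a 15% uncertainty discount, a 10% buffer rate:
tradable, buffered = credits_issued(10_000, 0.15, 0.10)
# 10,000 tons -> 8,500 after the discount; 850 of those are withheld
# for the buffer pool, leaving 7,650 tradable credits.
```

If buffered carbon is later re-released, credits would be retired from the shared pool rather than clawed back from individual buyers, which is the rationale some existing programs give for this design.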
(3) Verification. To address this challenge, a program could, for example, hold verifiers liable for problems with offsets they have approved, contract with independent verifiers, and provide for rigorous oversight. Experts also identified options that could address multiple quality assurance challenges, such as limiting the quantity or type of offsets that can be used for compliance. However, limiting the supply of offsets could also raise their cost. Regardless of the program design, many experts said an offset program should clearly identify goals, align incentives with goals, promote transparency, and continuously evaluate progress.
Franchise funds are government-run, self-supporting businesslike enterprises managed by federal employees. Franchise funds provide a variety of common administrative services, such as payroll processing, information technology support, employee assistance programs, public relations, and contracting. This review focuses on DOD’s use of the franchise funds’ contracting services. Franchise funds are required to recover their full costs of doing business and are allowed to retain up to 4 percent of their total annual income. To cover their costs, the franchise funds charge fees for services. The Government Management Reform Act of 1994 authorized the Office of Management and Budget to designate six federal agencies to establish the franchise fund pilot program. Congress anticipated that the franchise funds would be able to provide common administrative services more efficiently than federal agencies’ own personnel. The original operating principles for franchise funds included offering services on a fully competitive basis, using a comprehensive set of performance measures to assess the quality of franchise fund services, and establishing cost and performance benchmarks against their competitors—other government organizations providing the same types of services. Although there are five franchise funds currently in operation, DOD primarily uses two for contracting services—GovWorks, operated by the Department of the Interior, and FedSource, operated by the Department of the Treasury. Figure 1 shows the revenues for GovWorks and FedSource and the percentage of revenue derived from doing business with DOD in fiscal year 2004. Effective contract management requires specialized knowledge and careful attention to a range of regulatory requirements and contracting practices designed to protect the government’s interests. 
In obtaining contracting services through a franchise fund, three main parties share responsibilities for ensuring that proper procedures are followed: government customer—the program office or agency in need of a good or service; franchise fund—the federal entity that provides contracting services; and contractor—the vendor that provides the good or service desired by the government customer. DOD program officials are most familiar with the technical requirements for the goods and services they need. DOD contracting officers can place orders directly through many interagency contracts. Alternatively, DOD pays the franchise fund to assume many of the contracting responsibilities that normally would have been handled by DOD’s contracting officers if the customers had relied on them to purchase the goods or services. Whether DOD makes purchases directly or through another agency, regulatory procedures and requirements are the same, such as ensuring competition, determining fair and reasonable pricing, and monitoring contractor performance. Table 1 shows the basic steps to acquire a good or service through GovWorks or FedSource. GovWorks and FedSource can either make use of their own or other agencies’ contracts, or they can develop new, customized contracts to satisfy a DOD customer’s needs. GovWorks generally uses other agencies’ contracts, and FedSource generally uses its own contracts. Table 2 lists the various types of contracting methods the franchise funds use. While use of other agencies’ contracting services may offer convenience and efficiency, our prior work and that of some agency inspectors general have identified problems with the use of other agencies’ contracting services, including lack of compliance with federal requirements for competition and lack of contractor oversight. In prior work, we found that increasing demands on the acquisition workforce and insufficient training and guidance are among the causes for these deficiencies. 
Two additional factors are worth noting. First, the fee-for-service arrangement creates an incentive to increase sales volume because revenue growth supports growth of the organization. This incentive can lead to an inordinate focus on meeting customer demands at the expense of complying with contracting policy and required procedures. Second, it is not always clear where the responsibility lies for such critical functions as describing requirements, negotiating terms, and conducting oversight. Several parties—the government customer, the agencies providing the contracting services, and, in some cases, the contractors—are involved with these functions. But, as the number of parties grows, so too does the need to ensure accountability. We have previously reported that ensuring the proper execution of the contracting process is a shared responsibility of all parties involved in the acquisition process and that specific responsibilities need to be more clearly defined. GovWorks and FedSource did not always obtain the full benefits of competitive procedures, did not otherwise ensure fair and reasonable prices, and may have missed opportunities to achieve savings on behalf of DOD customers for millions of dollars worth of goods and services. With limited evidence that prices were fair and reasonable, GovWorks sometimes added millions of dollars of work to existing orders—as high as 20 times the original order value. In addition, we found limited and inconsistent evidence in the GovWorks and FedSource contract files we reviewed that the franchise funds sought to negotiate prices or conducted price analysis when required. DOD customers told us they were under the impression that franchise funds ensure competition and analyze prices. However, we found numerous cases in which these practices did not occur. The FAR states that contracting officers must purchase goods and services from responsible sources at fair and reasonable prices. 
Price competition is the preferred method to ensure that prices are fair and reasonable. The FAR also includes special competition procedures for orders placed under the types of contracts the franchise funds use, including GSA schedules and multiple-award contracts. DOD’s procurement regulations have additional procedures for ensuring competition when purchasing services from these types of contracts with certain exceptions—such as urgency or logical follow-on. For example, when ordering from GSA schedules, DOD procurement regulations require contracting officers to request proposals from as many contractors as practicable and receive at least three offers. If three offers are not received, a contracting officer must determine in writing that no additional contractors can fulfill the requirement. Alternatively, the contracting officer may provide notice to all schedule holders that could fulfill the requirement. When prices for the specific services being ordered are not established in the contract, the FAR and GSA ordering procedures require contracting officers to analyze proposed prices and to document that they are determined to be fair and reasonable. For example, when labor rates are established in the contract, relying on labor rates alone is not a good basis for deciding which contractor is the most competitive. The labor rates do not reflect the full cost of the order or critical aspects of the service being provided, such as the number of hours and mix of labor skill categories needed to perform the work. These procedures are designed to ensure that the government’s interests are protected when purchasing goods and services. We reviewed 10 orders—totaling about $164 million in fiscal year 2003 funding—in which GovWorks provided contracting services to DOD’s customers. With the exception of two orders, which were placed against GovWorks’ own contracts, the orders we reviewed were placed against GSA schedules. 
In 5 of the 10 cases, GovWorks sought, but did not receive, competing proposals as required for the types of contracts used. In 3 of the 10 cases, GovWorks sought and received multiple proposals for the work. In the remaining 2 cases, GovWorks placed orders on a sole-source or single-source basis and provided relevant explanations, such as an urgent need for the work and an award to a small disadvantaged business. Table 3 provides details on these 10 orders, and additional information is available in appendix I. In the five cases for which GovWorks sought competing proposals but received only one proposal for each order, GovWorks allowed 2 weeks or less for proposals to be submitted. In four of these cases, orders were ultimately placed with incumbent contractors to fill requirements for ongoing programs. For example, when the Air Force’s Office of the Deputy Chief of Staff Air and Space Operations sought a contractor to provide analytical services, GovWorks gave potential contractors 4 days—around Christmas—to respond. The one contractor that responded was the incumbent and received the order, which totaled $63.4 million. When the Air Force’s Aging Landing Gear Life Extension Program needed a contractor to provide services involving landing gear technology, GovWorks invited 17 contractors to submit proposals and posted the solicitation on the Internet allowing 14 days for proposals to be submitted. The incumbent contractor, which had provided services to the program since its inception in 1998, submitted the only proposal and received the order, which totaled $19.8 million. Each of these 5 orders was subject to the standards for obtaining competing offers for DOD orders, but in only the case of the Aging Landing Gear Life Extension Program did contract documentation indicate that GovWorks had attempted to meet Defense procurement regulations for ordering from GSA schedules. 
Our findings at GovWorks are consistent with our previous work on DOD’s use of other agencies’ contracts. In our prior work we found that the reasons only one contractor responded to opportunities to compete for work included a perception among potential contractors that incumbent contractors have an advantage in competing for ongoing work and that very short time frames to prepare proposals discouraged others from competing. In this review, we found GovWorks received multiple proposals for work when there was no incumbent contractor and longer time frames allowed for competition to occur. In the five cases in which competing proposals were sought but not obtained, we found limited evidence of price analyses in GovWorks’ contract files. In four of these cases, orders were subject to GSA ordering procedures for services requiring a statement of work. In the fifth case, an Interior multiple-award contract, the FAR required price analysis. (See table 3.) Consequently, GovWorks should have determined that the total price was fair and reasonable. GovWorks told us that it had conducted analyses, but we found that the files generally included only brief statements that prices had been determined reasonable, and GovWorks generally could not provide us with documentation showing what data had been gathered or analyses conducted to support the conclusion for the cases we reviewed. In 6 of the 10 cases we reviewed, GovWorks added substantial work beyond what was originally planned without determining that prices were fair and reasonable. For example, GovWorks increased an original order 20-fold by adding $45.5 million for management consulting services for the National Guard Bureau Chief Information Office. GovWorks modified another National Guard order on numerous occasions, this time increasing the value of the original order for an automated information system from $17.6 million to $44.6 million. 
An order for reconnaissance and surveillance flight support to Army combatant commands increased in value from $7.4 million to $34.9 million. The order was intended to provide support in Bosnia, for a period of 15 months with no option to renew, but was expanded to include operations in Colombia, and the period of performance was extended by more than 2 years. In each of these examples, GovWorks assigned the additional work without conducting price analyses to determine whether the prices charged were fair and reasonable. We reviewed seven FedSource projects—amounting to $85 million in fiscal year 2003—and found that the franchise fund did not compete orders it placed under multiple-award contracts or perform analyses to ensure fair and reasonable pricing. FedSource commonly used multiple-award contracts to make purchases for DOD. When placing orders against multiple-award contracts, DOD is generally required to ensure that contract holders have a fair opportunity to submit an offer and have that offer fairly considered for each order with certain exceptions—such as urgency or logical follow-on. In addition, FedSource used Blanket Purchase Agreements and requirements contracts for some of the projects we reviewed. Table 4 provides detail on the seven projects, and additional information is available in appendix I. The FedSource business model involves a two-step process of placing an order under previously awarded contracts and subsequently developing work assignments to define requirements for that order. In the first step, contracting officers issue orders indicating the type and approximate dollar value of work that FedSource anticipates will be required under each contract. This estimated value is based on historical usage. The second step is executed later when DOD identifies its needs. At this point, FedSource administrative personnel define tasks and outcomes and assign work to a contractor. 
In our past work, we recommended that the FAR clarify that agencies should not award large, undefined orders against multiple-award contracts and subsequently define specific tasks. The FAR was revised to encourage agencies to define work clearly so that the total price for work could be established at the time orders are issued. Although this requirement was in effect for the period of our review, we found that FedSource routinely allowed modifications to orders through work assignments that substantially increased the total price of the orders. FedSource did not provide contractors the opportunity to submit offers for orders under multiple-award contracts and have their offers fairly considered, as required by the FAR. FedSource officials told us that their business model does not provide contractors the opportunity to submit offers on orders. Instead, FedSource officials told us that administrative personnel were responsible for providing contractors a fair opportunity to be considered for work under multiple-award contract orders when assigning specific work to contractors. However, we found this generally did not occur. Of the 120 work assignments we reviewed, 75 were for work under multiple-award contracts. We found that in most of the 75 work assignments, FedSource administrative personnel did not provide contractors this opportunity. For example, FedSource used one of these contracts to fill several individual support staff positions at Brooke Army Medical Center at Fort Sam Houston and generally assigned work to one of the three multiple-award contractors without providing the other two contractors an opportunity to be considered. Justifications accompanying these assignments stated that assigning work to more than one contractor might create conflict among assigned staff over variations in pay and benefits. 
The Army’s Fort McCoy used FedSource to obtain contractor support for a variety of construction projects, and FedSource assigned the work noncompetitively for all 12 work assignments we reviewed to 1 of 3 multiple-award contract holders—totaling $7.2 million. The contract holder, a firm specializing in staffing, subsequently passed the work through to local construction companies that Fort McCoy officials had identified. Justifications accompanying some of the projects stated that the FedSource contracting officer’s representative had determined that it was “in the best interest of the government to award task orders to the vendor that solicited and brought in the business.” A FedSource quality review later concluded that these justifications were inadequate. Many months after the assignments were made, a second justification was placed in the contract files citing numerous reasons for selecting the preferred contractor. One of the reasons was that the project required expedited effort to support urgent requirements, which might have been an acceptable reason, except that the justification did not indicate that use of the other two contractors would have resulted in unacceptable delays. In another example, the Navy needed to fill several administrative positions at its 31 regional recruiting centers around the country. Under another purchasing arrangement, FedSource assigned the work to two contractors, one for recruiting centers east of the Mississippi River and the other for centers to the west of the river. These arrangements did not establish prices for any of the services provided, and FedSource personnel told us that they accepted the prices provided by the contractors. This type of purchasing arrangement does not justify purchasing from only one source—contracting officers are still required to solicit price quotations from other sources. However, there was no evidence FedSource personnel had negotiated or analyzed these prices. 
In addition, FedSource did not always demonstrate that prices were reasonable. For example, in two of the customer projects we reviewed, FedSource made work assignments for construction services at the Army’s Fort McCoy and Fort Snelling against a contract for operational support. Because the original contract had a very broad and undefined statement of work that did not explicitly include construction, no prices for that type of work had been established in the contract. For the project at Fort McCoy, the contractor that received the assignment solicited prices from potential subcontractors and presented their price, including a markup, to FedSource. We did not find any analysis to determine that the contractor’s price was reasonable in FedSource’s files. FedSource officials told us that they have since awarded a separate contract for construction services. In four of the five projects involving staffing support, FedSource paid contractors higher prices for services than were established in the contract. Most of the files we reviewed contained no justifications for the higher prices. For example, in our review of 25 work assignments for staffing support services at an Army medical center, 14 of the work assignments were priced higher than the price established in the contracts. In 9 of these cases, FedSource had agreed to additional sick leave or vacation time as part of the hourly rate, but FedSource’s contract file contained no documentation indicating that the contractor employee qualified for the additional benefits. DOD did not follow sound management practices designed to ensure value while expeditiously acquiring goods and services. DOD customers chose to use franchise funds based on convenience, rather than as part of an acquisition plan. DOD conducted little analysis, if any, to determine whether using franchise funds’ contracting services was the best method for acquiring a particular good or service. 
For their part, although franchise funds’ business operating principles require that they maintain and evaluate cost and performance benchmarks against their competitors, they did not perform analyses that DOD could use to assess whether the franchise funds deliver good value. Their performance measures generally focus on customer satisfaction and generating revenues, rather than proper use of contracts and sound management practices. This focus on customer satisfaction and generating revenues provides an incentive to emphasize customer service rather than ensuring proper use of contracts and good value. DOD customers told us that they did not formally analyze contracting alternatives but generally chose to pay GovWorks and FedSource to provide contracting services because the franchise funds provided quick and convenient service. Some customers were dissatisfied with the speed and quality of services provided by DOD’s in-house contracting offices. For example, two DOD customers told us that their contracting offices required 9 months to respond to their purchasing needs, while the franchise fund required only a few weeks. The franchise fund’s ability to place orders quickly was valuable to DOD customers in these situations. DOD customers said that franchise funds’ contracting services were less restrictive than other DOD contracting alternatives. Some DOD customers told us that GovWorks and FedSource made it easier to spend funds at the end of a fiscal year unlike DOD’s in-house contracting offices. Two DOD customers said that GovWorks made it easier to spend small amounts of funding because GovWorks would place orders incrementally as funding became available. Some DOD customers mentioned that using FedSource meant they did not have to “live with the terms and conditions” of a long term contract or that it was easier to replace problem contractor employees. 
In one case, we were told that, if the organization had to fill positions with government employees, it would have less flexibility to hire the personnel it needed in a timely manner. Analysis of contracting alternatives helps to ensure that purchases are made by the most appropriate means and are in DOD’s best interest; however, DOD has no clear mechanism for making this determination when using other agencies’ contracting services. DOD’s guidance on the use of these vehicles has been evolving for several years and has not yet been fully implemented. DOD also lacks a means to gather data on the use of interagency contracts on a recurring basis, although it has been subject over the years to various requirements to monitor interagency purchases. In 2003, in response to a congressional mandate, DOD was unable to compile complete data on spending through interagency contracts. DOD officials told us that their financial systems are not designed to collect this data. Without this type of data, it is difficult to make informed decisions about the use of other agencies’ contracting services. DOD issued guidance in October 2004 that requires the military departments and defense agencies to determine whether using interagency contracts—such as those the franchise funds manage—is in DOD’s best interest. While this guidance outlines procedures to be developed, and general factors to consider, it does not provide specific criteria for how to make this determination and does not require military departments and agencies to report on the use of interagency contracts. DOD has directed the military departments and defense agencies to develop their own guidance to implement this policy. Congress has also recently taken action to ensure DOD’s proper use of interagency contracts. 
The conference report accompanying this legislation established expectations that DOD’s procedures will ensure that any fees paid by DOD to the contracting agency are reasonable in relation to work actually performed. In 2001, Congress adopted legislation requiring DOD to establish a management structure and savings goals for the procurement of services. The legislation also requires DOD to ensure that contracts for services are entered into or issued and managed in compliance with applicable laws and regulations regardless of whether the services are procured by DOD directly or through a non-DOD contract or task order. One of the goals of this legislation was to allow DOD to improve the management of the procurement of services. However, DOD generally chose to use franchise funds for reasons of speed, convenience, and flexibility rather than taking a strategic and coordinated approach to acquiring services. We found that prior to choosing to use a franchise fund, DOD did not analyze costs and benefits or prepare business cases to determine whether the franchise fund provided better value—considering the fees it charges—compared with other alternatives, such as using a DOD contracting office or purchasing goods or services through another federal agency’s existing contract. As a result, DOD customers did not consider opportunities to leverage their buying power when using franchise funds. None of the DOD customers we spoke to analyzed trade-offs between total price, including fees, and the benefits of convenience. For example, on a group of work assignments for construction services valued at $7.2 million, the Army’s Fort McCoy paid FedSource a total of about $1 million, or 17 percent above the subcontractor’s proposed price, for the contractor markup and the franchise fund fee. Most of these assignments were placed towards the end of the fiscal year.
This may have led to a higher price for the services than DOD would have paid in contracting directly with the subcontractors. Figure 2 shows the general process by which the Army’s Fort McCoy used FedSource to obtain contractor support for construction services. The DOD customer said that FedSource made it easier than his own contracting office to assign work with values greater than $25,000 late in the fiscal year because FedSource’s deadlines were not as strict. He also speculated that the subcontractor probably would have charged more if contracting directly with the government because dealing with the government is cumbersome and costly. He did not have information to indicate what the subcontractor’s price might have been, nor did he perform any formal analysis to compare FedSource with other contracting opportunities. Conducting a thorough analysis also might have given DOD a better understanding of the fees paid to make purchases through the franchise funds. For example, DOD customers sometimes paid a GovWorks fee, or service charge, on top of a fee to use another agency’s contract because GovWorks generally uses other agencies’ contracts to make purchases for DOD customers. While some customers were aware of the fees they paid, in two cases, DOD customers selected GovWorks because its fees were lower than fees charged by other agencies; however, the customers did not realize that GovWorks’ fees were in addition to the other agencies’ fees. GovWorks’ fees generally ranged from 2 percent to 4 percent of the price for goods and services purchased, and our analysis showed that FedSource fees ranged from 2 percent to 8 percent for the contracts and orders we reviewed. Congress has mandated that DOD agencies report fees paid for the use of other agencies’ contracts in the past and required DOD to do so again for fiscal year 2005.
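The cumulative effect of such stacked fees can be sketched with a simple calculation. The figures below are hypothetical, not drawn from the orders we reviewed; the fee rates come from the ranges cited above, and the assumption that the franchise fund’s fee is applied on top of the other agency’s fee (i.e., that the fees compound) is ours for illustration.

```python
# Hypothetical illustration of how interagency fees stack when a
# franchise fund uses another agency's contract on a DOD customer's
# behalf. All figures are illustrative, not drawn from actual orders.

base_price = 100_000.00     # contractor's price for the goods or services
other_agency_fee = 0.02     # 2 percent fee for using the other agency's contract
franchise_fund_fee = 0.04   # 4 percent service charge (top of the cited GovWorks range)

# We assume the franchise fund's fee is applied to the price plus the
# other agency's fee; the exact computation may vary by agreement.
subtotal = base_price * (1 + other_agency_fee)   # price plus other agency's fee
total = subtotal * (1 + franchise_fund_fee)      # plus the franchise fund's fee

effective_rate = total / base_price - 1
print(f"Total paid: ${total:,.2f} (effective surcharge {effective_rate:.2%})")
```

Under these assumptions a customer who looked only at the franchise fund’s own 4 percent charge would understate the surcharge, since the effective rate on the base price is roughly 6 percent, more than either fee alone.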
The franchise funds’ business operating principles require that they maintain and evaluate cost and performance benchmarks against their competitors. However, they did not perform analyses that DOD could use to assess whether the franchise funds deliver good value. FedSource claims that it achieves lower prices on goods and services because it aggregates requirements and negotiates price discounts. Further, FedSource claims that competition with other contracting offices provides an incentive to provide better quality at lower cost. However, this incentive may not drive costs down unless customers are sensitive to the cost of doing business with one agency over another and make decisions based on costs. Franchise fund officials told us that demonstrating these advantages was difficult because they lacked insight into the prices customers would have paid when using other contracting alternatives to fill their requirements. FedSource officials also explained that quantifying the value of the other benefits they provide—such as convenience and flexibility—is difficult. Instead, GovWorks and FedSource have used such measures as growth in total contracting activity and revenues as well as customer satisfaction but have little data to demonstrate that they provide better-quality, lower-priced goods and services than other federal contracting alternatives can provide. In fact, GovWorks marketing materials emphasize convenience and value-added service rather than costs. In our prior work, we found that fee-for-service contracting arrangements emphasize the overall sustainability of the contracting operation, as the fees collected are used to cover the costs of doing business, which may lead to a focus on customer service at the expense of compliance with contracting policy and procedures. DOD, GovWorks, and FedSource did not follow federal contracting procedures designed to ensure value while expeditiously acquiring goods and services.
DOD and the franchise funds did not define desired outcomes or the specific criteria against which contractor performance could be measured, and they paid limited attention to monitoring contractors’ work. As we have reported previously, it is not always clear where the responsibility lies for such critical functions as describing requirements, negotiating terms, and conducting oversight. Although the FAR states that contracting officers are responsible for including appropriate quality requirements in solicitations and contracts and for contract surveillance, the franchise funds do not have sufficient knowledge about the DOD customers’ needs to fulfill these responsibilities without the assistance of the DOD customer. Recently, the franchise funds’ contracting operations performed some internal reviews with findings similar to ours, and the funds are working to address the problems. These shortcomings mirror many of the findings of our previous work and are among the reasons we have designated interagency contracting as a governmentwide high-risk area. In the GovWorks and FedSource cases we reviewed, required outcomes were not well-defined, work was generally described in broad terms, and orders sometimes specifically indicated that work would be defined more fully after the order was placed. GovWorks and FedSource files we reviewed lacked clear descriptions of outcomes to be achieved or requirements that the contractor was supposed to meet. The FAR states that contracting officers are responsible for including the appropriate quality requirements in solicitations and contracts. Without these criteria, accountability becomes harder to determine and the risk of poor performance is increased. Clear definition of requirements promotes better mutual understanding of the government’s needs. In a typical situation, the customer—a DOD program office, for example—is best qualified to know what it needs.
However, once a DOD program office chooses to pay a franchise fund to make purchases on its behalf, the office must then rely on the franchise fund to provide the contracting expertise. The two parties have to work together to ensure that requirements for purchases are well-defined with sufficient detail to determine whether desired outcomes are met and the goods and services provided meet the government’s needs. Critical information must be documented in order to make these determinations. GovWorks and FedSource use different processes, and the tables in appendix III explain some of the pertinent contract documents used to define desired outcomes and criteria. In 7 of the 10 GovWorks orders we reviewed, statements of work were very broad. For example, six of these orders contained language stating that specific tasks could be added, deleted, or redefined throughout the period of performance. In some cases, DOD program officials told us that the statements of work were broad because they were not aware of all requirements when the order was placed or because they were operating in a constantly changing technological environment. DOD program officials also told us that the broad statements of work gave them flexibility to add requirements to existing orders as additional needs arose. Orders placed by FedSource against its contracts contained only a very general statement—generally just a few words—describing the work in broad terms and an anticipated dollar value. These orders did not clearly describe all services to be performed or supplies to be delivered so that the full price for the work could be established when the order was placed, as required by the FAR. As noted earlier, FedSource officials explained that in their business model, orders were not intended to describe specific work to be completed.
Instead, FedSource administrative personnel issued work assignments that were intended to provide the clear descriptions of desired outcomes that the orders did not. However, we found that these work assignments were often unclear as well. Five of FedSource’s largest customer projects for DOD involved use of contracts to provide staff. Work assignments for staffing services often described the position to be filled, including a general outline of duties. However, the assignments did not contain criteria for evaluating the work performed by contract employees. In addition, when providing staffing support, FedSource uses these contracts to fill positions individually, rather than describing functional needs or desired results. For example, at an Army medical center FedSource filled over 200 positions individually instead of aggregating these positions into fewer functional requirements. This acquisition approach does not provide contractors with the flexibility to determine how best to staff a function and does not lend itself to a performance-based approach. Under performance-based contracting, the contracting agency specifies the outcome or result it desires and leaves it to the contractor to decide how best to achieve the desired outcome. FedSource officials said they were moving toward a more performance-based contracting approach. To determine whether an environment had been created that would allow improper personal services relationships to develop, we interviewed officials at five DOD program offices that used FedSource contracts to staff individual positions. We asked questions about the work performed by the contractor employees and the relationships between the DOD customers and the contractor employees.
The DOD officials said that generally: the services provided by the contract employee were integral to agency functions or missions; the contractor employees were providing services comparable to those performed using civil service personnel; and the services were provided on site and with the use of equipment provided by the government. With regard to the work relationships, DOD customers told us that government employees assigned and prioritized daily tasks for the contractor employees. FedSource guidelines also state that the government customer is responsible for verifying contract employee hours worked by signing the contractor’s weekly timesheet. Further, a FedSource internal review found that statements of work contained “personal services-type language like ‘under the direction of’ or ‘oversee’ or ‘duties’ or ‘job description.’” Our review also found documents that had been edited to revise similar language. FedSource officials were aware of the potential that these contracts might be used for personal services and took various steps to clarify that personal services were not to be provided. For example, FedSource officials provided training for DOD customers on how to avoid creating a situation that had the appearance of personal services. Although this training is a positive step, poorly defined statements of work provided the opportunity for situations to arise in which personal services relationships could develop. FedSource relied on administrative staff, not contracting officers, to work with the customer to define and assign the specific tasks to be performed or the positions to be filled. A FedSource review found that trained contracting staff was needed for developing task order requirements and warranted contracting officers were required for issuing task orders. 
The FedSource administrative employees do not have the same level of expertise as contracting officers, who have specialized knowledge to ensure compliance with federal regulations and guidelines. Inadequacies we found in FedSource’s contracting practices pointed to the challenges of relying on administrative personnel rather than contracting experts to review statements of work, choose appropriate contracting vehicles, ensure adequate competition, and sign off on assignments of specific work. DOD customers, GovWorks, and FedSource often relied on methods of contract oversight that lacked performance measures to ensure that contractors provided quality goods and services in a timely manner. Typically, the franchise funds failed to include an oversight plan that contained specific quality criteria in their contracts or orders. Without this critical information, neither DOD nor the franchise funds could effectively measure contractor performance. The FAR and DOD’s procurement regulations require contract surveillance and documentation that it occurred. Contract surveillance, also referred to as oversight, is a contracting officer’s responsibility, and DOD pays the franchise fund to assume the responsibilities of contracting officers. The Office of Management and Budget’s Office of Federal Procurement Policy has issued policy stating that contract oversight begins with the assignment of trained personnel who conduct surveillance throughout the performance period of the contract to ensure the government receives the services required by the contract. DOD guidance states that documentation constitutes an official record and the surveillance personnel assessing performance are to use a checklist to record their observations of the contractor’s performance. The guidance also states that all performance should be documented whether it is acceptable or not. 
The GovWorks contract files we reviewed generally did not include contractor monitoring plans, quality assurance surveillance plans, test and acceptance plans, or other evidence of monitoring activities. However, the files did contain evidence that a contracting officer’s representative from the DOD program office had been appointed to assist in performing contractor oversight. Although ensuring that contract oversight occurs is a contracting officer responsibility, GovWorks officials told us that surveillance plans were not usually kept in the GovWorks contracting officers’ contract files. Instead, these plans were maintained by the contracting officer’s representative at the DOD customer agency. When we asked about contract oversight, we found that in the absence of an agreed-upon oversight plan, DOD customers generally ensured that there was some process in place for monitoring performance. Some customers described status meetings and regular progress reports, but generally told us that they had no specific criteria for monitoring contractor performance or established measures for determining the quality of services. Although GovWorks officials told us that their contracting officers did assist customers in measuring the quality of services from the acquisition planning stages through contract completion, we found little evidence that this actually took place. We found that FedSource generally did not ensure that contractor oversight occurred. As was the case with GovWorks, FedSource officials told us that they encouraged DOD to develop criteria for quality. However, FedSource allowed general information—such as job descriptions—to serve as requirements, even though the job descriptions contained no criteria for measuring quality. These descriptions did not provide sufficient information to establish an oversight plan. FedSource did not appoint trained contracting officers’ representatives from DOD to conduct on-site monitoring.
Instead, FedSource relied on its own administrative personnel, who had been trained as contracting officers’ technical representatives but were not located on-site with the customer, to assess contractor performance. Because they were not on-site, they could not observe the quality of the contractors’ work, and FedSource generally took the absence of complaints from DOD customers as an indication that the contractor was performing satisfactorily. A FedSource official explained that FedSource guidelines state that the customer agency’s acceptance of the contract employee’s time sheet indicates agreement that services have met quality standards and requirements. This policy lacks clear criteria and measures to determine whether the contractor has provided quality services. In place of criteria, DOD customers said they generally evaluated performance of contractor staff based on informal observation and customer satisfaction. The lack of adequate oversight is consistent with what we have reported in our recent work on contractor oversight for DOD service contracts, where we found that almost all of those that had insufficient oversight were interagency contracts. DOD explained that contractor oversight is not as important to contracting officials as awarding contracts and does not receive the priority needed to ensure that oversight occurs. DOD concurred with our recommendations to develop guidance on contractor oversight of services procured from other agencies’ contracts, to ensure that proper personnel be assigned to perform contractor oversight in a timely manner no later than the date of contract award, and to ensure that DOD’s service contract review process and associated data collection requirements provide information that gives management visibility over contract oversight.
Aside from monitoring the contractors’ performance, we also found that the departments of the Interior and the Treasury, which operate GovWorks and FedSource, respectively, and the Office of Management and Budget have conducted infrequent reviews of franchise funds’ procurement activities. GovWorks and FedSource have recently conducted internal reviews of their operations that have identified concerns similar to those we found. GovWorks’ 2004 Management Review identified such issues as lack of acquisition planning for work added to existing awards, unanticipated increases in the amounts of orders, and inadequate documentation of such matters as competitive procedures, determinations that changes were within the scope of the contract, the basis of award decisions, and the fairness and reasonableness of prices. FedSource officials recently started conducting “office assistance reviews.” A June 2004 FedSource review identified lack of documentation, use of purchasing agreements beyond their intended parameters and dollar limits, lack of price analysis, lack of quality assurance plans, and the need for warranted contracting officers rather than administrative personnel to perform much of the work. While the operating principles for franchise funds require the funds to have comprehensive performance measures, these measures do not emphasize compliance with contracting regulations and generally focus on customer satisfaction, financial performance, and generating revenues to cover operating costs. Several customers we interviewed were unaware of compliance problems and told us that they believed the franchise funds placed orders on a competitive basis, analyzed prices, or otherwise sought to ensure the best deal for the government when the funds, in fact, did not.
GovWorks has taken steps that address concerns raised in its own reviews, such as increasing training for contracting officers, developing a written acquisition procedures manual, and creating a uniform system of contract file maintenance and sample documents to ensure adequate documentation. GovWorks officials also told us they are trying to improve competitive procedures by requiring all solicitations for DOD work to be posted on e-Buy, an online system to request quotes for products and services. FedSource also has taken steps toward addressing concerns raised in this report, such as quality assurance planning, hiring contracting officers, and restructuring its operations. These initiatives are underway, and it is too early to tell whether they will improve contracting operations at the franchise funds. The Office of Management and Budget’s oversight of franchise funds has been limited. The Office of Management and Budget and the Chief Financial Officers Council established business operating principles as a foundation for effective franchise fund management and, as required by the Government Management Reform Act, submitted an interim report on the franchise fund pilot program to Congress in 1998. Among other efforts, the report recommended that the franchise funds continue to seek opportunities to provide services at the least cost to the taxpayer, contributing to reducing duplicative administrative functions and consequently to the costs of those functions. The report noted that the franchise funds’ performance measures were in varying stages of development. The report recommended that the Office of Management and Budget report to Congress on franchise fund activity prior to the expiration of the pilot authority and continue to develop and implement operating guidance for the franchise fund program.
Although the Office of Management and Budget’s budget examiners conduct some monitoring of franchise funds as part of their general oversight responsibilities, Office of Management and Budget representatives said they have not conducted any comprehensive reviews of franchise funds since they submitted the required report to Congress. Neither have they reviewed the funds’ contracting practices. GovWorks and FedSource, created as a result of governmentwide initiatives to improve efficiency, have streamlined contracting processes to provide customers with greater flexibility and convenience. However, GovWorks and FedSource have not always adhered to competitive procedures and other sound contracting practices. They have paid insufficient attention to basic tenets of the federal procurement system—taxpayers’ dollars should be spent wisely, steps should be taken to ensure fair and reasonable prices, and purchases should be made in the best interest of the government. One factor contributing to these deficiencies is that the departments of the Interior and the Treasury have not ensured that the franchise funds’ contracting services follow the FAR and other procurement policies. The franchise funds need to develop clear, consistent, and enforceable policies and processes that comply with contracting regulations while maintaining good customer service. Another contributing factor is that the roles and responsibilities of the parties involved in the interagency contracting process are not always clearly defined. GovWorks and FedSource are ultimately accountable for compliance with procurement regulations when they assume the role of the contracting officer. However, they often depend on the customer for detailed information about the customer’s needs. To facilitate effective purchasing and to help obtain the best value for goods and services, all parties involved in the use of interagency contracts have a stake in clarifying roles and responsibilities.
Additionally, franchise funds sometimes face incentives to provide good customer service at the expense of proper use of contracts and good value. These pressures are inherent in the fee-for-service contracting arrangement. Because the franchise funds have not always adhered to sound contracting practices, DOD customers must be cautious when deciding whether franchise fund contracting services are the best available alternative. In addition to convenience and flexibility, decisions to use franchise funds should be grounded in analysis of factors such as price and fees. Further, to enhance DOD’s ability to develop sound policies related to the use of franchise funds, DOD needs measurable data that would allow it to assess whether franchise funds’ contracting services help lower contract prices, reduce administrative costs, and improve the delivery of goods and services. This information would also be useful in leveraging DOD’s overall buying power through strategic acquisition planning. No one knows the total cost of using other agencies’ contracting services. Without understanding total cost, value is elusive. In addition, DOD customers should ensure that taxpayers’ dollars are spent wisely by sharing in the responsibilities for developing clear contract requirements and oversight mechanisms. DOD customers are the best source of information about their specific needs and are also best positioned to oversee the delivery of goods and services. Given the incentive to focus on sustaining the franchise funds’ operations and the many service providers from which customers like DOD may choose, objective oversight would help to ensure that franchise funds adhere to procurement regulations and operate as intended. The Office of Management and Budget, which designated and has previously evaluated the franchise funds, is well positioned to periodically evaluate, monitor, and develop guidance to improve the franchise funds’ contracting activities. 
While a number of actions to improve DOD’s use of other agencies’ contracting services are already underway, to enhance these initiatives, we make the following eight recommendations to DOD, the Interior, the Treasury, and the Office of Management and Budget. To ensure that DOD customers analyze alternatives when choosing franchise funds and to provide DOD with the measurable data it needs to assess the value of the franchise funds’ contracting services, we recommend that the Secretary of Defense take the following three actions: develop a methodology, including analysis of trade-offs, to help DOD customers determine whether use of franchise funds’ contracting services is in the best interest of the government; reinforce DOD customers’ ability to define their needs and desired contract outcomes clearly, including working with franchise fund contracting officers to translate their needs into contract requirements and to develop oversight plans that ensure adequate contract monitoring; and monitor and evaluate DOD customers’ use of franchise funds’ contracting services, the prices paid (including franchise fund fees and fees for use of other interagency contracts), and the types of goods and services purchased. To ensure that GovWorks and FedSource adhere to sound contracting practices, we recommend that the Secretaries of the Interior and the Treasury take the following two actions: develop procedures and performance measures for franchise fund contracting operations to demonstrate compliance with federal procurement regulations and policies while maintaining focus on customer service, and develop procedures for franchise fund contracting officers to work closely with DOD customers to define contract outcomes and effective oversight methods.
To ensure that the FedSource workforce has the skills to carry out contracting responsibilities, we recommend that the Secretary of the Treasury take the following action: assign warranted contracting officers to positions responsible for performing contracting officer functions. To provide incentives for the franchise funds to adhere to procurement regulations and to ensure that franchise funds operate as intended, we recommend that the Director of the Office of Management and Budget take the following two actions: expand monitoring to include franchise funds’ contracting operations’ compliance with procurement regulations and policies, making the findings available to ensure transparency and accountability to customers and the Congress; and develop guidance to clarify the roles and responsibilities of the parties involved in interagency contracting through franchise funds. We provided a draft of this report to DOD, the departments of the Interior and the Treasury, and the Office of Management and Budget for review and comment. We received written comments from DOD and the Department of the Treasury, which are reprinted in appendices IV and V, respectively. The Department of the Interior and the Office of Management and Budget provided comments via e-mail. DOD concurred with our recommendations and identified actions it has taken or plans to take to address them. In response to our recommendation that the Secretary of Defense develop a methodology to help DOD customers determine whether the use of franchise funds’ contracting services is in the best interest of the government, DOD indicated that action had been taken through the issuance of a policy memo titled Proper Use of Non-DOD Contracts and subsequent policies issued by the military departments. We acknowledge the DOD policy memo in our report and note that this guidance describes general factors to consider but does not provide specific criteria for how to make this determination.
The policies issued by the military departments establish procedures for review and approval of the use of non-DOD contract vehicles, but do not address methods of determining whether this is in the best interest of the government. Our recommendation takes these actions into account and encourages DOD to go further by developing a methodology to help customers assess contracting alternatives. In response to our recommendation that DOD reinforce DOD customers’ ability to define their needs and desired contract outcomes clearly, DOD maintained that it is the responsibility of the franchise fund contracting officer to decide whether or not the requirement is described accurately. Nonetheless, DOD committed to issue a memo by August 31, 2005, reinforcing the need for DOD customers to define clearly their requirements and articulate clearly their desired outcomes in the acquisition process. We believe that this memo and DOD’s ongoing efforts to educate DOD customers about the use of interagency acquisitions are steps in the right direction. Finally, in response to our recommendation that DOD monitor and evaluate DOD customers’ use of franchise funds’ contracting services, DOD concurred but explained that the data capture systems that would provide this information are not yet in place. DOD stated that the Federal Procurement Data System-Next Generation would provide this capability in fiscal year 2006. However, data collection is just one step in the evaluation process. In addition to collecting data, DOD will also need to compare alternatives and prices in order to make more informed choices. Further, the accuracy and reliability of interagency contracting data in the Federal Procurement Data System-Next Generation will depend heavily on accurate reporting by franchise funds. The Department of the Interior concurred with our recommendations and identified actions it has taken or plans to take to address them.
Interior highlighted 2004 accomplishments and acknowledged a need for better documentation to demonstrate compliance and the value provided. Interior also committed to ensuring an adequate contracting staff and to publishing information to help DOD determine the value of using the franchise fund. In response to our recommendation that the Department of the Interior develop procedures and performance measures for franchise fund contracting operations to demonstrate compliance with federal procurement regulations, Interior highlighted a number of recent efforts to improve performance, including its 2004 management control review and a performance improvement plan that will monitor compliance with federal procurement regulations. This plan establishes a goal of a 75 percent reduction in reportable findings. Interior also stated that it had revised its acquisition review process, awarded a contract for a third-party acquisition review, and provided additional training to its staff. Interior committed to continue monitoring performance and creating guidance as needed. In response to our recommendation that Interior develop procedures for franchise fund contracting officers to work more closely with DOD customers, Interior highlighted efforts to train its contracting officers and develop policies for working with DOD customers. The Department of the Treasury concurred with our recommendations and identified actions it has taken or plans to take to address them, including centralization of FedSource's acquisition workforce under one line of authority to allow for standardization and consistency. In response to our recommendation that FedSource develop procedures and performance measures for franchise fund contracting operations to demonstrate compliance with federal procurement regulations, the Treasury committed to continue to conduct reviews to measure and evaluate compliance with federal procurement regulations and policies.
This is a positive step toward ensuring compliance. The Treasury also said that FedSource had instituted performance-based statements of work for its acquisitions. While this initiative focuses on some aspects of compliance and is important in managing contractor performance, our recommendation addresses the performance of the franchise fund. Developing performance measures related to compliance with procurement regulations would reinforce the agency’s commitment to compliance and provide a means to monitor and demonstrate progress. In response to our recommendation that FedSource develop procedures for franchise fund contracting officers to work more closely with DOD customers, the Treasury indicated that FedSource will also develop procedures to provide its customers with clear guidance for defining contract outcomes. In response to our recommendation that FedSource assign warranted contracting officers to positions responsible for performing contracting officer functions, Treasury stated that FedSource has hired contracting officers to perform all contracting officer functions. OMB concurred with our recommendations that OMB expand its monitoring to include franchise funds contracting operations’ compliance with procurement regulations and policies and develop guidance to clarify the roles and responsibilities of the parties involved in interagency contracting through franchise funds. OMB stated that its Office of Federal Procurement Policy (OFPP) proposed to include the implementation of our recommendations in an undertaking pertaining to governmentwide acquisition contracts and incorporate franchise funds into that project. As part of that project, OMB/OFPP is asking the designated agencies to develop plans to ensure cost-effective and responsible contracting. 
The plans will address (1) training to contracting staff; (2) customer staff training; (3) management controls to ensure contracts are awarded in accordance with applicable laws, regulations, and policies; (4) contract administration; and (5) periodic management reviews. OMB acknowledged that this was only a part of the solution. We encourage OMB to give additional consideration to providing guidance that would clarify roles and responsibilities of the parties involved in interagency contracting through franchise funds. We are sending copies of this report to the Secretaries of Defense, the Interior, and the Treasury; the Director of the Office of Management and Budget; and interested congressional committees. We will provide copies to others on request. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please call me at (202) 512-4841 (cooperd@gao.gov). Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff making key contributions to this report were Amelia Shachoy, Assistant Director; Lily Chin; Lara Laufer; Janet McKelvey; Kenneth Patton; Monty Peters; and Ralph Roffo. In memory of Monty Peters (1948-2005), under whose skilled leadership this review was conducted.

We reviewed legislation establishing the franchise fund pilot program, governmentwide guidance relating to the program, and reports summarizing program outcomes. We held discussions with Office of Management and Budget representatives responsible for overseeing and providing guidance for the program and with Department of Defense (DOD) officials responsible for oversight of procurement issues. We performed work at the franchise funds managed by the departments of the Interior and the Treasury and interviewed officials and reviewed records relating to Interior's GovWorks and Treasury's FedSource programs.
The Interior and Treasury franchise funds accounted for about 76 percent of total revenues for the six franchise funds during fiscal year 2003 (the most recently completed fiscal year at the time we were planning our field work) and about 95 percent of all services the six funds provided DOD. Contracting services the GovWorks and FedSource programs provided accounted for over 95 percent of total revenues at the Interior and Treasury franchise funds. To gain insight into how DOD customers were using franchise funds and into franchise fund contracting processes, we reviewed documentation relating to 17 selected customer projects totaling $249 million in funding provided and interviewed GovWorks and FedSource contracting personnel responsible for these projects and representatives of the DOD customers. To determine how DOD customers determined whether franchise funds provided a good value, we interviewed representatives of DOD customers for the selected projects and reviewed available documentation relating to decisions to use franchise fund contracts. We also reviewed information available from the franchise funds that would indicate whether the franchise funds provided a good value, and interviewed franchise fund officials. To determine how franchise fund contracting officers worked with DOD customers to define measurable quality standards for goods and services and develop effective oversight mechanisms, we reviewed contract documentation for selected customer projects that would establish quality standards, and documentation relating to contract oversight. We also discussed these issues with franchise fund contracting personnel. In addition, we discussed these issues with representatives of DOD customers and reviewed available documentation. 
To determine whether franchise funds followed the contracting practices needed to ensure fair and reasonable prices, we reviewed contract documentation for selected customer projects to assess the extent to which contracting personnel sought competition for work and analyzed proposed prices to determine whether they were fair and reasonable, and discussed these issues with contracting personnel. In addition, we discussed these issues with representatives of DOD customers and reviewed available documentation. To select customer projects for review, we obtained data files from the Department of the Interior's GovWorks and the Department of the Treasury's FedSource contracting programs that reflected customer projects active during fiscal year 2003, and the dollar value of customer funding provided for these projects during the year. We ranked these projects in terms of funding provided and selected projects representing the greatest dollar value of customer funding provided—10 GovWorks projects accounting for $164 million and 7 FedSource projects accounting for $85 million. Table 5 summarizes GovWorks projects, and table 6 summarizes FedSource projects. GovWorks contracting personnel fulfilled the requirements of each project selected by award of a single order, and we reviewed contract documentation related to the relevant order. FedSource contracting personnel, in contrast, fulfilled the requirements of customer projects by award of one or more contracts or orders. Further, FedSource personnel initiated multiple work assignments—in some cases several hundred—to define the specific work that would be performed under each of the contracts awarded or orders placed. Accordingly, we reviewed all contracts awarded or orders placed to fulfill the requirements of the selected customer projects and a sample of work assignments initiated under these contracts or orders.
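The work-assignment selection rule described in the methodology that follows (rank assignments by dollar value; for projects where a few assignments dominate, review the top assignments covering at least 50 percent of project value; otherwise review every assignment of $150,000 or more plus a random sample of the rest) can be sketched roughly as below. The concentration heuristic, sample size, and function name are illustrative assumptions, not part of the methodology:

```python
import random

def select_work_assignments(values, coverage=0.5, threshold=150_000,
                            concentration_cutoff=0.25, sample_size=5, seed=0):
    """Pick indices of work assignments to review, given their dollar values.

    Ranks assignments by dollar value. If a small share of assignments
    (here, no more than a quarter of them; this cutoff is an assumption)
    covers at least `coverage` of total project value, review just those.
    Otherwise review every assignment of `threshold` dollars or more plus
    a random sample of the remaining smaller assignments.
    """
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    total = sum(values)

    # Accumulate the highest-value assignments until they cover the target share.
    top, running = [], 0.0
    for i in order:
        top.append(i)
        running += values[i]
        if running >= coverage * total:
            break

    if len(top) <= max(1, concentration_cutoff * len(values)):
        return top  # concentrated project: the top assignments suffice

    # Dispersed project: all large assignments plus a random sample of the rest.
    large = [i for i in order if values[i] >= threshold]
    small = [i for i in order if values[i] < threshold]
    rng = random.Random(seed)  # fixed seed so the selection is repeatable
    return large + rng.sample(small, min(sample_size, len(small)))
```

For example, a project dominated by a single $900,000 assignment would be covered by that one assignment, while a project of many small assignments would yield the large ones plus a random handful.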
To select sample work assignments for review, we first ranked the work assignments in terms of the dollar value of the work to be performed. For those projects where a relatively small number of work assignments accounted for a significant share of total project value, we selected the highest dollar value assignments representing at least 50 percent of total project value. For those projects where most individual work assignments represented only a small fraction of total project value, we selected all assignments valued at $150,000 or more and a sample—selected at random—of the remaining work assignments. We conducted our review between June 2004 and June 2005 in accordance with generally accepted government auditing standards.

Appendix II: Franchise Fund Operating Principles

- The enterprise should only provide common administrative support services.
- The organization would have a clearly defined organizational structure, including readily identifiable delineation of responsibilities and functions and separately identifiable units for the purpose of accumulating and reporting revenues and costs.
- The funds of the organization must be separate and identifiable and not commingled with those of another organization.
- The provision of services should be on a fully competitive basis. The organization's operation should not be "sheltered" or be a monopoly.
- The operation should be self-sustaining. Fees will be established to recover the "full costs," as defined by standards issued in accordance with the Federal Accounting Standards Advisory Board.
- The organization must have a comprehensive set of performance measures to assess each service that is being offered. Cost and performance benchmarks against other "competitors" are maintained and evaluated.
- The organization should have the ability to adjust capacity and resources up or down as business rises or falls, or as other conditions dictate. Resources to provide for "surge" capacity and peak business periods, capital investments, and new starts should be available.
- The organization should specify that prior to curtailing or eliminating a service, the provider will give notice within a reasonable and mutually agreed time frame so the customer may obtain services elsewhere. Notice will also be given within a reasonable and mutually agreeable time frame to the provider when the customer elects to obtain services elsewhere. Customers should be able to "exit" and go elsewhere for services after appropriate notification to the service provider and be permitted to choose other providers to obtain needed service.
- Full-time equivalents would be accounted for in a manner consistent with the Federal Workforce Restructuring Act and Office of Management and Budget requirements, such as Circular A-11.
- Capitalization of franchises, administrative service, or other cross-servicing operations should include the appropriate full-time equivalents commensurate with the level of effort the operation has committed to perform.
The Department of Defense (DOD) is the largest user of other federal agencies' contracting services. The availability of these contracting services has enabled DOD and other departments to save time by paying other agencies to award and administer contracts for goods and services on their behalf. DOD can access these contracting services a number of ways, such as ordering directly from interagency contracts for commonly needed items. DOD also can pay someone else to do the work. For example, DOD uses franchise funds, which are government-run, fee-for-service organizations that provide a portfolio of services, including contracting services. As part of a congressional mandate, GAO assessed whether franchise funds ensured fair and reasonable prices for goods and services, whether DOD analyzed purchasing alternatives, and whether DOD and franchise funds ensured value by defining contract outcomes and overseeing contractor performance. GovWorks and FedSource, two of the franchise funds that DOD has relied on for contracting services, have not always ensured fair and reasonable prices while purchasing goods and services. The franchise funds also may have missed opportunities to achieve savings from millions of dollars in purchases, including engineering, telecommunications, or construction services. In the course of its review, GAO examined $249 million worth of orders and work assignments from the contracts the franchise funds used to make purchases on DOD's behalf. In many cases, GovWorks sought but did not receive competing proposals. GovWorks added substantial work--as much as 20 times above the original value of a particular order--without determining that prices were fair and reasonable. FedSource generally did not ensure competition for work, did not conduct price analyses, and sometimes paid contractors higher prices for services than established in contracts with no justification provided in the contract files. 
For its part, DOD--in the absence of clear guidance on the proper use of other agencies' contracting services--chose to use franchise funds on the basis of convenience without analyzing whether using franchise funds' contracting services was the best method for meeting purchasing needs. DOD also lacks information about purchases made through other agencies' contracts, including franchise funds, which makes it difficult to make informed decisions about the use of these types of contracts. The franchise funds' business-operating principles require that they maintain and evaluate cost and performance benchmarks against their competitors. However, the franchise funds did not perform analyses that DOD could have used to assess whether the funds deliver good value. The funds' performance measures generally focus on customer satisfaction and generating revenues. These measures create an incentive to increase sales volume and meet customer demands at the expense of ensuring proper use of contracts and good value. DOD and the franchise funds--which share responsibility for ensuring value through sound contracting practices such as defining contract outcomes and overseeing contractor performance--did not adequately define requirements. Without well-defined requirements, DOD and the franchise funds lacked criteria to measure contractor performance effectively. On a separate oversight-related issue, GAO found that the departments of the Interior and the Treasury--each of which has responsibility for the successful operation of its respective franchise fund--and the Office of Management and Budget have performed little oversight of GovWorks and FedSource.
For fiscal year 1998, the Federal Procurement Data Center reported that federal agencies had contract obligations of about $200 billion. Acquisition refers to the process of obtaining goods, services, and space for use by the government. The acquisition process begins with a determination of a need for goods or services and includes deciding on solicitation and selection of sources; award of contracts; and contract administration, completion, and closeout. Personnel in many different occupations perform these acquisition tasks, including those who are in the acquisition profession and those who are in other professions but who become involved in the acquisition process by performing such activities as determining requirements or monitoring contractor performance. Congress, recognizing that billions of dollars are spent each year on federal procurement, the acquisition process is highly complex, and the caliber of the workforce is critical to the efficiency and effectiveness of the acquisition process, has expressed concern over the years about the expertise of the federal acquisition workforce. Every major congressional acquisition reform initiative since 1972 has included steps toward improving the acquisition workforce. Steps taken have included such measures as designating a central agency to provide leadership for acquisition workforce development, establishing minimum qualification requirements, requiring enhanced performance incentives, and giving greater visibility to funding for training the acquisition workforce. The December 1972 report of the Commission on Government Procurement recommended improvements in the efficiency and effectiveness of the procurement process through various measures, including improving the caliber of the acquisition workforce. Since then, Congress and the executive branch have taken actions designed to improve the acquisition workforce. 
In 1974, Congress passed legislation establishing OFPP and, over the years, assigned it responsibility to provide direction of procurement policy and leadership in the development of executive agency procurement systems, including the professional development of acquisition personnel. Through legislation, Congress directed that the Federal Acquisition Institute, under the direction of OFPP, promote governmentwide career management programs for a professional acquisition workforce. The Institute carries out this role by such means as periodically analyzing acquisition career fields, developing competencies for acquisition positions, and developing acquisition training courses. In February 1996, Congress enacted the Clinger-Cohen Act (P.L. 104-106). Section 4307 of Clinger-Cohen, entitled "Acquisition Workforce," amended the OFPP Act and requires OFPP to, among other things, (1) establish minimum acquisition workforce qualification requirements, (2) promote uniform implementation of acquisition education and training requirements among agencies to the extent this is consistent with their missions, (3) ensure that agencies collect and maintain standardized information on the acquisition workforce related to Clinger-Cohen's implementation, and (4) evaluate agencies' implementation of Clinger-Cohen. In addition, Clinger-Cohen requires civilian agencies to establish, in consultation with OFPP, policies and procedures for effective management, including education and training requirements, for their acquisition workforces, and to ensure uniform implementation of policies and procedures among components to the maximum extent practicable.
Clinger-Cohen further requires civilian agencies to separately identify the funding levels requested for acquisition workforce education and training in their congressional budget justification documents submitted in support of the President’s budget and provides that agencies may not obligate funds appropriated for acquisition workforce education and training under the act for any other purpose. In September 1997, after consulting with agency procurement executives, OFPP issued Policy Letter 97-01 that set forth governmentwide policies and approaches for implementing Clinger-Cohen’s acquisition workforce provisions. Among other things, OFPP directed agencies to establish core training for contract specialists (GS-1102), contracting officers, purchasing agents (GS-1105), contracting officer representatives, and contracting officer technical representatives, and at least 40 hours of continuing education or training every 2 years for contract specialists and contracting officers. There is one main occupational series that federal employees involved in acquisition work fall into—GS-1102. Contract specialists are defined as a broad category of employees whose positions are in the GS-1102 occupational series. This series includes those who perform the duties of contracting officers. Contracting officers are federal employees with the authority to bind the government legally by signing a contractual instrument. Purchasing agents, who by definition are in the GS-1105 occupational series, are federal employees who generally issue delivery orders against established contracts. Contracting officer representatives and contracting officer technical representatives are federal employees who have been designated by a contracting officer to perform certain contract administration activities, some of which relate to program or technical issues; these categories of acquisition personnel can be in a variety of OPM occupational series. 
In Policy Letter 97-01, OFPP delegated to the Federal Acquisition Institute the responsibility for developing a governmentwide management information system that would allow departments and agencies to collect and maintain standardized acquisition workforce information, including training data, and that would conform to standards established by OPM for its Central Personnel Data File. Although OPM has data on the total number of federal employees in the GS-1102 and GS-1105 series, it does not have data on the numbers of acquisition personnel, such as contracting officers or contracting officer technical representatives who are in other job series. Therefore, because agencies have acquisition personnel in job series other than the GS-1102 and GS-1105 series, it is not possible to determine the total number of acquisition personnel governmentwide at this time. GSA and VA, however, have estimated the number of contracting officer representatives, contracting officer technical representatives, and other acquisition personnel they employ. Most recent OPM data show that as of March 1999, there were a total of about 31,400 acquisition personnel in the GS-1102 and GS-1105 job series, of whom about 20,900 were in the Department of Defense and about 10,500 were in civilian agencies. According to GSA, as of December 1999, it had 3,146 acquisition personnel, including 1,319 in the GS-1102 and GS-1105 job series, 1,383 contracting officers who were not in the GS-1102 or GS-1105 job series, and 444 contracting officer representatives or contracting officer technical representatives, who were not contracting officers. In addition, GSA also reported that 253 of the 1,383 contracting officers were contracting officer representatives or contracting officer technical representatives. 
According to VA, during 1999, it had 4,357 acquisition personnel, including 1,724 contracting officers in the 1102, 1105, or other job series, such as program analyst (GS-345), general engineering (GS-801), realty specialist (GS-1170), and prosthetic representative (GS-672); 2,355 contracting officer representatives or contracting officer technical representatives who were not contracting officers; and 278 others, such as supply management specialist (GS-2003) personnel and procurement, clerical, and technical (GS-1106) personnel. In addition, VA also reported that 21 of the 1,724 contracting officers were contracting officer representatives or contracting officer technical representatives. Neither GSA nor VA has comprehensive organizationwide data showing the extent to which its acquisition workforce has received required training. Further, although both agencies have efforts under way to provide training, training records for acquisition personnel we reviewed at locations for each agency were incomplete; some acquisition personnel at each location had not met all of their training requirements; and, contrary to OFPP's policy, neither agency had established core training requirements for all categories of acquisition personnel. Both GSA and VA lacked organizationwide data on the status of training provided to their acquisition workforces. Without such information, neither agency nor OFPP can be assured that Clinger-Cohen Act requirements relating to the training of the acquisition workforce are being met. GSA and VA each have both automated information systems and manual records that have some information on their acquisition workforces. For example, GSA's automated personnel information system contains demographic information, such as an employee's name, job series and grade, location, and education.
We found that the education level for 7 of the 19 (37 percent) newly hired contract specialists was erroneous and had to be corrected to reconcile with the records located in the field. Furthermore, a GSA official told us that this automated system does not contain centralized data on the extent to which GSA's acquisition workforce meets core training and continuing education requirements. Instead, training records of this nature are maintained at the local level. With respect to VA, its Office of Acquisition and Materiel Management centrally collects and maintains training information on contracting officers with intermediate- and senior-level warrants. The information includes the employee's name, title and grade, facility, core training completed, education level, and warrant level, but does not include core training information for contracting officers with basic-level warrants and contracting officer technical representatives. According to VA, its field offices are to maintain training information on contracting officers with basic-level warrants. VA officials told us that their headquarters' database does not contain up-to-date information on contracting officers' training for any warrant level because they suspended maintenance of their existing database in anticipation of the implementation of the governmentwide management information system in 1999. During our review of training records at VA's Dallas medical center, we found that 10 of the 11 intermediate- and senior-level contracting officers' headquarters database files were incomplete. In addition, each agency also maintains hard copy personnel files for its employees that, according to each agency, are supposed to contain a variety of data, including warrant level and training received. However, about one-third of the files we reviewed at the two agencies' field locations were incomplete.
Files were incomplete for 25 of the 70 (36 percent) files we reviewed at GSA's Greater Southwest Regional Office and for 8 of the 25 (32 percent) files we reviewed at VA's Dallas medical center. In these instances, files frequently lacked documentation that contracting officers had met core training and continuing education requirements. We had to request additional information from the individual contracting officers or agency officials regarding warrants, core training, or continuing education for these 33 individuals. The contracting officers and agency officials provided us with the additional information. In January 1998, GSA's Inspector General also found that training records were incomplete at GSA's Greater Southwest Regional Office. Specifically, 48 of the 86 (56 percent) contracting officer files that the Inspector General reviewed lacked sufficient documentation to support the assertion that these individuals had completed all the required training for their type of appointment or warrant level. Although the Regional Administrator agreed with the Inspector General's recommendation to fully document all pertinent training, a July 1999 Inspector General report concluded that the Regional Administrator's action plan was not yet fully or satisfactorily implemented. OFPP Policy Letter 97-01 directs executive agency heads to establish core training for acquisition personnel. GSA and VA have established core training for acquisition personnel who need a warrant. Table 1 shows the GSA and VA contracting officer warrant levels, contracting authority, and number of core training courses required to obtain a warrant. GSA and VA issue permanent warrants to contracting officers who have completed the core training and who have the necessary work experience and formal education when there is a need for a warrant at a location.
In addition, the agencies issue interim warrants, when the need arises, to contracting officers at all warrant levels except basic, for a specified period to permit the completion of core training for a permanent warrant. Interim warrants are usually valid for up to 3 years at GSA and for up to 6 months at VA. While GSA and VA have established core training requirements for contracting officers, neither has established such requirements for contracting officer representatives and contracting officer technical representatives who do not hold a warrant. For GSA, these two groups represented 14 percent (about 444) of its 3,146 acquisition personnel, and for VA, they represented 54 percent (about 2,355) of its 4,357 acquisition personnel. GSA officials told us that they had not provided acquisition-related training to contracting officer representatives and contracting officer technical representatives who are not warranted because these personnel performed limited acquisition tasks. However, a GSA official told us they now plan to include all contracting officer representatives and contracting officer technical representatives who are not warranted as part of the acquisition workforce. In this regard, GSA planned to prescribe computer on-line training for contracting officer representatives and contracting officer technical representatives who are not warranted in the near future. VA did not require training for contracting officer technical representatives until November 1999, but according to a VA official, VA strongly encouraged and supported this training. Since 1997, VA had trained more than 340 contracting officer technical representatives and had spent approximately $46,000 in fiscal years 1998 and 1999 on contracting officer technical representative training, according to a VA official.
In addition, this official told us that VA had electronically distributed the Federal Acquisition Institute's contracting officer representative workbook and VA's contracting officer technical representative handbook to its acquisition workforce. Further, VA has made available to its acquisition workforce the Federal Acquisition Institute's on-line contracting officer representative mentoring course. In November 1999, VA issued a policy letter requiring contracting officer technical representatives to receive training that covers the competencies contained in the Federal Acquisition Institute's contracting officer technical representative workbook. Our review of contracting officer training records in the two field locations we visited showed that 69 of the 70 (99 percent) GSA contracting officers had completed the core training, and 18 of the 25 (72 percent) VA contracting officers had completed the training (see table 2). According to a GSA Greater Southwest Regional Office official, the one contracting officer who had not completed the training did not need the warrant, and the warrant was suspended for this individual. VA's Dallas medical center officials told us that two of the seven contracting officers who had not completed the required training had not done so because VA headquarters had not scheduled it. These officials told us that they plan to schedule the two contracting officers for the basic procurement class; however, they have not set a specific date for this training. Until we brought it to their attention, these officials were not aware that the other five contracting officers had not completed the core training. The officials told us they planned to address the training deficiencies by having the five contracting officers read acquisition training materials and take a test.
As a result of our findings, VA officials told us that they plan to incorporate reviews of acquisition workforce training in the periodic reviews of its acquisition operations that VA recently began conducting. Since July 1999, VA’s Office of Acquisition and Materiel Management has conducted four of these reviews, but the reviews have focused primarily on reviewing contract files for compliance with procurement regulations. Since September 1997, GSA’s Inspector General has issued reports stating that some acquisition personnel at various locations had adequate training whereas other personnel lacked sufficient training to do their jobs, including not completing the core training and not having specified training. For example, the Inspector General reported that GSA’s Helena Field Office (Helena, Montana) granted simplified acquisition warrants to contracting officers who had not satisfied specified training requirements. Both agencies have established policies mandating continuing education for contracting officers who have completed core training, but GSA’s policies are not consistent with OFPP policy. OFPP requires 40 hours of continuing education every 2 years; GSA requires 16 hours every 2 years for contracting officers with basic and simplified acquisition-level warrants, and 40 hours every 2 years for intermediate- and senior-level warrants. According to GSA’s Director of Acquisition Policy, contracting officers with basic and simplified acquisition-level warrants do not require 40 hours of training because they perform less complex work than that of contracting officers with intermediate- and senior-level warrants. We reviewed the training records for the 46 GSA and 8 VA contracting officers at the two field locations we visited, who were required by OFPP policy to meet continuing education requirements by December 1999, and found that 22 out of 46 (48 percent) of GSA’s and 4 out of 8 (50 percent) of VA’s contracting officers had not met the OFPP requirements. 
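The completion rates cited above follow directly from the reported counts. As a quick illustrative check (a minimal sketch; the counts are the figures stated in the text, rounded to the nearest whole percent):

```python
# Reported counts from the two field locations reviewed.
core_training = {"GSA": (69, 70), "VA": (18, 25)}        # (completed, total)
continuing_ed_unmet = {"GSA": (22, 46), "VA": (4, 8)}    # (not met, total)

def pct(part, total):
    """Share of the total, rounded to the nearest whole percent."""
    return round(100 * part / total)

for agency, (done, total) in core_training.items():
    print(f"{agency} core training completed: {done} of {total} = {pct(done, total)}%")
for agency, (missed, total) in continuing_ed_unmet.items():
    print(f"{agency} continuing education not met: {missed} of {total} = {pct(missed, total)}%")
```

Running this reproduces the percentages reported in the text: 99 and 72 percent core-training completion, and 48 and 50 percent of contracting officers not meeting the continuing-education requirement.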
Table 3 presents continuing education results by agency. GSA regional officials told us that the contracting officers did not complete their required continuing education by the December 1999 time frame because a 40-hour class in October 1999 was cancelled due to a scheduling conflict. These officials told us that they have scheduled training by the spring of 2000 to help ensure that the region’s contracting officers meet their requirements. The VA Dallas medical center’s Chief of Acquisition and Materiel Management Service told us that he was unaware of the continuing education requirement; thus, he did not have a training plan for the four individuals who did not receive the required training. The GSA Regional Administrator said that his policy is to terminate warrants of those contracting officers who do not meet continuing education requirements after a 90-day grace period. Similarly, VA has a draft policy that states contracting officers’ warrants may be terminated at the discretion of the appointing official if they do not meet continuing education requirements. OFPP has not complied with the Clinger-Cohen Act of 1996, which requires OFPP to ensure that civilian departments and agencies collect and maintain standardized information on their acquisition workforces. Although in September 1997 OFPP tasked the Federal Acquisition Institute with developing a management information system to assist departments and agencies in collecting and maintaining standardized data, the system has not yet been developed. In the meantime, GSA and VA rely on automated systems that provide limited information and on decentralized, manual files that, according to agency officials, greatly impede their ability to oversee and plan training for their acquisition workforces. Although Clinger-Cohen was enacted in February 1996, OFPP did not issue a policy implementing the act’s acquisition workforce provisions until September 1997, when it issued Policy Letter 97-01. 
According to OFPP officials, the reason for the delay in issuing Policy Letter 97-01 was that members of the Section 37 Steering Committee did not agree on education requirements for contract specialists until May 1997. OFPP’s Policy Letter 97-01 tasked the Federal Acquisition Institute to work with agencies and OPM to develop a governmentwide management information system that would allow departments and agencies to collect and maintain standardized acquisition workforce information, including training data. This system is to conform to OPM’s Central Personnel Data File standards. These standards require that workforce data include such information as job classification series, grade level, service computation date, and education level. The Federal Acquisition Institute, after several consultations with the Section 37 Steering Committee, envisioned that the system would collect information on a contracting officer’s name, social security number, grade and job series, formal education, agency, core training, continuing education, and warrant level. In July 1998, the Federal Acquisition Institute requested OPM’s assistance in implementing this governmentwide management information system, and provided OPM with the initial data elements that it believed needed to be included in the system. In August 1998, OPM submitted to the Federal Acquisition Institute a concept paper that outlined its ideas for the system’s specifications. In this concept paper, OPM stated that it would build or oversee the construction of an Internet-based system, which would be linked to the Central Personnel Data File, and agreed that the system would contain, at a minimum, the data elements presented by the Institute. In September 1998, GSA (on behalf of the Federal Acquisition Institute) and OPM entered into a memorandum of understanding through which OPM agreed to develop the management information system for approximately $60,000 and deliver the new system in approximately 16 weeks. 
The Federal Acquisition Institute selected OPM to develop this system because of its expertise in developing, delivering, and maintaining automated personnel systems. However, OPM chose not to develop this system in-house, but instead, selected a private firm—Lexitech. In November 1998, the Federal Acquisition Institute, OPM, Lexitech, and other government officials met to discuss the system’s goals, site design, flowchart, storyboards, development, and population to be covered. In addition, this group decided on actions to take so the project could move forward. However, project records we reviewed indicate that little progress has been made since that November 1998 meeting due to the lack of agreement between the Institute and OPM on final system requirements and specifications. Although project records do not document substantial action taking place between November 1998 and May 1999 to resolve the situation, Federal Acquisition Institute and OPM officials told us they had taken action during this time period to continue developing the system. In May 1999, the Institute increased efforts to get the project moving, but as of December 1999, agreement had still not been reached and the system had still not been developed; however, OPM had spent about $30,000. Although an OPM official told us these funds were used to pay for OPM staff hours devoted to this project and for an approved contractor payment for original storyboard development, travel, and draft Internet web page development, he was unable to provide us with full documentation. OPM’s project manager told us that once the Federal Acquisition Institute and OPM agree on the requirements, Lexitech would need only a few weeks to develop the system. In December 1999, the Director of the Federal Acquisition Institute told us that she had asked OPM to provide her with a management plan by January 2000, which would provide her with the actions and time frames for completing project events. 
She stated that she wanted to begin testing the system by February 2000. In January 2000, OPM provided the Federal Acquisition Institute with a project plan that estimated the system’s completion by the end of April 2000. Subsequently, according to an OPM official, the Federal Acquisition Institute requested that a new project manager be assigned and a new firm be identified to develop the system. This official told us that OPM assigned a new project manager and has initiated the process for selecting a new firm to develop the system. This official also told us that the project plan would be revised and approved by the Federal Acquisition Institute and OPM once a new firm has been selected. This official further stated that it is unlikely that system testing will begin in February 2000, since a new firm is being sought to develop the system. However, he stated that OPM has given the candidate firms the April 2000 target date for completion. In February 2000, an OFPP official told us that OFPP, the Federal Acquisition Institute, and OPM have now reached agreement on how to move the project forward. This official also told us that OFPP has engaged senior OPM management personnel to ensure that both organizations focus on completing the project. In addition, OFPP expects that project development will be under way by the end of February 2000, according to this same official. GSA and VA officials told us that they have held off developing their own agency management information systems to comply with Clinger-Cohen because they were made aware that a system was being developed. According to GSA and VA officials, however, the limited information their present systems provide greatly impedes their managers’ ability to oversee and plan training for their acquisition workforces. 
Because it has a critical need for immediate access to timely and accurate acquisition workforce data, VA plans to design and implement a database on its own and deal with compatibility issues as the need arises, according to VA officials. In June 1992, OFPP issued Policy Letter 92-3 that required heads of executive departments and agencies to provide for a system for certifying and reporting the completion of all required training. In October 1998, the Federal Acquisition Institute provided senior procurement executives with guidance on the data to be collected to meet Clinger-Cohen requirements. Although OFPP and the Federal Acquisition Institute had provided agencies guidance and instructions on what data to collect, GSA and VA have relied on automated systems that provide limited information and on decentralized, manual files, which have resulted in a lack of complete, readily accessible information on workforce training. For example, in responding to our requests for information on training and certification (such as the number of agency personnel holding contracting officer warrants), both GSA and VA had to send queries to widespread field offices, resulting in weeks of delay in getting responses. Both GSA and VA reported that they primarily used revolving funds to finance the education and training of their acquisition workforces. While VA had reported some of its education and training funding requirements in its congressional budget justification documents for fiscal years 1998 through 2000, pursuant to Clinger-Cohen, GSA had only done so for fiscal year 2000 due to what GSA officials described as an administrative oversight. 
Although GSA identified acquisition workforce education and training funding in its fiscal year 2000 congressional budget justification documents, GSA officials told us they do not apply the limitation in Clinger-Cohen, which governs funds specifically appropriated for such training, to the revolving funds that are the primary source of financing this training at GSA. VA officials told us they were restricting the obligation of most of the funding amounts identified in the budget justification documents for acquisition workforce education and training, even though VA mainly used revolving funds to finance these activities. Neither VA’s nor GSA’s appropriations acts for fiscal years 1998, 1999, or 2000, nor the committee reports accompanying those acts, designated any specific amount of funds for the purpose of acquisition workforce education and training. Although both GSA and VA identified funding planned to be used for training their acquisition workforces for fiscal year 2000 in their budget documents, both appear to have understated the amounts they planned to use for this purpose. GSA identified about $2.8 million to educate and train its acquisition workforce in its fiscal year 2000 congressional budget justification documents. Only $27,000 of this amount was to come from annually appropriated funds, while revolving funds were identified as the source for the remainder of this funding. However, GSA documents indicate that it plans to use more than $2.8 million to educate and train its acquisition workforce. For example, GSA plans to use other funds to educate and train segments of its acquisition workforce that have already met their qualification requirements. We also found that some of GSA’s regional components did not provide an estimate to headquarters of the funding needed to educate and train their acquisition workforces. 
In addition, we noted that the funding estimates that GSA’s regions submitted to GSA headquarters did not consider the training needs of all their contracting officers. For example, according to a GSA official, the Public Buildings Service at GSA’s Southeast Sunbelt Regional Office (Atlanta, Georgia) only submitted budgeted amounts for contracting officers in the GS-1102 job series, even though it had contracting officers in other job series, because they considered only GS-1102s as part of the acquisition workforce. For fiscal years 1998, 1999, and 2000, VA estimated that it would use about $2.2 million, $2.3 million, and $2.3 million, respectively, for Office of Acquisition and Materiel Management sponsored acquisition workforce education and training. VA officials told us this budgeted amount would come from the “Supply Fund,” which is a revolving fund. VA officials told us they used money from the Supply Fund to provide (1) mandatory contracting officer training and continuing education for Office of Acquisition and Materiel Management personnel, (2) mandatory contracting officer training for all warranted personnel throughout the department, and (3) Office of Acquisition and Materiel Management sponsored continuing education for all acquisition personnel departmentwide. However, VA officials said that the Office of Acquisition and Materiel Management did not use the Supply Fund to provide continuing education for acquisition personnel in other units unless they sponsored the training. Instead, the VA officials said that other units, such as the Veterans Health Administration, used local facility appropriated funds for contracting officer noncore training and continuing education for their acquisition workforce personnel. Thus, funding for this training was not included in the amounts VA has identified for training its acquisition workforce in its budget documents. 
In addition, VA officials told us that their budget requests had not included enough funding to cover VA’s entire acquisition workforce. VA officials also said they were unaware of acquisition workforce personnel who were not receiving training, particularly those with basic-level warrants. Thus, they said that they had not previously asked for additional funding to educate and train these personnel; however, based on our findings, they said that they would be asking for additional funds for fiscal year 2001. Neither GSA nor VA tracked all acquisition education and training expenditures. However, according to GSA officials, GSA planned to implement sometime in fiscal year 2000 a mechanism that would allow tracking of such expenditures. For example, GSA’s Budget Office officials told us that GSA had planned, at the start of fiscal year 2000, to begin tracking all funding used for educating and training its acquisition workforce by using a special function code in GSA’s accounting system. However, GSA was delaying the implementation of this tracking mechanism until at least the second quarter of fiscal year 2000 because of Year 2000 (Y2K) computer concerns, according to the official. A VA official told us that as of December 1999, VA had no plans to implement a mechanism for identifying all funding used to educate and train its acquisition workforce. Although VA said that it had restricted the obligation of the revolving fund amounts identified in the budget justification documents for educating and training its acquisition workforce, a VA official told us that VA was unable to determine all of the actual expenditures used for this purpose. Due to the lack of data, neither GSA nor VA knows the extent to which its acquisition workforce meets training requirements, and neither is in a position to see that minimum training requirements are uniformly met throughout the agencies, as required by the Clinger-Cohen Act. 
GSA and VA have been working toward training their acquisition workforces, but neither has fully complied with applicable training requirements. As a result of our review, these agencies said that they would revise their acquisition core training programs to encompass personnel initially excluded—nonwarranted contracting officer representatives and contracting officer technical representatives. However, GSA was still not following OFPP policy on continuing education. In our opinion, adherence to OFPP’s continuing education policy would better equip GSA’s acquisition personnel to stay abreast of acquisition reforms and increase their acquisition knowledge and skills. Four years after the Clinger-Cohen Act’s passage requiring OFPP to ensure that agencies collect and maintain standardized information on their acquisition workforces, OFPP has not done so. Consequently, OFPP is not in a good position to evaluate the way agencies are implementing these provisions, as it is required to do. Both GSA and VA identified in their fiscal year 2000 congressional budget justification documents amounts for the education and training funding requirements for their acquisition workforces, as Clinger-Cohen requires them to do. Neither agency, however, provided complete information on the amounts it planned to use to educate and train its acquisition workforces, nor did either identify and track the actual amounts it expended for this purpose for fiscal years 1998 and 1999. GSA plans to implement a mechanism sometime in fiscal year 2000 that would allow tracking of such expenditures, and VA plans to determine the feasibility of tracking these types of expenditures. In our opinion, the lack of complete information on funding planned and used for educating and training the acquisition workforces at GSA and VA makes it more difficult for Congress to make well-informed decisions relative to the knowledge and skill levels of acquisition personnel at these two agencies. 
To ensure that the skills of their acquisition workforces are current, we recommend that the Administrator of GSA and the Secretary of Veterans Affairs fully adhere to OFPP’s policy associated with Clinger-Cohen’s training provisions by (1) establishing core training requirements for all contracting officer representatives and contracting officer technical representatives; (2) ensuring that all acquisition personnel receive the required core training and continuing education, consistent with OFPP’s policy; (3) directing appropriate agency personnel to collect and maintain accurate and up-to-date data showing the extent to which acquisition personnel meet training requirements; and (4) seeing that all funding that agencies plan to use for educating and training their acquisition workforces is identified in appropriate budget documents and that all related expenditures for such education and training are tracked. In addition, for OFPP to ensure that civilian departments and agencies collect and maintain standardized acquisition workforce information, we recommend that the Administrator of OFPP take action necessary to ensure that the Federal Acquisition Institute and OPM complete and implement the governmentwide management information system being developed to implement Clinger-Cohen’s requirements for standardized acquisition workforce information. We provided a draft of this report to the Administrator of GSA, the Secretary of Veterans Affairs, and the Directors of the Office of Management and Budget and Office of Personnel Management for their review and comment. On February 15, 2000, we received written comments from the Administrator of GSA. He agreed with our recommendations and outlined the actions GSA plans to take to implement them (see app. II). On February 9, 2000, we received written comments from the Department of Veterans Affairs’ Assistant Secretary for Planning and Analysis. 
He agreed that VA could improve its management of acquisition workforce training requirements. He also concurred with our recommendations and outlined the steps under way and planned for VA to conform with statutory and policy requirements (see app. III). In responding to our recommendation for seeing that all funding that agencies plan to use for educating and training their acquisition workforces is identified in appropriate budget documents and that all related expenditures for such education and training are tracked, VA said that it planned to ask OFPP for clarification of the Clinger-Cohen Act’s funding requirements. In discussions with an official from VA’s Office of Acquisition and Materiel Management about this issue, it became clear that there was some confusion concerning VA’s interpretation of our recommendation. After we explained that our recommendation applied only to funding for acquisition workforce education and training associated with Clinger-Cohen’s funding provision, and not to other education and training acquisition workforce members receive, this official told us that VA no longer needed to clarify the interpretation of Clinger-Cohen’s funding provision with OFPP. On February 4, 2000, OFPP’s Procurement Innovation Branch Chief and his staff provided oral comments on our draft report. They said that OFPP concurred with our recommendation to the Administrator and provided technical comments. We modified our report, where appropriate, to reflect their comments. On February 4, 2000, OPM’s Deputy Chief of Staff provided oral comments, saying that OPM is working diligently to move development of the data system forward. He also provided technical comments, which we have included in this report, as appropriate. Program officials at GSA and VA also provided technical comments, which we have reflected in this report, as appropriate. 
We are sending copies of this report to Senator Fred Thompson, Chairman, and Senator Joseph Lieberman, Ranking Minority Member, Senate Committee on Governmental Affairs; the Honorable David L. Barram, Administrator of GSA; Togo D. West, Jr., Secretary of Veterans Affairs; Jacob Lew, Director of the Office of Management and Budget; Deidre A. Lee, Administrator of the Office of Federal Procurement Policy; and Janice R. Lachance, Director of the Office of Personnel Management. We will make copies available to others upon request. Key contributors to this assignment are acknowledged in appendix IV. If you have any questions regarding this report, please contact me on (202) 512-8387 or Hilary Sullivan on (214) 777-5600. Our objectives were to determine whether (1) the General Services Administration (GSA) and the Department of Veterans Affairs (VA) had assurance that their acquisition workforces met training requirements as defined by the Office of Federal Procurement Policy (OFPP) and whether contracting officers at one GSA and one VA field location met each agency’s training requirements; (2) OFPP had ensured that federal civilian departments and agencies collected and maintained standardized acquisition workforce information, as required by the 1996 Clinger-Cohen Act; and (3) GSA and VA were taking actions to comply with the Clinger- Cohen Act’s funding requirements. To determine the actions GSA and VA had taken in ensuring that their acquisition workforce met training requirements, we researched and analyzed the Clinger-Cohen Act and OFPP policy letters to identify the relevant provisions and policies. We then interviewed GSA headquarters officials in Washington, D.C., and regional officials in Fort Worth, Texas, to determine their actions in implementing the acts’ training requirements and OFPP policies. We also interviewed VA headquarters officials in Washington, D.C., and VA medical center officials in Dallas, Texas, for the same purpose. 
To assess whether GSA and VA contracting officers met training requirements, as set out by each agency’s headquarters office (associated with relevant Clinger-Cohen requirements and OFPP Policy Letter 97-01), we examined data maintained at their headquarters for acquisition workforce training and discussed this issue with acquisition officials at each agency. In addition, we randomly selected a sample of 75 out of 324 contracting officer training records at GSA’s Greater Southwest Regional Office in Fort Worth, Texas, and all 26 records at VA’s medical center in Dallas, Texas. We adjusted our original sample size of 75 contracting officers to 70 at GSA’s Greater Southwest Regional Office because the list provided was inaccurate. We eliminated one individual’s name from the list of 26 contracting officers at VA’s Dallas medical center because this individual did not have a warrant. We examined the selected training records to assess whether contracting officers met training requirements, and we discussed training and documentation issues with acquisition officials at each agency’s field location. We reviewed GSA Inspector General reports related to the education and training of GSA’s acquisition workforce. For the reports associated with GSA’s Greater Southwest Regional Office, we discussed the audit findings with Inspector General staff. VA’s Office of Inspector General had not completed audits on these issues at the VA since the enactment of Clinger- Cohen. We selected GSA and VA for review because they have large numbers of contract specialists (GS-1102), handle large amounts of contracting dollars, and engage in decentralized activities. In fiscal year 1997, GSA and VA contract specialists constituted 23 percent of the 8,320 contract specialists in all federal civilian executive departments and agencies. They had 1,224 and 727 specialists, respectively, making GSA and VA the top two federal civilian agencies in terms of numbers of contract specialists employed. 
In addition, in fiscal year 1997, GSA and VA spent 18 percent of the $63.1 billion in federal contracting dollars ($7 billion and $4.5 billion, respectively) for civilian executive departments and agencies in the federal government. GSA and VA’s decentralized procurement activities also provided us the opportunity to review both headquarters’ and field activities’ efforts at educating and training their acquisition workforces. We conducted our review at GSA’s Greater Southwest Regional Office because, out of GSA’s 11 regional offices, the Greater Southwest Regional Office had the highest number of contract specialists and had contract specialists assigned to all of the region’s three services (Federal Supply Service, Federal Technology Service, and Public Buildings Service). We conducted our review at the VA Dallas medical center because it was the fifth largest VA facility in terms of the number of acquisition personnel and the largest VA facility within the state of Texas. We researched and analyzed the Clinger-Cohen Act to identify the provisions and policies related to OFPP’s requirement to ensure that agencies collect and maintain standardized acquisition workforce information, including training data. We interviewed OFPP officials in Washington, D.C., to obtain their views on these provisions and policies to identify their actions for ensuring that departments and agencies implement this requirement. We reviewed documents such as the project agreement, scope of work, cost reports, and electronic communications between OFPP, the Federal Acquisition Institute, Office of Personnel Management (OPM), and Lexitech (the private firm) to determine the status of the development of a governmentwide management information system that would allow departments and agencies to collect and maintain standardized information on the acquisition workforce, including training data, that conform to standards established by OPM for the Central Personnel Data File. 
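The workforce and spending shares cited above for fiscal year 1997 can be reproduced from the reported figures. A minimal sketch (the inputs are the counts and dollar amounts stated in the text):

```python
# Fiscal year 1997 figures reported in the text.
gsa_specialists, va_specialists = 1224, 727
civilian_specialists_total = 8320

gsa_dollars, va_dollars = 7.0, 4.5       # contracting dollars, in billions
civilian_contract_dollars = 63.1         # all civilian agencies, in billions

specialist_share = (gsa_specialists + va_specialists) / civilian_specialists_total
dollar_share = (gsa_dollars + va_dollars) / civilian_contract_dollars

print(f"Contract specialist share: {specialist_share:.0%}")   # about 23 percent
print(f"Contracting dollar share: {dollar_share:.0%}")        # about 18 percent
```

Both results match the percentages reported in the text, confirming that GSA and VA together accounted for roughly a quarter of civilian contract specialists and nearly a fifth of civilian contracting dollars.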
We also interviewed the Federal Acquisition Institute and OPM officials in Washington, D.C., and an OPM official in Macon, Georgia, to determine the actions they had taken to develop a governmentwide management information system. In addition, we interviewed GSA and VA officials in Washington, D.C., to determine the actions taken to collect and maintain standardized information on their acquisition workforces. To determine the actions GSA and VA had taken to fund and track the cost of educating and training their workforces, we reviewed agency budget development and congressional budget justification documents, and we interviewed GSA and VA officials at both Washington, D.C., headquarters and Fort Worth and Dallas, Texas, field locations. Also, we interviewed GSA and VA officials at both headquarters and field locations to obtain their views on Clinger-Cohen’s funding provisions and to identify any barriers to implementing the act’s requirements. We did not verify data in agencies’ automated information systems. We requested comments on a draft of this report from the Administrator of the General Services Administration, the Secretary of Veterans Affairs, and the Directors of the Office of Management and Budget and the Office of Personnel Management and made changes to the final report as appropriate. Steve D. Boyles, John E. Clary, Luis Escalante, Jr., Raimondo Occhipinti, Elliott C. Smith, and Joel Smith made key contributions to this report.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), Washington, DC; by calling (202) 512-6000; by fax at (202) 512-6061; or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the training of the acquisition workforce in certain federal civilian departments and agencies, focusing on whether: (1) the General Services Administration (GSA) and the Department of Veterans Affairs (VA) had assurance that their acquisition workforces met training requirements as defined by the Office of Federal Procurement Policy (OFPP) and whether contracting officers at one GSA and one VA field location met each agency's training requirements; (2) OFPP had taken action to ensure that civilian departments and agencies collected and maintained standardized acquisition workforce information, as required by the 1996 Clinger-Cohen Act; and (3) GSA and VA were taking actions to comply with Clinger-Cohen Act funding requirements. GAO noted that: (1) both GSA and VA have efforts under way to train their acquisition workforces; (2) however, neither had assurance that all members of their acquisition workforces had received core training and continuing education, as required by OFPP's policy; (3) neither agency had complete readily accessible information on the overall extent to which their acquisition workforces had received required training; (4) contrary to OFPP's policy, neither GSA nor VA had established core training requirements for some segments of their acquisition workforces--contracting officer representatives and contracting officer technical representatives who do not have authority to award contracts; (5) by reviewing agency training records and obtaining documentation directly from GSA's Greater Southwest Regional Office and VA's medical center in Dallas, GAO determined that 99 percent of GSA and 72 percent of VA contracting officers at these two locations met core training requirements that GSA and VA had established for such personnel; (6) however, only about half of GSA's and VA's contracting officers in these locations who were to have continuing education requirements completed by 
December 1999 had met those requirements by the due date; (7) to help explain why some officers had not completed the required training, agency officials cited conflicts in scheduling the training and a lack of awareness of training requirements; (8) OFPP has not yet ensured that civilian departments and agencies were collecting and maintaining standardized information, including training data, on their acquisition workforces, as required by Clinger-Cohen; (9) in September 1997, OFPP tasked the Federal Acquisition Institute to work with departments and agencies and the Office of Personnel Management (OPM) to develop a governmentwide management information system, including specifications for the data elements to be captured, to assist departments and agencies in collecting and maintaining standardized data; (10) system development was significantly delayed because the Institute and OPM had not reached agreement on final system requirements and specifications; (11) neither GSA nor VA identified all the funds it planned to use for acquisition workforce training in its congressional budget justification documents as required by Clinger-Cohen; (12) Clinger-Cohen provides that agencies may not obligate funds specifically appropriated for acquisition workforce education and training under the act for any other purpose; and (13) appropriations acts GAO reviewed for GSA and VA did not specify a funding level for acquisition workforce education and training.
|
The employee evaluation process is an important tool for influencing employee behavior. At IRS, as in other federal agencies, employee evaluations are the official documents used to support many personnel actions, including within-grade pay increases, performance awards, promotions, reductions-in-force, and adverse performance-based actions. As stated in IRS documents, the process is intended to accurately reflect employees’ performance, facilitate their development, and improve and enhance their work. For enforcement (and most other IRS) employees, the evaluation process is to include an annual formal written evaluation and a midyear progress review. The annual written evaluation involves a quantitative assessment of performance, which may be supported by narrative commentary. Supervisors who complete the evaluations are to address how well employees perform a number of critical job elements, which are job skills that must be performed at or above a set standard for an employee’s performance to be judged acceptable. The critical elements include technical skills—the specialized skills needed to process cases, such as workload management and case analysis—as well as customer relations, which involves interpersonal skills in dealing with taxpayers. The number of critical job elements on which an enforcement employee is evaluated generally has ranged from five to seven, depending on his or her job classification (e.g., tax auditor, revenue officer, or revenue agent). Each element is rated on a 5-point scale, with 1 being unacceptable and 5 being outstanding. A score of 3 is to be given for performance deemed fully successful. The evaluation forms for the types of enforcement employees we reviewed have similar formats and critical job elements, as the examples shown in appendix II illustrate. Progress reviews are also required, preferably at midyear, and are to be communicated by supervisors to each of their employees. 
IRS policy imposes several additional requirements. Notably, the supervisor who is responsible for assigning an employee’s work is also responsible for preparing and signing his or her evaluation. The evaluation must be reviewed, approved, and signed by a higher level manager. And to help ensure the accuracy and fairness of the rating, the supervisor who prepares the evaluation is to observe the performance of the employee during the rating period. In evaluating employee performance, supervisors are to exercise caution when describing employees’ skills and contributions vis-a-vis revenue production and efficiency, so as not to improperly emphasize the accomplishment of statistical or numerical goals. IRS policy prohibits rating officials from using enforcement statistics, such as the average amount of taxes assessed or collected, in employee evaluations. However, during hearings held by the Senate Committee on Finance in September 1997, witnesses alleged that IRS’ focus on enforcement statistics at the organizational level was encouraging enforcement officers to take unnecessary and inappropriate enforcement actions against taxpayers. IRS’ Internal Audit subsequently reviewed the use of such statistics by examination and collection supervisors and found an atmosphere largely driven by statistical measures. In November 1998, we reported that 75 percent of revenue agents, tax auditors, and revenue officers believed that enforcement results affected their evaluations.

To meet our reporting objectives, we reviewed the two most recent evaluations for each employee as of June 1998 in a statistically representative sample of 19,096 examination and collection frontline employees and the supplemental documentation supporting those evaluations. Because the confidence intervals for the different estimates vary in size, we report all of them in appendix IV. 
We also interviewed responsible IRS headquarters officials and 30 supervisors in 3 district offices and sent division chiefs in all 33 IRS district offices a survey on how supervisors allocated their time. We requested comments on a draft of this report from the Commissioner of Internal Revenue. In a letter dated September 17, 1999, we received his comments, which are discussed at the end of this letter and reprinted in appendix VIII. We did our work at IRS headquarters in Washington, D.C., and the Northern California, Kansas-Missouri, and Georgia District Offices between November 1998 and June 1999 in accordance with generally accepted government auditing standards. (See app. I for more details on our objectives, scope, and methodology.)

Our analysis of evaluations written when IRS’ old mission statement was in effect showed that, overall, written evaluations of enforcement employees emphasized revenue production and efficiency more than customer service. In addition, customer service comments, when made, often did not emphasize the importance of taking into account the taxpayer’s point of view or how well the taxpayer understood the tax issues being raised by the enforcement employee. As shown in table 1, about two-thirds of supervisors’ comments related to revenue production and efficiency, and one-third related to customer service. Looking only at the comments in the technical skill portion of the evaluations, comments on revenue production and efficiency outnumbered customer service by about four to one. Of the comments in the customer relations element, we estimate that about three-fourths related to customer service and one-fourth to revenue production and efficiency. In considering these results, it is important to point out that, to some extent, the design of IRS’ employee evaluations for enforcement personnel could lead supervisors to focus on revenue production and efficiency. 
This is because customer relations is only one of five or more equally weighted critical job elements upon which employees are evaluated. Because the other, more technical elements logically tend to involve considerations of revenue production and efficiency, the evaluation likely would focus on such issues. Our analysis also revealed that, overall, many of the customer service comments did not emphasize the importance of taking into account the taxpayers’ point of view or whether they understood the tax issues being raised by the enforcement employee. To illustrate, consider the following results on comments regarding employees’ assistance in helping taxpayers to better understand their tax issues. As shown in table 2, an estimated 53 percent of the employees received evaluations discussing their efforts to describe the law and regulations related to the examination and collection process to the taxpayer. An estimated 4 percent received evaluations discussing efforts to check on taxpayers’ understanding of these issues by asking taxpayers questions or soliciting their responses. Both approaches can improve taxpayer understanding. However, the latter approach—and comments as to its use or lack thereof—can also help reinforce the importance of targeting explanations to the needs of the taxpayer. Most of the comments regarding whether the employee applied the tax law with integrity and fairness also did not reflect the taxpayer’s point of view. As shown in table 3, an estimated 31 percent of the employees had comments in their evaluations discussing whether the employee balanced taxpayer’s interests with the government’s interests by listening to and considering the taxpayer’s position. 
Comments that better reflected the taxpayer’s point of view, such as those noting that the employee considered a variety of actions to meet taxpayer needs (for example, proposing an installment agreement to meet the tax liability) or took proactive action (for example, identifying tax credits or deductions the taxpayer was entitled to), were each made for about 6 percent of the employees. As shown in table 4, an estimated 38 percent of the employees received evaluations indicating that they listened to taxpayers, and 32 percent received evaluations indicating that they were courteous with taxpayers.

One supervisor’s comment illustrates the emphasis on revenue production:

“Over the last year the Service is emphasizing payments be obtained at the conclusion of the examination. It can truly be said that the agent has kept to this philosophy. The agent always seeks to obtain full payment of the deficiency, penalties, and interest. This shows a strong commitment to the Service programs.”

Further, the above comment was contained in the narrative for the customer relations critical job element and seems to equate good customer relations with success in obtaining full payment in every case. Another evaluation stated:

“You set clear, reasonable deadlines and follow-up on them promptly, usually with an appropriate collection tool rather than a phone call. Your success in that regard is referenced above. During the selection of cases for one entity review it was noted that 84% of your inventory was less than 5 months old. On another it was noted that only 4 of your cases had been assigned for longer than six months.”

While this discussion of the employee’s performance may be appropriate, it was not balanced with statements indicating the employee considered the taxpayer’s circumstances when setting deadlines and taking follow-up actions. 
Although the employee’s actions from the perspective of revenue production and efficiency may warrant discussion, given IRS’ new mission statement, these comments logically would need to be balanced with comments on the employee’s actions from the perspective of customer service. Appendix V contains examples of how supervisor comments can reflect the old versus new mission statements.

The current IRS evaluation process contains four features that provide supervisors with opportunities to reinforce the customer service orientation reflected in IRS’ new mission statement. The narrative portion of employee evaluations, midyear progress reviews, case reviews, and field visits provide supervisors with the flexibility to identify and provide feedback on employees’ customer service behaviors, as well as on their technical skills. Available evidence indicates that IRS supervisors could take greater advantage of these four features to emphasize customer service. They could use the narrative part of the evaluation more fully. They also could conduct more midyear progress reviews, case reviews, and field visits and sit-ins.

Employee annual evaluations are intended to facilitate employees’ development and improve and enhance their work. The narrative portion of an employee’s written evaluation provides flexibility to supervisors to focus on employees’ customer service skills. We estimated that over a 2-year period, more than 40 percent of the employees received evaluations with no narratives in one or both evaluations. An additional estimated 12 percent of employees received an evaluation that duplicated the narratives from the prior year, sometimes repeated word-for-word. A further 13 percent had some narrative but at least one evaluation with no narrative for at least one critical job element. Table 5 shows the extent of missing and duplicate narratives. 
According to the examination and collection division chiefs who completed our survey in IRS’ 33 district offices, enforcement supervisors supervise an average of 12 employees. To reduce the administrative burden associated with completing the evaluations, IRS and the National Treasury Employees Union (NTEU) agreed that supervisors could omit narratives from evaluations under two circumstances. Supervisors can omit all narratives for employees who have earned the same numerical rating in every critical job element as the prior year. In these cases, supervisors may revalidate the prior evaluation without having to prepare a new evaluation. Also, supervisors are allowed to omit narratives for critical job elements when employees receive a numerical rating of either 5 (outstanding) or 4 (exceeds fully successful) for those elements, and the numerical rating is the same as or higher than the prior year’s rating. IRS’ evaluation policies do not address the use of duplicate narratives.

Midyear progress reviews provide supervisors with opportunities to provide interim feedback on all aspects of case handling, including any deficiencies, relating to agency customer service (and other) goals. While supervisors are required to conduct midyear progress reviews, we estimated that about 65 percent of the employees’ files did not contain evidence that a midyear progress review was done. Also, for the employees for whom reviews were documented, an estimated 36 percent did not have reviews discussing the customer relations critical job element.

Supervisors are required to review a sample of each employee’s cases at least once a year. We estimated that the evaluations of about 30 percent of the employees did not contain evidence that they were supported by case reviews. 
These reviews are important because they provide supervisors with the opportunity to examine employees’ case documentation to determine whether employees’ case decisions were made in accordance with the agency’s policies and procedures, including those that relate to customer service. Also, we found that supervisors did not always take advantage of case reviews to review customer service. We estimated that 23 percent of employees for whom case reviews were documented did not receive reviews that addressed the customer relations critical job element.

According to IRS training documents, many experienced supervisors see significant benefits to field visits, which allow supervisors to observe the employee interacting with taxpayers and the employee’s application of the law, regulations, and procedures. One field visit is required during an employee’s first year, but after the first year, the frequency and need for visits are left to the supervisor’s judgment. We found that supervisors had not documented field visits for an estimated 66 percent of the employees and that for those employees for whom visits were documented, an estimated 18 percent had reviews with no indication that the customer relations critical job element was discussed.

More fully using the evaluation process features discussed above, especially field visits, would require reducing the time supervisors spend on their other administrative tasks. The 30 supervisors from 3 district offices that we visited estimated that they were spending an average of about 25 percent of their time on the 4 employee evaluation activities and about 29 percent of their time on clerical duties and other administrative and collateral duties. All but one of these supervisors we talked to indicated that direct observation of employees was the best method for evaluating customer service skills. 
However, 25 of the 30 supervisors said that they did not have time to spend in the field with their employees and also complete their other supervisory and administrative responsibilities. The supervisors’ administrative burden and its effect on their ability to manage their employees were also raised in a 1991 nationwide IRS survey of revenue officer supervisors and more recently by the Professional Managers Association. In a December 17, 1998, message to the Commissioner of Internal Revenue, the association stated, “One of the major concerns of frontline managers is the excessive administrative burden placed on them.”

Supervisors we interviewed suggested several options that would give them more time to spend in the field with their employees and on other employee evaluation activities. These options included reducing their administrative duties, providing clerical staff to take over some of their administrative duties, and reducing the number of employees that report to them. We did not evaluate the feasibility or impact of these alternatives. If the features were to be used more, IRS would need to consider the potential implications for the way in which supervisors allocate their time between these and other administrative tasks.

IRS has implemented a number of initiatives to promote customer service, setting the stage for the reform of IRS’ entire performance management system over the coming years. Thus far, IRS has revised its strategic goals; aligned them with its new mission statement; and introduced organizational performance measures that are to balance customer satisfaction, employee satisfaction, and business results. IRS has also taken several interim actions to promote customer service in evaluating enforcement employees. 
IRS’ new strategic goals are intended to promote customer service by (1) providing service to each taxpayer by such means as being prompt, professional, and helpful to taxpayers when additional taxes may be due; (2) providing service to all taxpayers by such means as increasing the fairness of compliance; and (3) increasing productivity by providing a quality work environment for employees. To evaluate how well it is achieving these new goals, IRS has also developed new organizational performance measures that are intended to balance customer satisfaction, employee satisfaction, and business results. Customer satisfaction is to be measured through written or telephone surveys to obtain taxpayers’ perceptions of how they were treated by IRS employees during interactions. Employee satisfaction is to be measured through annual employee surveys of work environment satisfaction. The quality aspects of business results are to be measured through samples of completed cases taken under its various operational quality review programs, while the quantity aspects are to be based on data collected on such outcome-neutral items as the number of cases handled. IRS is in the process of providing training to supervisors and employees on the new balanced measures, which emphasizes customer service. IRS also intends to revamp its entire performance management system as required by the Internal Revenue Service Restructuring and Reform Act of 1998. Performance management systems are broad systems for managing employee behavior that incorporate the evaluation process and other managerial actions. 
Office of Personnel Management regulations define performance management as the integrated processes agencies use to (1) communicate and clarify organizational goals, (2) identify accountability for accomplishing organizational goals, (3) identify and address developmental needs, (4) assess and improve performance, (5) measure performance for recognizing and rewarding accomplishments, and (6) prepare appraisals.

It is difficult to determine when IRS’ new system will become fully operational. As we have reported, IRS faces formidable challenges to achieve this and other reforms related to its ongoing modernization efforts. As we noted, IRS is attempting to implement all of the reforms in a comprehensive, rather than sequential, fashion. The integrated approach that IRS is using makes sense and has the potential to significantly improve the way IRS interacts with taxpayers. However, it also presents a significant challenge. At the same time IRS is attempting to reform its performance management system, it also is striving to revamp its business practices, restructure its organization, and implement new technology. Effectively implementing such a broad and complex set of interdependent changes will strain IRS’ management capacity. Having to make the transition while continuing to operate the existing tax administration process will strain the agency further. These factors and IRS’ poor track record for implementing reforms suggest that it could be years before a new performance management system is fully operational.

IRS recognizes that revamping its performance management system is a major effort. With respect to enforcement employees in particular, IRS has recognized that the evaluation process is an important part of any performance management system and may be a key to improving customer service. 
In laying out its long-term strategy for creating a customer-oriented work force, IRS has identified the need for change in the evaluation process for enforcement and other employees so that supervisors communicate what constitutes good customer service, ensure that employees adopt the new desired behaviors, and assess and develop employees’ customer service skills. IRS has also recognized that short-term improvements in employees’ customer service are needed and has advised managers to think of ways they could begin fostering the new orientation.

IRS has incorporated into the evaluation process a new performance standard relating to the fair and equitable treatment of taxpayers that employees must meet at a passing level to retain their jobs. The retention standard, which was required by the IRS Restructuring and Reform Act, says that employees must “Administer the tax laws fairly and equitably, protect all taxpayers’ rights, and treat each taxpayer ethically with honesty, integrity, and respect.” When evaluating employees, supervisors are to first determine whether the employee met the retention standard and, if the employee did, then proceed to evaluate the employee on the critical job elements. IRS officials expect that most employees will meet the retention standard. The new retention standard was put in place in July 1999. IRS issued guidance on how managers were to implement the retention standard. The guidance included examples of behaviors that would meet the standard and those that would not. Supervisors were also instructed to develop other examples of behavior tailored to their employees’ occupations. By July 31, 1999, supervisors were to have held individual or group meetings with their employees to discuss how employees would be evaluated under the standard.

In addition, the Collection Division has taken two steps to restructure its evaluation system for revenue officers to better reflect the increased value that IRS now places on customer service. 
The division has (1) revised its standard position description for revenue officers and (2) reduced the number of critical job elements for revenue officers from five to three. The three revised critical job elements of customer relations and assistance, case resolution, and case management take the place of time and workload management, case decisions, investigation and analysis, accounts maintenance, and customer relations. The revised elements became effective in July 1999. The guidance issued to IRS district collection officials on implementing the revised elements is not as comprehensive as the guidance issued for the new retention standard. The guidance instructs supervisors to meet with their employees to discuss the revised position description and critical job elements. It does not, however, explain or provide examples of how the revised elements are to be incorporated into the evaluation process or to be used to evaluate revenue officers. In a related effort, the Examination Division was exploring the extent to which critical job elements for revenue agents should reflect IRS’ auditing standards. By linking the critical job elements with the auditing standards, the Examination Division hoped to reduce the number of standards supervisors must refer to when evaluating employee performance. At the time we completed our fieldwork, the Examination Division had not yet established a time frame for completing the initiative, which began in April 1999. To fulfill its new mission statement, IRS will need to make a significant departure from the past supervisory practice of emphasizing revenue production and efficiency in employee evaluations to one that balances these goals with good customer service. IRS recognizes that making changes to its employee evaluation process will be important in bringing about cultural change and establishing customer service as an agency priority. 
IRS expects to change the process as part of a larger reform of its entire performance management system. However, because of the magnitude of the changes IRS is undertaking, it is uncertain when such a system will become fully operational and a new employee evaluation process put in place. In the meantime, IRS could take better advantage of opportunities within the current evaluation process to reinforce the importance of customer service among its frontline enforcement employees. We recommend that the Commissioner of Internal Revenue develop an interim approach for making better use of enforcement employees’ performance evaluations to develop and encourage good customer service. The approach could include providing guidance on the conditions under which supervisors should provide narrative for critical job elements and conduct field visits in assessing individual employees. As part of developing the approach, the Commissioner should determine whether to better enforce the existing agency policies requiring that supervisors conduct midyear progress reviews of all enforcement employees and review a sample of their caseloads annually. We also recommend that the Commissioner ensure that Collection Division supervisors are given more comprehensive guidance on how the revised elements are to be incorporated into the evaluation process and used to evaluate revenue officers. The Commissioner of Internal Revenue provided written comments on a draft of this report in a September 17, 1999, letter, which is reprinted in appendix VIII. We also met with senior Collection and Examination officials on August 26, 1999, to obtain oral comments from them on the draft report. We have incorporated technical comments from that meeting and the Commissioner’s letter where appropriate. Our evaluation of IRS’ comments focuses on those of the Commissioner, since his comments and those provided by IRS officials were generally consistent. 
The Commissioner’s letter emphasized that IRS is providing training to supervisors and employees on the balanced measurement system that would reinforce IRS’ increased emphasis on good customer service. Although IRS’ training on performance measures was not part of our review, we said in our testimony on modernization that—given the critical role that frontline employees will have in improving taxpayer service—such training will be important to effectively align IRS’ culture with its new mission statement.

More specific to our report, the Commissioner noted that the report provides timely information on IRS’ current evaluation system. He also agreed with our recommendation to develop an interim approach for making better use of enforcement employees’ performance evaluations to develop and encourage good customer service. He said IRS is developing a Manager’s Guide to Performance Management that would provide detailed information on the entire appraisal process, including monitoring and evaluating employee performance. He said that the guide, which is expected to be issued in October 1999, would specifically address the conditions under which managers should provide a narrative for critical job elements and conduct field visits. He also stated that mandatory midyear progress reviews are to be conducted. The Commissioner did not specifically say if he would enforce existing agency policies requiring supervisors to review a sample of employees’ caseloads. However, he stated that current IRS procedures require periodic reviews of employees’ caseloads, which we interpret to mean the current requirement for annually reviewing a sample of employees’ caseloads will be enforced. As our report points out, case reviews are important for determining whether employees’ case decisions were made in accordance with agency policies and procedures. 
The Commissioner stated that IRS plans to discuss with NTEU representatives our recommendation that more comprehensive guidance on the revised critical job elements be provided to Collection supervisors. He stated that the revised critical job elements represent a reordering of previous job elements under the current appraisal system and that IRS worked closely with NTEU to define the features of this appraisal system. Therefore, IRS plans to collaboratively work with NTEU to develop any additional guidance regarding incorporating the revised elements for revenue officers into the evaluation process. We believe that this is an appropriate approach to take. Our recommendation is based on the premise that customer service starts with the frontline employees, who need to know what IRS expects, and that supervisors are a key link in explaining those expectations. NTEU representatives should be knowledgeable of the sort of guidance supervisors need to evaluate revenue officers and can help IRS develop guidance about IRS’ expectations regarding acceptable performance under the revised critical job elements, including good customer service. We are sending copies of this report to Representative Charles B. Rangel, Ranking Minority Member, Committee on Ways and Means; Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, Committee on Ways and Means; various other congressional committees; the Honorable Lawrence H. Summers, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; and other interested parties. We will also make copies available to others on request. If you have any questions, please contact me at (202) 512-9110 or Ralph T. Block at (415) 904-2000. Other major contributors are acknowledged in appendix IX. 
Our objectives in this report are to (1) determine the relative emphasis on revenue production, efficiency, and customer service in enforcement employees’ annual written evaluations; (2) identify features of the evaluation process that might be used to greater advantage to reinforce the importance of customer service; and (3) describe IRS initiatives to promote customer service, including those to encourage enforcement employees to be taxpayer oriented. To determine the relative emphasis on revenue production, efficiency, and customer service comments in annual evaluations written under IRS’ former mission statement, we reviewed the two most recent evaluations for each employee as of June 1998 in a statistically representative sample of 19,096 examination and collection frontline employees. For comments on revenue production, we included comments such as those discussing (1) dollars assessed or collected by the employee, (2) number of cases in which the taxpayer agreed with IRS’ assessment, or (3) use of collection tools by the employee to secure payment from the taxpayer. For comments on efficiency, we included comments such as those discussing (1) timeliness and output, such as the number of overage cases, which are cases that have been in inventory for more than a certain length of time, and (2) the average number of hours needed to complete work on a return. For customer service comments, we included comments on the extent to which the employee (1) helped taxpayers to understand and meet their tax responsibilities and (2) applied the law with integrity and fairness. We also included a category for interpersonal skills because IRS has emphasized that quality interactions with taxpayers are an important component of customer service. To validate our approach, we discussed our criteria with IRS headquarters officials in the Collection and Examination Divisions, who agreed that it accurately captured their new customer-service orientation. 
Appendix III provides a more detailed explanation of our sampling methodology, and appendix IV provides a summary of the results of our analysis. To identify features in the current evaluation process that could be used to reinforce the importance of customer service, we reviewed the same two evaluations mentioned above. We counted the number of evaluations without narrative descriptions, and the number of evaluations that contained evidence that they were based on field visits, case reviews, and midyear progress reviews that included comments on the customer relations critical job element. To strengthen our understanding of how supervisors allocated their time, we interviewed 30 supervisors from the Northern California, Kansas-Missouri, and Georgia District Offices, which we chose because of their proximity to our offices, and sent a survey to examination and collection division chiefs in IRS’ 33 district offices. To obtain data describing any IRS initiatives to promote customer service, we interviewed IRS headquarters officials, attended IRS training sessions introducing IRS’ new initiatives, and reviewed draft documents describing the new initiatives. Our review was subject to some limitations. Our choice of evaluative statements as supporting revenue production, efficiency, and customer service, and our analysis of supervisors’ written comments required us to make judgments that were, in part, subjective. To maximize the objectivity of our analysis, we (1) obtained IRS’ concurrence that the categories we used to characterize the evaluative statements were appropriate and (2) conducted two separate and independent assessments of each sampled evaluation. When differences arose, a collaborative approach was used to resolve them. Although we did not verify responses from our survey of division chiefs, we did discuss supervisory and administrative responsibilities with 30 supervisors in the field. 
The evaluation forms for revenue officers, revenue agents, and tax auditors are very similar and have three parts. The first part provides basic information on the employee; assesses the employee’s competence level; and provides an overall rating of the employee, such as outstanding or fully successful. The second part, shown in figures II.1, II.2, and II.3, is tailored to the job classification. It lists the critical job elements and provides the supervisor’s numerical rating of how well the employee performed. The third part contains the narrative to support the numerical rating. We have not included a sample for this part because there is no official document for the narrative part of the evaluation. The supervisors attach as many typed or handwritten pages of narrative as they feel are necessary. This appendix discusses the sampling methodology we used to determine the extent to which evaluations referred to customer service and the extent to which supervisors used various features of the current evaluation system to monitor and give feedback on employees’ customer service skills. To minimize disruption of IRS operations, we used the same sample for this report that we drew for our report entitled IRS Personnel Administration: Use of Enforcement Statistics in Employee Evaluations (GAO/GGD-99-11, Nov. 30, 1998). To determine the extent to which (1) evaluations referred to revenue production, efficiency, and customer service and (2) supervisors used various features of the current evaluation process, we reviewed the evaluations of a sample of 300 IRS employees from the 3 enforcement employee groups of interest: tax auditors, revenue agents, and revenue officers. IRS managers are not required to write performance narratives for every rating dimension for every employee each year. To review more narratives, two ratings were requested from IRS for each employee in the sample. 
The results presented in the report reflect only employees who received two performance evaluations during the period of our review. From our sample of 300 IRS employees, we received usable responses for 267 employees for a response rate of approximately 89 percent. We eliminated all nonrespondents, including those for whom we did not have two evaluations or who functioned in a specialized capacity, such as computer audit specialist (known ineligible), and those for whom we did not receive responses (unknown eligibility). Disposition of the sampled cases is provided in table III.1. After weighting the responses to account for selection probabilities and nonresponse, we were able to make estimates of the percentage of IRS employees who received a narrative referring to customer service in at least one of their two most recent employee evaluations prepared for the period ending June 1998. In addition, we were able to make estimates of the percentage of employees whose files indicated they received field visits, case reviews, and midyear progress reviews referring to customer service, as well as the number of evaluations lacking narrative for one or more critical job elements. Because we reviewed a statistical sample of employee evaluations, each estimate developed from the sample has a measurable precision or sampling error. The sampling error is the maximum amount by which the estimate obtained from a statistical sample can be expected to differ from the true population value being estimated. Sampling errors are stated at a certain confidence level—in this case, 95 percent. This means that the chances are 19 out of 20 that if we reviewed evaluations for all IRS employees in the groups of interest, the true value obtained for a question on these evaluations would differ from the estimate obtained from our sample by less than the sampling error for that question. 
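The sampling-error arithmetic just described can be illustrated with a short calculation. The sketch below is ours, not GAO’s: it assumes a simple random sample with a finite population correction and uses a hypothetical 60 percent estimate, whereas GAO’s actual estimates were also weighted for selection probabilities and nonresponse.

```python
import math

def sampling_error(p, n, population, z=1.96):
    """95-percent sampling error for an estimated proportion p from a
    simple random sample of size n, with a finite population correction.
    Illustrative only; the report's estimates also reflect weighting."""
    fpc = (population - n) / (population - 1)
    return z * math.sqrt(fpc * p * (1 - p) / n)

# Hypothetical example: suppose 60 percent of the 267 usable responses,
# drawn from a population of 19,096 employees, showed some attribute.
me = sampling_error(0.60, 267, 19096)
print(f"estimate: 60% +/- {me * 100:.1f} percentage points")  # about 5.8 points
```

The confidence intervals reported in appendix IV vary in width partly because, as the formula shows, the error term depends on the estimated proportion itself as well as on the sample size.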
Because the confidence intervals for the different estimates vary in size, we report all of them in appendix IV.

This appendix discusses the criteria we used to categorize narrative statements in evaluations of enforcement employees and the results of our analysis of those data. We reviewed the two latest evaluations for the period ending June 1998 for each employee in a statistically representative sample of 267 of 19,096 examination and collection enforcement employees. For each narrative describing performance in a critical job element, we documented comments on (1) revenue production, (2) efficiency, and (3) customer service. Comments about revenue production included statements discussing the dollars assessed or collected by the employee, the number of cases in which the taxpayer agreed with IRS’ assessment, or the use of collection tools by the employee to secure payment from the taxpayer. Comments on efficiency included statements discussing timeliness and output, such as the number of overage cases, and the average number of hours needed to complete work on a return. Customer service comments were categorized as comments regarding (1) helping taxpayers understand and meet their tax responsibilities and (2) applying the tax laws with integrity and fairness. We also included a category for interpersonal skills because IRS has emphasized that quality interactions with taxpayers are an important component of customer service. Comments on skills that we looked for in determining whether the narrative contained a customer service comment are listed below. IRS headquarters officials agreed that these behaviors were appropriate for the customer service critical job element.

The following are comments on the extent to which the employee helped taxpayers understand and meet their tax responsibilities:

Employee asks questions to identify taxpayer needs.
Employee explains the examination or collection process to taxpayer.
Employee provides Publication 1 to taxpayer without explanation.
Employee explains taxpayer rights to taxpayer.
Employee checks taxpayer’s understanding of issues involved, process involved, and what’s expected by asking questions or soliciting a response from the taxpayer.
Employee looks for ways to improve taxpayer’s future compliance.

The following are comments on the extent to which the employee applied tax law with integrity and fairness:

Employee applies law objectively.
Employee considers a variety of actions to try to meet the taxpayer’s need.
Employee takes proactive action in favor of the taxpayer.
Employee balances taxpayer interest with government interest.

The following are comments on the employee’s customer service interpersonal skills:

Employee treats taxpayer with respect.
Employee treats taxpayer fairly.
Employee is courteous and tactful in dealings with taxpayer.
Employee is well prepared or organized for taxpayer contact.
Employee responds quickly to taxpayer’s inquiries or problems.
Employee listens to the taxpayer.

We also documented additional comments about an employee’s interaction with the taxpayer, such as (1) the employee is firm with or demands payment from the taxpayer and (2) the employee sets time frames or limits for the taxpayer. The critical job elements are different for revenue agents, tax auditors, and revenue officers. In order to discuss the narrative comments for all three employee types together, we grouped the critical job elements for each employee type into five skill groups, as shown in table IV.1. IRS Examination and Collection officials agreed with our grouping of the job elements into those skill groups. Tables IV.2 through IV.12 present the results of our analysis of narrative comments made by supervisors in our sample of enforcement employees. We analyzed two evaluations for each employee. In tables IV.3 and IV.4, we combined the data from the two evaluations into one record for each employee.
Note 1: Confidence intervals are bracketed. This appendix provides examples of written comments by supervisors from evaluations of the enforcement employees reviewed for this report. The comments are grouped in two categories: (1) comments on revenue production and efficiency that reflect IRS’ old mission statement and (2) comments on customer service behaviors that reflect IRS’ new mission statement. At the beginning of each excerpt from an employee evaluation, we note which of the critical job elements the statements are from. “Time spent, both in terms of hours applied and months in process, was on the high end of an acceptable range. Decisions need to be made in a more timely manner and weighed against the ultimate tax potential of the issues being developed.” “You do not hesitate to use the full array of collection tools to resolve a case or move it along. Summonses, prompt assessments, levies, and seizures have all been frequently used with great success throughout the rating period . . . . You have maintained a vigorous program of appropriate enforcement with many seizures resulting in full payment and some others going to Chapter 11. These have included restaurants, an attorney’s office, a bakery, a social club and several vehicles.” “You set clear, reasonable deadlines and follow-up on them promptly, usually with an appropriate collection tool rather than a phone call. Your success in that regard is referenced above. During the selection of cases for one entity review it was noted that 84% of your inventory was less than 5 months old. On another it was noted that only 4 of your cases had been assigned for longer than six months.” “On another case, she found that the taxpayers had no mortgage on their home, she dealt with the taxpayers and convinced them that they needed to take out a mortgage. 
They took out that mortgage and fullpayed the liability.” “Your case files show that you consistently demand full payment, warn T/Ps of enforcement and document Publication 1. receipt. Taxpayers’ rights are fully observed with respect to lien filing and final notice issuance before levy. During the year you have had frequent enforcement activity with BMF cases . . .. The fact that so many of these led to full payment is indicative of your proper direction and refusal to accept less. With many of these types of cases, effective customer relations means standing firmly behind your correct decisions and you have done that very well.” “ does a very good job in developing pertinent information through the interview process. On several cases was able to pick up subsequent and related returns, get agreement, collect deficiency and close returns out in a short time frame.” “Your attention to the aspects of this element has remained high. You continue to observe the rights of taxpayers by professionally demanding full payment and/or delinquent returns. You also continue to consistently explain and warn of enforcement actions, which you also document fully. You conduct yourself in a very businesslike way and demonstrate an industrious manner.” “Your personal contacts and discussions are conducted in a firm business-like and professional manner. The customer relation aspects of the -A- and -B- case examinations are significant. The examinations were conducted in a timely manner with documented efforts to maintain the forward momentum of the examinations. You applied good time saving techniques, conducted the majority of the audit at the taxpayer’s place of business and, in general, conducted the examinations in a prompt and efficient manner. In both instances, you proposed significant adjustments as a result of your audit efforts, and secured the taxpayer’s agreement and checks in full settlement of the resulting tax, which, again, was significant. 
Not only did you save the taxpayer from additional charges by collecting the tax on the spot, but you also supported the Servicewide objective of maintaining a high collectibility rate, thereby reducing costs of collecting taxes due.” “You approach your audits in an objective manner and always consider the taxpayer’s position on issues that are unagreed. You generally cite the tax law for each adjustment you make ….You document the taxpayer’s position on unagreed issues and explain to them your position and the tax law’s applications.” “You make a point to explain all the facts, apply the proper code section, regulations, and include numerous cases to support your position and conclusion. You are able to identify factual differences between your issues and those in court cases and rulings. You ensure taxpayer is in agreement as to the facts and the only disagreement is the question of the law.” “You are open minded when dealing with your customers and willing to listen to their point of view as well as other information they may provide prior to making case decisions. Overall the case decisions you have made as well as the information you share have been presented in a professional and understandable manner.” “You have a difficult inventory, inhabited by difficult taxpayers and even more difficult exasperating representatives. In spite of this, you handled all of your customers well. You were extremely fair and did an excellent job in balancing the rights of your taxpayers with the protection of the interests and revenue of the Service . . . .You projected an excellent image of the Service.” “You have displayed a very helpful and courteous attitude towards taxpayers and have demonstrated that you are willing to take every step possible to ensure that the Service is represented to the public in the best possible light. 
You treat everyone you come into contact with both within the Service and outside of the Service, in a respectful manner, which not only enhances agreed cases but also contributes in a significant way to positive customer relations. Overall you are very good at securing cooperation during the course of examinations, thus facilitating case closings and demonstrating skill in explaining findings and conclusions with technical competence, while also effectively listening and considering the taxpayer’s point of view . . . In your discussions with taxpayers or their representatives, you will always disclose all facts and will never misrepresent the Service policies or interpretations of case law. You carefully listen to the opposing view and will tell the taxpayer that you need to research further and will get back to him in an expeditious manner. You employ a tactful manner in discussions of controversial tax issues while at the same time demonstrating your technical knowledge, always in attempting to resolve cases at the lowest possible level.” This appendix provides the results of our review of IRS enforcement employee evaluations to determine how often supervisors used various features of the evaluation process to support their written evaluations and the extent to which supervisors addressed customer service when using them. This appendix provides data on the results of our survey of 30 supervisors selected by IRS in 3 district offices (see table VII.1) and examination and collection division chiefs in all 33 IRS district offices (see table VII.2). As shown in the tables, both supervisors and division chiefs agree that supervisors, the immediate managers of frontline enforcement employees, spend little time on field visits and a significant amount of time on clerical, administrative, and collateral duties. In addition to those named above, Wendy Ahmed, Robert V. Arcenia, Benjamin Douglas, Suzy Foster, Ronald J. Heisterkamp, Sidney H. 
Schwartz, Sam Scrutchins, Jonda Van Pelt, and John N. Zugar made key contributions to this report.
|
Pursuant to a congressional request, GAO reviewed the extent to which the Internal Revenue Service (IRS) employee evaluation system can support the new mission statement during the period IRS will need to revamp its performance management system, focusing on: (1) determining the relative emphasis on revenue production, efficiency, and customer service in enforcement employees' annual written evaluations; (2) identifying features of the evaluation process that might be used to greater advantage to reinforce the importance of customer service; and (3) describing IRS initiatives to promote customer service, including those to encourage enforcement employees to be taxpayer oriented. GAO noted that: (1) IRS could take advantage of opportunities within the evaluation process to reinforce the importance of customer service among its frontline enforcement employees; (2) there are a number of reasons for doing so; (3) most importantly, the evaluation process is not aligned with IRS' new mission statement because it emphasizes revenue production more than customer service; (4) also, it is uncertain when a new performance management system that IRS is planning will become fully operational; (5) enforcement employees' two most recent written evaluations for the period ending June 1998 emphasized their revenue production and efficiency skills more than their customer service skills; (6) available evidence indicates that four features of the evaluation process could be used to greater advantage to reinforce the importance of customer service among enforcement employees; (7) if the features were to be used more, however, IRS would need to consider the potential implications for the way in which supervisors allocate their time between these and other administrative tasks; (8) the narrative portion of an employee's written evaluation provides flexibility for supervisors to focus on employees' customer service skills; (9) midyear progress reviews, which are required, provide
supervisors with opportunities to give interim feedback on aspects of case handling in relation to customer service (and other) goals; (10) mandatory reviews of samples of completed cases present another opportunity for supervisors to comment on customer service skills because these are ex post facto examinations of documents prepared by employees to support their case decisions; (11) field visits present an excellent opportunity to reinforce customer service; (12) IRS has implemented a number of initiatives to promote customer service; (13) it has: (a) revised its strategic goals; (b) aligned them with its mission statement; and (c) introduced organizational performance measures that are to balance customer satisfaction, employee satisfaction, and business results; (14) in the meantime, IRS has taken several interim actions to encourage enforcement employees to be taxpayer oriented; and (15) it has incorporated into the employee evaluation process a new retention standard relating to the fair and equitable treatment of taxpayers that employees must meet at a passing level to retain their jobs.
|
Recent fire seasons have shown that past fire suppression policies have not worked as effectively as was once thought. In fact, they have had major unintended consequences, particularly on federally owned lands. For decades, the federal wildland fire community followed a policy of suppressing all wildland fires as soon as possible. As a result, over the years, brush, small trees, and other vegetation accumulated that can fuel fires and cause them to spread more rapidly. This combination of accumulated underbrush and rapidly spreading fires heightens the potential for fires to become catastrophic. The buildup of excessive underbrush is not the only cause of catastrophic wildfires, however. The weather phenomenon known as La Niña, characterized by unusually cold Pacific Ocean temperatures, changed normal weather patterns when it formed in 1998. It caused severe, long-lasting drought across much of the country, drying out forests and rangelands. This drought is cited by some as one of the major causes of the 2002 catastrophic wildland fires, which nearly surpassed those of 2000. BLM, the Fish and Wildlife Service, the National Park Service, and the Forest Service manage about 700 million acres, or 96 percent of all federal lands. In addition, Interior’s Bureau of Indian Affairs manages another 55 million acres. Most federal lands in the 48 contiguous United States are located in 11 western states, many of which have seen a dramatic surge in population over the last two decades, complicating the management of wildland fires. New development is occurring in fire-prone areas, often adjacent to federal lands, and creating a wildland-urban interface—an area where structures and other human development meet or intermingle with undeveloped wildland. This relatively new phenomenon means that more communities and structures are threatened by wildland fire and by potential postfire effects, including increased erosion and flooding.
Interior agencies and the Forest Service have undertaken postwildfire measures aimed at reducing potential postfire effects for several years. Since the early 1960s, BLM has had a program to curb damages often associated with wildfires—soil erosion and potential changes in vegetation. Similarly, the Forest Service has implemented postfire measures, such as seeding, since the 1930s. According to a Forest Service analysis of such measures implemented between 1973 and 1998 in the western United States, more than $110 million, in total, has been spent on treating burnt lands. Furthermore, postfire expenditures have increased substantially, especially during the 1990s, as the number of Forest Service acres that burn annually increased and as the Forest Service used treatments more extensively. This finding is consistent with Interior’s analysis of emergency stabilization fire treatments on BLM lands. Similarly, according to Fish and Wildlife Service officials, even though it has undertaken postwildfire measures for several years, its policy on what measures are appropriate has evolved from measures aimed primarily at “keeping the soil in place” to those having additional functions such as combating invasive or noxious weeds or plants. Responding in the aftermath of the disastrous 1994 fire season, when several lives were lost, Interior, the Forest Service, and other federal agencies undertook an extensive interagency review and revision of federal fire management policies. The resulting 1995 Federal Wildland Fire Policy and Program Review proposed a set of uniform federal policies to enhance effective and efficient operations across administrative boundaries and improve the agencies’ capabilities to meet challenges posed by wildland fire conditions. Large-scale wildfires continued to burn throughout the United States, with severe fire seasons in 1996, 1999, and 2000. 
Following the 2000 wildland fires, the administration asked USDA and Interior to recommend how best to respond to the 2000 fires and how to reduce the impacts of such fires in the future. The resulting report—the National Fire Plan—recommended increased funding for several key activities, such as suppressing wildland fires and reducing the buildup of unwanted hazardous fuels. The report also recommended expanded efforts to restore burnt lands because some of the fires burned with such intensity that they drastically changed ecosystems, and, without intervention, these ecosystems would recover slowly. The report recognized two key aspects of treatment activities: short-term treatments to remove hazards and stabilize soils and slopes, such as constructing dams to hold soil on slopes, and longer-term treatments to repair or improve lands unlikely to recover naturally from severe fire damage by, for example, reforesting desired tree species. To set priorities, restoration was to be undertaken on burnt lands that could affect public health and safety, as in the case of lands used as sources for domestic water supplies—that is, municipal watersheds; unique natural and cultural resources, such as salmon and bull trout habitat, and burnt land susceptible to the introduction of nonnative invasive species; and other environmentally sensitive areas where economic hardship may result from a lack of reinvestment in restoring damaged land, such as land used for recreation and tourism. To fund the National Fire Plan, Congress appropriated $2.9 billion for the two departments’ fiscal year 2001 wildland fire needs—an increase of $1.4 billion over the departments’ prior year funding of $1.5 billion. Of the $2.9 billion appropriated in 2001, $227 million was to be used for treating burnt lands. For fiscal year 2002 wildland fire needs, Congress appropriated $2.3 billion for the two departments and specified that $103 million was to be used for treating burnt lands. 
To carry out national fire plan goals and objectives, including those for treating burnt lands, Interior and the Forest Service have each designated national fire plan coordinators. To achieve more consistent and coordinated efforts in implementing the Federal Wildland Fire Policy and the National Fire Plan, and in response to a recommendation made by the National Academy of Public Administration, the Secretaries of Agriculture and of the Interior established a Wildland Fire Leadership Council in April 2002. Composed of members of both departments, the council is charged with, among other things, coordinating efforts to restore ecosystem health and monitoring performance. Within the agencies of Interior and the Forest Service, wildland fire activities are largely carried out by local land units. Within Interior, BLM’s local land units include district or field offices; the Fish and Wildlife Service’s and the National Park Service’s local land units consist of facilities, refuges, or parks; and the Bureau of Indian Affairs’ local land units consist of agencies. The Forest Service’s local land units consist of national forests and grasslands. BLM’s state offices oversee its local land units, while regional offices oversee the local land units of the Bureau of Indian Affairs, the Fish and Wildlife Service, the National Park Service, and the Forest Service. Interior and USDA have different policies and procedures to assess whether burnt lands need to receive any short-term or longer-term treatments following wildland fire. Interior has one overall policy and procedure for its four land management agencies to determine the need for both short- and longer-term treatments. USDA’s Forest Service has separate policies and procedures for assessing the need for short-term emergency stabilization treatments immediately following a wildland fire and for longer-term nonemergency treatments for rehabilitating burnt lands.
Interior and the Forest Service have attempted to adopt the same policies and procedures for treating burnt lands, even though the National Fire Plan does not require them to do so, and recently agreed to work toward standardizing certain aspects of their programs. Under Interior’s policy and procedure for implementing its emergency stabilization and rehabilitation program to treat burnt lands, its agencies are to take four key steps: (1) assess burnt lands to determine whether treatments should be taken to stabilize or rehabilitate them, (2) identify treatments when actions are considered necessary, (3) approve and fund necessary treatments, and (4) implement treatments once funding is available. Local land unit managers are responsible for having burnt lands assessed to determine whether stabilization or rehabilitation is needed. Interior recommends that these managers start the process before a fire is contained in order to identify any emerging issues, conduct a preliminary risk analysis, and ensure a smooth transition from fire suppression to emergency stabilization and rehabilitation. Local land unit managers decide whether an intensive assessment of the burnt lands is warranted. In most cases, these managers decide that no such assessment is needed because they believe that the burnt lands pose no risk and that the lands will recover on their own within a relatively short period. If local land unit managers decide that an intensive assessment is warranted, they assemble an interdisciplinary team from the local land unit to assess the burnt lands and, where appropriate, propose treatment. The team’s composition varies according to the complexity of the fire and the availability of personnel with different skills and backgrounds. In general, Interior’s interagency guidance recommends that teams be composed of staff specializing in, for example, wildlife, ecology, rangeland, soils, and watersheds.
The guidance also suggests that managers include expertise from cooperating agencies’ offices, especially when needed skills are not available within the local office. The agencies can also have state or regional staff assist local teams. While the teams are composed of agency officials, they can and do consult, as needed, with other organizations and individuals, including those from local communities. In some instances, wildland fires may encompass multiple agencies’ lands, result in burnt conditions that are beyond the capability of the local staff to assess, or place many valued resources at risk. In these situations, the local land unit manager can ask Interior to deploy one of two interagency teams to assess large, multijurisdictional wildland fires. Interior’s national wildland fire management office must approve any request for assistance. These teams include specialists from each of the affected agencies and represent a wide variety of skills. In 2000 and 2001, these multiagency teams were deployed eight times to assess fires we included in our review. Both local and multiagency teams evaluate whether and what kinds of treatments are needed. They review any applicable land or resource management plans for the affected land management units to ensure that any recommended treatment action will be compatible with these plans. The teams also review other available data that may help identify resources at risk, including data on cultural resources; threatened and endangered species; vegetation inventories, including information on invasive species; and soil types. Upon completing their field inspections, teams brief local land unit managers on whether and what type of treatments may be appropriate. If the local land unit managers decide to proceed with treatment, they direct the team to prepare a treatment plan, which includes, among other things, a summary of activities and costs.
In developing these plans, the team must consider the requirements of the National Environmental Policy Act and any other relevant statutes. In general, a team requires about 2 to 3 weeks to review the necessary land and resource management plan, data associated with the wildland fire, and any other data that may identify resources at risk; conduct the site inspection; and prepare the treatment plan. While Interior has a single process and uses the same funds and plans to identify both emergency stabilization and rehabilitation treatments, it recognizes that the treatments are intended for different purposes. Emergency stabilization treatments include those to (1) stabilize and prevent unacceptable degradation to natural or cultural resources, (2) minimize threats to life or property, or (3) repair, replace, or construct improvements to prevent land or resource degradation. Rehabilitation treatments include those to repair or improve lands unlikely to recover naturally. While Interior’s guidance indicates that plans are to identify treatments undertaken for emergency stabilization purposes as opposed to rehabilitation, our review of Interior’s emergency stabilization and rehabilitation plans for calendar year 2000 and 2001 fires indicates that they do not always make such a distinction. Interior’s guidance also states that both emergency stabilization and rehabilitation treatments are to be designed to be cost-effective and to meet treatment objectives. The agencies differ in how quickly they require that treatment plans be completed—from 5 days to 1 month. Once the treatment plan is completed, the Interior agencies must approve it, usually within 1 to 2 weeks. The agencies’ processes for approval vary, depending upon the cost of the treatment. For example, BLM has delegated approval authority for plans of less than $100,000 to its state offices, while its national office must approve plans of $100,000 or more. 
In contrast, the National Park Service does not delegate any approval authority to its local land management units; its regional offices approve plans of less than $300,000, while its national office approves plans of $300,000 or more. When a treatment plan and funding are approved, the local land unit officials are generally responsible for having the treatments specified in the plan implemented. Interior requires that treatments be implemented within 3 years. The Forest Service distinguishes between short-term emergency treatments to stabilize lands burnt by wildland fires and longer-term rehabilitation treatments. Its process for short-term treatments is similar to Interior's. Under this process, local land units are responsible for assembling interdisciplinary teams of agency officials to survey fires that are 300 acres or larger to determine if emergency conditions exist and, if so, whether treatments are needed. Forest Service teams can also consult with other agencies and individuals, as necessary. The Forest Service does not have a national team to assess large, multijurisdictional fires. However, Forest Service staff are members of Interior's interagency teams, and these teams have assessed fires on National Forest System lands. The Forest Service's rehabilitation process, however, differs from Interior's. Under the Forest Service's emergency stabilization process, local land units are to undertake only those treatments necessary to alleviate emergency conditions following wildfire. These treatments include those necessary to protect life and property and to prevent additional damage to resources. The Forest Service directs that treatments be undertaken only when an analysis of risks shows that planned actions are likely to reduce risks significantly and are cost-effective.
Further, because the Forest Service funds emergency stabilization with emergency wildland fire funding, to qualify for funding the Forest Service requires that treatment measures provide essential and proven protection at minimum cost. According to Forest Service officials, because the treatments are considered as emergency actions, the Forest Service does not complete environmental impact statements. In keeping with the emergency status of these treatments, the Forest Service requires that plans be developed and approved within 10 to 13 days following total containment of the wildland fire. Delegated approval authorities vary by Forest Service region. Certain regions, with a history of more frequent and larger fires, have higher approval authorities than other regions. For example, the Forest Service’s Pacific Southwest and Pacific Northwest regions (regions 5 and 6, respectively), which generally have most of the catastrophic wildfires, could approve plans costing up to $200,000 in 2000, while the Southern and Eastern regions (regions 8 and 9, respectively), where large, catastrophic fires are rare, were delegated no approval authority. Forest Service headquarters must approve plans exceeding regional delegated levels of approval authority. As with the Interior agencies, once an emergency stabilization plan is approved, the local land unit officials implement the plan. The Forest Service generally requires that treatments be implemented within the first year, but provides for funding to maintain or install additional treatments the next year. While the Forest Service’s short-term process for emergency stabilization is similar to Interior’s, its longer-term rehabilitation process is not. 
According to Forest Service officials, the agency developed a different process for undertaking longer-term treatment on burnt lands when the National Fire Plan was being developed and Congress was considering appropriating additional funds to the Forest Service for restoring damaged lands. Before the National Fire Plan, the Forest Service spent little money on rehabilitation because it did not receive appropriations specifically for such an effort. Once the agency realized that additional funding would be available through the National Fire Plan, it began planning a separate rehabilitation process. According to Forest Service officials, the agency decided to have two separate processes because emergency treatments to stabilize burnt lands are funded with emergency funding and must be undertaken quickly. Further, such treatments generally do not have long-term consequences for land management, whereas rehabilitation treatments can potentially have long-term consequences, which may require an environmental assessment, and involve a number of different Forest Service programs. In October 2000, the Forest Service asked the regional foresters to identify proposed rehabilitation projects that supported the National Fire Plan. In accordance with that plan, the Forest Service's national fire plan coordinator gave primary responsibility to the regions for implementing the rehabilitation program. The coordinator instructed the regions to focus rehabilitation efforts on restoring watershed conditions, including protecting basic soil, water resources, and habitat for various native species such as plants and animals. Projects were envisioned to be those long-term efforts to rehabilitate or improve lands unlikely to recover naturally from wildland damage, or to repair or replace minor facilities damaged by fire.
The coordinator also stressed the need for projects to be (1) consistent with long-term goals and approved land use plans; (2) based on sound analyses of the projects' potential consequences; (3) developed cooperatively with other federal, state, or local jurisdictions when wildland fires crossed their jurisdictional boundaries; (4) aimed at the basic objective of protecting life, property, and unique or critical cultural and natural resources; and (5) undertaken within the perimeter of the burned area. Funding to the regions was allocated based on acres burned and acres severely burned. The funding for such projects can be available for up to 3 years. Building on these instructions, the Forest Service regions developed different processes to identify proposed rehabilitation projects, as illustrated by the experiences of the Northern and Intermountain regions (regions 1 and 4, respectively) and the Southwestern Region (region 3). Regions 1 and 4—which encompass Idaho, Montana, Nevada, North Dakota, Utah, and portions of South Dakota and Wyoming—were most affected by catastrophic wildland fires in 2000. The two regions jointly developed additional criteria to use in identifying and reviewing rehabilitation projects for fires that occurred in 2000. These criteria included whether the proposed project would improve or protect water quality, or restore long-term watershed integrity; integrate several components in the project; restore threatened or endangered species habitat; protect public health and safety; improve infrastructure as a necessary step in completing the project; address noxious or invasive weeds as a component of the project; be emphasized by the regional forester; or have visible accomplishments within the first year. According to region 1 and 4 officials, the regions developed these additional criteria for reviewing their forests' rehabilitation proposals because Forest Service guidance was too general to assess and set priorities for projects.
These additional criteria allowed the two regions to better compare proposals that the forests submitted. Region 3, which encompasses Arizona and New Mexico, and which was the next region most affected by wildland fires in 2000, used a different approach to identify and set priorities for projects. According to the region 3 emergency stabilization and rehabilitation program coordinator, while Congress was considering appropriating additional funding for the National Fire Plan, the region assembled a team to determine which fires were catastrophic in 2000 based on the (1) value of the losses incurred as a result of the fire, (2) capability to repair or restore the loss, and (3) potential cost to repair or restore the loss. Given these criteria, region 3 considered 5 of the 18 largest fires that occurred in 2000 to be catastrophic and thus eligible for rehabilitation projects. Forest Service officials said that the agency and regions undertook similar processes to identify rehabilitation projects in 2002. However, the Forest Service did not distribute all of the $63 million appropriated in fiscal year 2002 because it needed some of these funds for wildfire suppression. The agency used some of this appropriation for suppression because putting out fires is the agency's top priority. According to the Forest Service national rehabilitation program coordinator, the severe wildland fires in 2002 required the Forest Service to use $84 million in rehabilitation funding for suppression—a portion of the $63 million appropriated in fiscal year 2002 and a portion of the $142 million appropriated in fiscal year 2001 but not yet expended. As noted previously, prior to receiving additional funding under the National Fire Plan, USDA's Forest Service largely limited its postwildland fire treatments to emergency stabilization. However, in 1998, Interior and USDA initiated an effort to apply a consistent approach for both emergency stabilization and longer-term rehabilitation.
This included an effort to develop an interagency handbook that agencies in both departments could use. This effort was undertaken, in part, in response to the 1995 Federal Wildland Fire Policy, which recommended that agencies work toward standardizing their policies and procedures. The Wildland Fire Leadership Council recently addressed this effort, which was abandoned in 2002 because of differences the agencies perceived in their missions, lands, and use of resources. According to Interior and Forest Service officials, they had worked to integrate their different approaches, but discontinued this effort in 2002 because they decided that integration would be too difficult. The difficulty arose because, according to these officials, their agencies and the lands they manage are too dissimilar to have a consistent approach for treating burnt lands. For example, BLM's emergency stabilization and rehabilitation efforts focus on stabilizing soils and ensuring a diversity of animal and plant species because its mission emphasizes sustaining its lands for multiple uses. The National Park Service's emergency stabilization and rehabilitation efforts focus on preserving the lands and resources in their natural state for use by people. In contrast, the Forest Service stated that, historically, its efforts have focused on short-term stabilization treatments that are intended to protect life and property and prevent additional resource damage because its mission emphasizes protecting and improving forests and preserving watersheds. With the advent of the National Fire Plan, however, the Forest Service enlarged this focus to consider not only watersheds but also longer-term treatments to improve lands unlikely to recover naturally by, for example, planting trees or monitoring for and treating noxious plants or weeds. Because of this emphasis and the funding specifically authorized for rehabilitation, the Forest Service established a separate process for these longer-term efforts.
The following illustrates the extent of the difference between Interior and the Forest Service: Interior uses the same process, staff, and funds to implement its emergency stabilization and rehabilitation program because, according to Interior officials, it is easier to do so. The Forest Service uses different processes, staff, and funds to implement its emergency stabilization program and its rehabilitation program because emergency stabilization has existed for about 25 years while it considers rehabilitation as an expanded mission based on the National Fire Plan appropriations language. The difference in how the two departments fund emergency stabilization and rehabilitation treatments resulted in the Office of Management and Budget directing the Department of the Interior to identify nonemergency funding options for its nonemergency treatments by March 2003. Interior and Forest Service officials acknowledged that the Federal Wildland Fire Policy encourages federal agencies to standardize processes and procedures and said that their respective departments are working together to better coordinate their programs. Even though the Fire Policy and the National Fire Plan do not require that the departments have the same processes for their respective programs or that they be fully integrated, the Wildland Fire Leadership Council addressed differences in the departments’ emergency stabilization and rehabilitation programs. In January 2003, the council decided that both departments should have standard and uniform definitions, time frames, and funding mechanisms for efforts they take under their respective programs. According to the Forest Service’s national emergency stabilization program coordinator, the council’s decision will result in the two departments resuming their efforts to develop and adopt the same interagency handbook for carrying out their emergency stabilization and rehabilitation programs. 
Following the calendar years 2000 and 2001 fires, Interior and USDA’s Forest Service approved 421 plans for stabilization and rehabilitation treatments for an estimated total of more than $310 million. Nearly all of the plans and costs were to treat fires that occurred in western states. Within Interior, BLM accounted for the most plans—210 out of 266—and approved the bulk of Interior’s funds—$88 million out of $118 million. The Forest Service accounted for the next largest number of plans—155—and approved $192 million—$53 million for short-term emergency stabilization and $139 million for longer-term rehabilitation. While the two departments implemented the same types of treatments on their lands following wildland fire, such as seeding, the frequency with which they relied on these treatments varied, primarily because of the types of lands they manage. As shown in table 1 for both Interior and the Forest Service, most emergency stabilization and rehabilitation treatments occurred in western states. Treatments occurred there primarily because much of the lands Interior and the Forest Service manage are in these states. Furthermore, during the summers of 2000 and 2001, states in the intermountain west were especially hard hit by drought and persistently dry conditions, which gave rise to two of the worst wildfire seasons in the past 50 years. As table 1 shows, Montana and Idaho received more than 50 percent of the stabilization and rehabilitation funding for the 2000 and 2001 fires. Montana, which received the largest allocation, proposed to use almost half of its funds for longer-term rehabilitation treatments in the Bitterroot National Forest. According to the estimates provided in the stabilization and rehabilitation plans, the costs to treat wildfires varied widely. About 56 percent ($174.3 million) of the estimated $310 million was associated with only 18 of the 421 plans. 
Most of the plans (87 percent) estimated that treatment costs would be under $1 million, and the majority of those were less than $100,000. Table 2 shows the number and percentage of plans that fall within various cost estimate ranges and the total estimated costs and percentage within these ranges. The cost of individual emergency stabilization and rehabilitation treatments ranged from about $2,000 to about $42 million. Cost differences occurred primarily because of the number and type of treatments included in the plan and the number of acres to be treated. This is illustrated in the following examples: The most costly plan involved longer-term rehabilitation for the Bitterroot National Forest in Montana. In this plan, the Forest Service regional office included 5 different but almost simultaneous fires that engulfed about 185,000 acres in 2000. This plan includes planting trees; roadwork, including cleaning drainage structures, restoring road surfacing, and taking roads out of service; and removing dead and dying timber. The entire proposed cost of the plan is about $42 million, which, according to Forest Service officials, would be spent over a period of several years. One of the least costly plans—for the Lower Rio Grande Valley National Wildlife Refuge in southern Texas—proposed spending only about $2,500. While the fire was relatively small and only grew to about 10 acres, the tract was in an urban area, surrounded by many homes and farms. Given the fire's location and the unique climate, geology, vegetation, and wildlife of the site, the Fish and Wildlife Service proposed to revegetate 5 of the burnt acres with native brush. Interior's four agencies approved 266 plans, costing about $118.5 million. Of the four agencies, BLM approved the largest number of plans and had the largest share of total costs. Table 3 provides information on the number and cost of plans approved by Interior's agencies in 2000 and 2001.
Most of the funds Interior approved were used for seeding and fencing, primarily because most of the fires occurred on rangelands BLM manages in Idaho, Nevada, and Utah. About $67.2 million, or 70 percent, of the $96.1 million was for these two treatments. Table 4 provides data on the treatments Interior used most frequently and the cost of these treatments. Of the four Interior agencies, BLM accounted for the largest share of treatment costs and included some type of seeding as a treatment in about 190, or 90 percent, of its 210 plans. Similarly, BLM accounted for about $50 million of the $57.5 million that Interior approved for seeding. Much of the lands managed by BLM consist of rangelands that produce forage for wild and domestic animals, such as cattle and deer, as well as many other forms of wildlife; its lands include grasslands and deserts—both arid and semiarid land. Seeding was done to prevent soil erosion; to restore forage used by cattle, mule deer, or elk, as well as habitat used by other species such as sage grouse; or to reduce the potential for the invasion of undesirable or noxious plants or weeds. According to BLM officials, the method used to seed—whether by air or by drilling—depends primarily on the terrain, soil, and seed or seed mixture used. This is illustrated by the following examples: Aerial seeding. One of the largest seeding treatments involved aerially seeding about 40,000 acres in Nevada burned by the Twin Peaks Fire in 2000, at a cost of $5.4 million. For seeding the entire burnt area with a native seed mixture of wheat grasses, sagebrush, and wildrye, the local office decided that aerial seeding would be the most appropriate method. Because the seeded area was hilly to mountainous, the office proposed using a helicopter or fixed-wing aircraft to spread seed across the burnt area. The seeding was intended to reduce the invasion and establishment of undesirable or invasive species of vegetation, particularly noxious weeds.
In addition, the seeding—if successful—would provide mule deer and livestock with critical forage. Drilling. According to BLM officials, BLM frequently uses rangeland drills to seed. For example, following the Flat Top, Coffee Point, and Tin Cup wildfires, which burned about 117,000 acres of the Big Desert in Idaho, BLM approved $1.5 million to drill and aerially seed the burnt acreage. For seeding a mixture of wheatgrass, ricegrass, needlegrass, wildrye, and rice hulls, the local office decided to use a rangeland drill because the terrain was relatively flat and could be easily drilled. According to BLM, had it not seeded, the lack of remaining seed could have impaired the land's recovery and, in the long term, reduced species diversity and degraded habitat conditions for all wildlife species that used the Big Desert. Figure 1 depicts BLM seeding with a rangeland drill. Interior agencies also frequently repaired or installed fencing following wildland fire, primarily to protect burnt rangelands from cattle grazing to allow for regeneration. Under Interior policy, BLM can exclude burnt lands recovering from wildfire from grazing for a minimum of 2 years. Of Interior's 266 plans, 171 included fencing at a cost of $9.7 million. Most of this cost—about $8.1 million—was for fencing on BLM lands. This is illustrated by the following examples: After the West Mona Fire burned more than 22,500 acres in Utah, BLM approved a $1.7 million plan, which included about $241,000 to remove about 28 miles of fencing that was destroyed by the fire, construct 34 miles of new protective fence, repair 11 miles of existing fence, and install 6 cattleguards. The new fencing was to be installed after the area was seeded. The fencing was to protect the burnt and seeded areas from livestock grazing for 2 years. After the Abert Fire burned 10,000 acres in Oregon, BLM approved a $61,000 plan that included about $10,500 for fencing.
Much of the burnt acreage, before the fire, consisted mainly of sagebrush and native bunch grasses. BLM concluded that the majority of the burnt area retained sufficient native seeds and plant material in the soil for it to recover naturally. However, to help ensure natural vegetative recovery, BLM concluded that the burnt area needed to be protected from livestock grazing for at least 2 years. Figure 2 shows BLM grazing lands that were burnt and will require new fencing to exclude cattle. Reforestation, while not frequently used, was fairly costly. It was used in 24 of the 266 plans, for a cost of $6.6 million, or an average of about $275,000 per treatment. The only other treatment that was comparable in cost was seeding, which averaged about $248,000 per treatment. Reforestation was generally approved for funding to control the spread of invasive species or to reduce wind and water erosion. For example, the Fish and Wildlife Service developed a $181,500 plan to treat the Ash Meadows National Wildlife Refuge in Nevada following a fire that burned about 658 acres. The assessment team recommended that staff from the local land unit collect seeds from mesquite and ash trees, contract with nurseries to grow seedlings, and plant seedlings and cuttings primarily to control the spread of invasive species and reduce erosion. In addition, the Bureau of Indian Affairs used reforestation to replace commercial timber trees that were lost as a result of wildfires. In 1998, the Bureau of Indian Affairs began to allow a limited amount of this treatment to help ensure that Indian forest land continued to be perpetually productive—a management objective established by the National Indian Forest Resources Management Act. According to bureau officials, catastrophic wildland fires can destroy viable seed necessary for regrowth, and the additional funding provided by the National Fire Plan allowed the bureau to better meet reforestation needs after such wildfires.
For example, following the Clear Creek Divide Fire in 2000 on the Salish and Kootenai Indian Reservation, the bureau approved $2 million to collect ponderosa and lodgepole pine and western larch tree seeds on the reservation, grow 2.5 million seedlings, and plant them on about 8,000 acres. In conjunction with seeding and fencing, Interior agencies frequently included monitoring burnt areas to see if noxious or invasive plants or weeds had regenerated or moved into the area and treating them as necessary. Of Interior's 266 plans, 166, or more than 60 percent, included monitoring and/or treating noxious or invasive plants or weeds as a treatment, for a total cost of $6.9 million. BLM accounted for most of these treatments. According to BLM officials, noxious or invasive weeds, particularly cheatgrass, are one of the factors that have caused an increase in the number and size of wildland fires. Such noxious or invasive weeds, which grow vigorously in the early spring, can crowd out native grasses and, during the arid summer months, can dry and provide excessive fine fuels for wildland fires to spread over large expanses of land. Because fire does not destroy some noxious or invasive plant seeds, the plants can resprout and grow with even greater vigor following a wildland fire. According to BLM officials, many local land units had completed the necessary environmental assessments to use selected herbicides on specified noxious or invasive weeds on their lands. As a result, the local land units could include noxious or invasive weed treatments in their emergency stabilization and rehabilitation plans. Figure 3 shows dried, flammable noxious or invasive weeds prone to wildfire. Interior agencies also included in many plans cultural resource surveys, as well as treatments for known artifacts damaged or threatened by wildfire. Over half of the plans included cultural resource surveys, for a total of $5.2 million.
Although cultural surveys are activities rather than treatments, their costs were included as treatment costs. According to BLM, which conducted many of these surveys, it routinely conducts cultural surveys before conducting ground-disturbing activities that have the potential to affect sites or objects that could be or are eligible for the National Register of Historic Places. When BLM anticipated any ground-disturbing treatment, such as rangeland drill seeding or installing new fencing, it included cultural resource surveys. Most of the funds the Forest Service approved for emergency stabilization or rehabilitation were for longer-term rehabilitation. Of the $192 million that the Forest Service approved, $139 million was for longer-term rehabilitation while $53 million was for short-term emergency stabilization. As noted previously, the Forest Service did not use all of its fiscal year 2002 appropriation of $63 million on longer-term rehabilitation because it needed to spend some of these funds on suppressing wildfires. Table 5 provides information on treatments and their costs in the Forest Service's 113 emergency stabilization plans and its 42 rehabilitation plans. According to Forest Service officials, for short-term emergency stabilization, the agency relies on treatments that are intended to reduce soil erosion in watersheds that have the greatest potential to create further damage to people, property, or other valued resources if the agency does not act before the first major storm event after a wildfire. For example, some watersheds are used as sources of drinking water supplies for municipalities. Because much of its lands are steeply sloped, the agency relies on check dams, straw wattles (tubes of straw wrapped in netting), and other similar structures, such as logs, to retain soil, as well as seeding with fast-growing grasses.
In contrast, for longer-term rehabilitation, the agency repairs resource damage caused by the fire through treatments, such as road or trail work to reduce erosion in other watersheds, reforestation to replace timber growth, and monitoring for or treating noxious or invasive weeds. As shown in table 5, for stabilization treatments, the agency approved about 31.5 percent of its 2001 and 2002 funds for erosion treatments such as building check dams with rocks, logs, or straw, which are then placed in stream beds or in steeply sloped channels on hillsides in order to slow runoff from storm events and help prevent soil erosion. This runoff can consist of water, soil, rocks, branches, and trees. To trap sediment, the Forest Service uses felled logs or log terraces placed perpendicular to sloped hillsides. It may specify the use of straw wattles placed perpendicular to slopes to trap sediment when too few logs are available to do so effectively. Straw mulch or branches cut from trees may also be placed on slopes to retard soil erosion. For example, following the Trail Creek Fire, which burned about 32,000 acres on the Boise National Forest in Idaho, the Forest Service approved an emergency stabilization plan that included about $3 million for straw wattles, $344,000 for cutting down burnt trees and positioning them along slopes, $203,000 for mulch, and $203,000 for straw bales and other soil erosion control structures. The Forest Service plan included multiple soil erosion treatments because the property at risk from soil erosion included homes, community centers, and businesses. Figures 4 and 5 show slope stabilization treatments on Forest Service lands, including straw wattles and mulch.
As table 5 also shows, the Forest Service used more than 25 percent of both its stabilization and rehabilitation funds for road and trail treatments because, according to Forest Service officials, it has an extensive network of roads and trails on its forests that required treatment after the 2000 and 2001 fires. Road work includes installing and enlarging culverts so that additional runoff anticipated from burnt lands can pass under roadways, and regrading roads so that storm runoff will be less likely to erode road surfaces. Similarly, trail work includes regrading or repairing trails to reduce erosion and protect public safety. If the roads or trails pose a public health or safety risk, and if the treatments need to be implemented before a major storm event occurs, then short-term stabilization funds are used. In contrast, if the roads or trails do not pose a health or safety concern, then the Forest Service uses longer-term rehabilitation funds. For example, following the Bitterroot Complex of five fires or fire complexes that burned about 185,000 Forest Service acres, the Forest Service recommended about $4 million in emergency road and trail treatments, to prevent damage by debris torrents and runoff. Treatments included installing larger culverts, cleaning ditches and culverts, recontouring roads, and repairing trails. If these treatments were not taken, the Forest Service anticipated that (1) fish habitat could be degraded and (2) private residences, a recreational development, and an irrigation system that were downstream from the burnt area could be harmed. In contrast, the rehabilitation plan included about $11 million for road and trail treatments. This funding is for roadwork along 400 miles of roads within the areas that burned with moderate to high intensity. Because vegetation no longer existed to stabilize road surfaces and slopes, the Forest Service stated it needed to perform work to reduce erosion from them. 
Similarly, 150 miles of trail were located in intensely burnt areas, which rendered some trails unsafe. Figure 6 shows a culvert installed to handle anticipated increased storm runoff. Seeding was another widely used stabilization treatment. This treatment accounted for more than 25 percent of the stabilization costs for the 2000 and 2001 fires. Seeding was generally used to reduce erosion and thereby better protect watersheds. Forest Service plans included treatments such as seeding with fast-growing grasses, such as barley and winter wheat, that would grow quickly and would be less likely to compete with the longer-term recovery of natural vegetation. For example, the Forest Service approved about $7 million for the Cerro Grande Fire for seeding to help stabilize soils. The assessment team concluded that natural regrowth of vegetation would be too slow to prevent significant runoff and soil erosion. It recommended grass seeding with annual ryegrass, barley, mountain brome, and slender wheatgrass, to quickly restore vegetation and reduce soil erosion, protect soil productivity, and reduce runoff. Reforestation treatments were almost entirely done as a longer-term rehabilitation treatment and accounted for about 25 percent of the rehabilitation costs for the 2000 and 2001 wildfires. The Forest Service uses reforestation treatments sparingly and restricts their use as a stabilization treatment because (1) replanting commercial species burned by wildfire is viewed as the responsibility of the forest management program, as opposed to an emergency measure to be funded by the wildland fire program, and (2) planting trees does not meet the emergency stabilization objective of preventing additional damage to resources. Rather, replanting trees is generally considered a repair of resource damage caused by wildfire and is therefore a large part of the rehabilitation program. 
In keeping with its interpretation of the need to restrict emergency stabilization treatments to those necessary to prevent additional resource damage, the Forest Service generally restricts the use of reforestation to no more than $25,000 per treatment. However, once it received funding under the National Fire Plan for longer-term rehabilitation, the Forest Service used this funding to develop reforestation proposals for 21 national forests burned by wildland fire. Similarly, the percentage of funding the Forest Service used for noxious or invasive weed monitoring or treatment varied depending on whether the treatment was for emergency stabilization or rehabilitation. According to Forest Service officials, noxious or invasive weed monitoring or treatment is not generally viewed as an emergency treatment. For example, the Forest Service proposed spending $1.3 million for noxious or invasive weed monitoring or treatment as an emergency stabilization measure; however, it proposed spending $25.1 million for such monitoring and treatment as a rehabilitation measure. Similarly, in its rehabilitation plan for the Salmon-Challis National Forest in Idaho, the Forest Service proposed spending $9.5 million on noxious or invasive weed treatments because of known infestations of noxious weeds where several fires occurred in 2000. The weeds were expected to spread rapidly through the burnt areas, especially where fire suppression activities, such as bulldozing, exposed bare soils. The Forest Service also proposed to conduct a National Environmental Policy Act analysis for treating noxious or invasive weeds in another portion of the forest that had also been burnt in 2000 and had not yet had an environmental analysis completed for such a treatment. 
Neither we, the Forest Service, nor Interior knows the overall effectiveness of emergency stabilization and rehabilitation treatments because local land units do not routinely document monitoring results, collect comparable monitoring information, or disseminate the results of their monitoring to other land units or to the agencies’ regional or national offices. As a result, it is difficult to compile information from land units to make overall assessments about the extent to which treatments are effective or about the conditions in which treatments are most effective. Furthermore, the departments have not developed an interagency system to collect, store, and disseminate monitoring results. Consequently, it is difficult for agency officials to learn from the results of treatments applied on other sites in order to most efficiently and effectively protect resources at risk. As noted previously, both Interior and the USDA’s Forest Service require local land units to install treatments that are effective. In addition, Interior requires, and the Forest Service strongly encourages, local land units to monitor for treatment effectiveness. However, neither department specifies how land units should conduct such monitoring or how they should document monitoring results. Both our review and the departments’ own internal reviews found that inconsistencies in monitoring methods prevent a comprehensive assessment of treatment effectiveness. To determine the methods local land units use to monitor and document the effectiveness of their treatments, we reviewed 18 emergency stabilization and rehabilitation plans that were implemented on 12 local land units—6 of Interior’s and 6 of the Forest Service’s. 
We selected these 12 local land units because they obligated the most funds for emergency stabilization and rehabilitation treatments within their regions in 2000, the most recent year since the establishment of the National Fire Plan in which local land units could have accomplished significant monitoring at the time of our review. For each of the 18 plans, we reviewed up to 3 of the most costly treatments, for a total of 48 treatments. These 48 treatments are not a representative sample of all emergency stabilization and rehabilitation treatments implemented by the departments, and therefore our findings cannot be projected. However, the data do represent monitoring practices for a significant proportion of departmental outlays for treatments, since the total cost of the treatments we reviewed was $84 million, or 30 percent of the total funds obligated by Interior and the Forest Service for emergency stabilization and rehabilitation treatments undertaken for wildfires that occurred in 2000 and 2001. Local land units monitored all of the 48 treatments we reviewed, but documented conclusions about treatment effectiveness for only half of the 48 treatments. Land units monitored some treatments through visual inspection alone and other treatments through both visual inspection and data collection. For treatments that entail building or repairing infrastructure—such as roadwork, trail repair, and fencing—local land units typically monitored treatment effectiveness solely through visual observation. Of the 19 such treatments, local land units visually observed all and collected monitoring data for only 1. For example, national forests often resurface roads and install drainage systems, such as culverts, to prevent storm runoff from concentrating into torrents, eroding road surfaces and depositing sediment into streams. 
To monitor the effectiveness of such treatments, according to local national forest officials, staff typically drive along repaired road segments and visually observe road surfaces for gullies or other signs of erosion. In contrast, for treatments designed to restore natural conditions—such as seeding, reforestation, weed treatment, and erosion barriers—staff often collect monitoring data, in addition to visually observing treatment sites. Of the 30 such treatments, local land units collected monitoring data on treatment effectiveness for 22 and visually observed all 30. For example, one BLM district office used two methods to monitor its seeding treatment: (1) staff visually observed the seeded acreage and estimated the proportion of the burnt area covered by native plants, weeds, and bare soil; and (2) staff collected data on the most abundant plant species, precipitation levels, soil types, and terrain within a selected number of small, delineated sections within the seeded acreage. Local land units documented conclusions about treatment effectiveness for 24 of the 48 treatments we reviewed. In documenting these results, land units used a variety of formats, including summaries of visual observations, tables of data analyses, and presentations for academic conferences. Even though the 12 local land units we reviewed generally monitored the effectiveness of treatments, each used a different method to do so. According to local land unit officials, departmental guidance does not identify the methods they should use to visually inspect different types of treatments, when they should collect and analyze monitoring data, the types of data they should collect, or the techniques they should use to collect and analyze monitoring data. In some instances, local land unit officials said they used monitoring methods prescribed for programs other than emergency stabilization and rehabilitation. 
For example, on three national forests, Forest Service officials said that they used monitoring methods specified by the agency’s forestry, or silviculture, program to monitor reforestation treatments. In another instance, an interagency technical reference describes 12 procedures for monitoring vegetation, but the departments do not indicate which of these methods should be used to monitor the seeding applied to burnt lands. As a result of the lack of clarity, the 12 local land units differed significantly in the methods they used to monitor the 30 treatments designed to restore natural conditions. Of these 30 treatments, local land units collected data to monitor the effectiveness of 22 of the treatments, in addition to making visual observations, and relied solely on visual observations to monitor the remaining 8 treatments. Likewise, local land units monitored untreated sites for comparison with treated sites in 17 instances, while they monitored just the treated sites in the remaining 13 instances. Furthermore, in judging whether a treatment was effective, local land units established measurable standards of effectiveness for 9 of the 30 treatments and relied purely on the knowledge of local land officials to make this judgment for the other 21. As one local land unit official said, each staff member has his or her “own definition of success.” Overall, local land unit officials judged most of the treatments as effective. However, because local land units (1) collected different monitoring data, (2) used different methods to collect monitoring data, and (3) developed their own definitions of treatment effectiveness, the results of monitoring treatments we reviewed for these 18 emergency stabilization and rehabilitation plans cannot be compared to determine if the treatments were effective. 
For example, three national forests we reviewed spent more than $5 million to install erosion barriers on severely burnt slopes to protect homes and streams from flooding and sedimentation after catastrophic wildfires in 2000. Although all three forests installed the same treatment to accomplish the same objective, the forests’ monitoring methods differed in the extent to which they collected monitoring data, the type of data they collected, the methods they used to collect and analyze those data, and the standards they applied in judging treatment success. This situation is illustrated by the following examples: In one forest, local land unit officials observed treated slopes for evidence of erosion but did not collect monitoring data or document their findings. Because the officials observed that only small amounts of sediment washed to the bottom of slopes after a rainstorm, they concluded that the treatments had been effective. Without collecting monitoring data, however, these officials could not accurately estimate the amount of erosion prevented by the barriers placed on the slope or the level of precipitation that would render the barriers ineffective. In another forest, local land unit officials worked with Forest Service researchers to collect data on precipitation levels and soil erosion from both treated and untreated slopes, in addition to conducting visual observations. The researchers used a computerized hydrological model to analyze the monitoring data and concluded that the erosion barriers decreased the risk of erosion by 19 percentage points—from an 86 percent risk on untreated slopes to a 67 percent risk on treated slopes—and documented these results in a presentation to a professional conference. 
However, during visual observations, local land unit officials disagreed on whether the presence of sediment trapped behind the erosion barriers constituted treatment success: some believed that the barriers were effective because they had trapped erosion from washing further down the slope, while others concluded that the barriers were ineffective because they had not prevented soil from eroding at the top of the slope. In a third forest, local land unit officials collected monitoring data and visually observed the erosion barriers. However, they said it was difficult to accurately measure soil erosion and water quality in order to determine treatment effectiveness. They therefore did not report on their data collection and analysis and relied on visual observations to judge treatment effectiveness: after observing significant amounts of erosion, they concluded that the treatments were not effective. Because these national forests used different methods to judge treatment effectiveness, we could not draw overall conclusions about the effectiveness of erosion barriers in protecting resources at risk at these three forests. We found similar inconsistencies in monitoring data, monitoring methods, documentation, and standards for treatment effectiveness among other Forest Service land units as well as Interior’s. For example, at two BLM district offices, we reviewed how local land unit officials monitored seeding of burnt areas that was intended to establish native species and prevent the spread of noxious weeds. One district collected data from both seeded and unseeded plots, while the other only collected data from seeded plots. In addition, one district used a measurable standard to judge treatment success, while the other relied on the professional judgment of land managers. 
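One of the erosion-barrier examples above reports the modeled risk of erosion falling from 86 percent on untreated slopes to 67 percent on treated slopes. Figures like these can be read two ways, as an absolute or a relative reduction, and the distinction matters when comparing monitoring results across sites. The short calculation below is purely illustrative and is not drawn from the Forest Service study:

```python
# Illustrative arithmetic only: the 86 and 67 percent figures are the
# modeled erosion risks reported for untreated and treated slopes.
untreated_risk = 0.86
treated_risk = 0.67

# Absolute reduction: the difference in risk, in percentage points.
absolute_reduction = untreated_risk - treated_risk

# Relative reduction: the drop expressed as a share of the untreated baseline.
relative_reduction = absolute_reduction / untreated_risk

print(f"absolute reduction: {absolute_reduction * 100:.0f} percentage points")
print(f"relative reduction: {relative_reduction * 100:.0f} percent")
```

The 19-point absolute drop corresponds to roughly a 22 percent relative reduction in risk, which is why recording the untreated baseline, as the researchers did, is essential for interpreting any reported reduction.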
Similarly, a 2000 USDA Forest Service study and a 2002 Interior study found that it is difficult to determine overall treatment effectiveness because land units use different methods to monitor identical treatments and rarely document monitoring results. For example, as part of its study, Forest Service officials reviewed more than 150 monitoring reports for emergency stabilization and rehabilitation treatments undertaken at national forests. As part of its study, Interior reviewed techniques that BLM field offices in Idaho, Nevada, Oregon, and Utah used to monitor seeding treatments. Both of these studies concluded that local land units often did not collect or record data important to interpreting treatment effectiveness, including data on site conditions and treatment outcomes. In addition, both studies found that only about one-third of local land units collected monitoring data, and among these local land units, few collected the same type of data or used the same data collection methods. Because of the lack of documentation and the differences in monitoring methods, neither study was able to determine the validity of monitoring results, to calculate the extent to which treatments were effective, or to compare the effectiveness of treatments in different regions or land units. According to Interior and Forest Service officials, including the authors of these studies, the departments know little about the extent to which emergency stabilization and rehabilitation treatments prevent erosion, protect water quality, restore native vegetation, reduce invasive weeds, or protect wildlife. In a separate 2001 study of its emergency stabilization program in the Northern and Intermountain regions, the Forest Service concluded that the agency is “often . . . uncertain that [treatments] actually work. 
There is a concern that treatments may look good, but their functional effectiveness is unknown.” Improved monitoring would provide critical information to departmental officials making decisions about emergency stabilization and rehabilitation treatments, according to the Interior and Forest Service studies. According to the Forest Service study, knowing the effectiveness of particular treatments would help local land units select the most appropriate treatments for installation and could assist them in defending and explaining their decisions. For example, knowing the likelihood that erosion barriers will effectively prevent erosion on a certain soil type could help land unit officials determine whether installing such barriers is worthwhile, according to the lead author of the study. Likewise, the Interior study noted that a synthesis of monitoring data could assist BLM in restoring native plants and reducing invasive weeds in the Intermountain West. In order to gather such information, these studies recommended that the agencies improve monitoring. The Forest Service study of treatment effectiveness recommended that national forests “increase monitoring efforts” to determine the effectiveness of treatments under various conditions, while the agency’s review of the emergency stabilization program recommended “a quick format for minimal quantitative monitoring.” Similarly, the Interior study recommended that BLM districts adopt a common monitoring technique and report whether treatments meet their objectives. The departments have not implemented these recommendations, however. According to departmental officials responsible for overseeing their emergency stabilization and rehabilitation efforts, implementation has not occurred because of the difficulty associated with the development of standardized monitoring and data collection methods and the collection of such data. 
At the local level, even though land units typically conduct some type of monitoring and view monitoring as valuable, agency officials consider extensive monitoring to be a less important use of their time than other immediate wildland fire duties, such as serving on emergency stabilization and rehabilitation assessment teams and overseeing the installation of treatments. These wildland fire duties are in addition to the normal duties they must carry out on a routine basis. Furthermore, departmental officials said that because land characteristics and treatment objectives vary significantly from land unit to land unit and from agency to agency, it is difficult to establish standard monitoring or data collection methods that would apply in all circumstances. At the same time, however, they acknowledged that there are enough commonalities among land units, agencies, and treatments that some aspects of monitoring and data collection could be standardized, such as consistently collecting and documenting data on precipitation, soil type, and terrain. BLM officials added that they have recently begun to discuss the development of standardized monitoring methods and possible criteria for treatment success. Departmental officials commented, however, that if monitoring methods were standardized and data were routinely collected and analyzed, it might be more appropriate for an independent organization such as Interior’s science agency—the U.S. Geological Survey—to conduct this work and assess the relative success and failure of treatments. Interagency and departmental policies direct the departments to collect, archive, and disseminate monitoring results collected by local land units so that the departments can make more informed decisions on the effectiveness of the treatments being used. 
According to Interior, for example, “Priority should be given to developing a simple interagency electronic mechanism for archiving and broadly disseminating the treatment and technique results.” Similarly, the Forest Service cited the need for the agency to develop a clearinghouse of monitoring plans and a system for sharing monitoring results. Nevertheless, neither Interior nor the Forest Service has developed an interagency system to collect, store, and disseminate monitoring results of emergency stabilization and rehabilitation treatments. Based on our review of treatments for 18 emergency stabilization and rehabilitation plans at 12 local land units, we found that local land units did not routinely share monitoring results with other land units or with program management, even in instances when they learned valuable lessons about treatment effectiveness. For example, according to local land unit officials, they shared information with their peers through informal means such as phone calls to neighboring land units and conversations at regional meetings for only 24 of the 48 treatments we reviewed. Similarly, these officials said that they submitted their monitoring results to their agency’s state or regional offices for only 19 of the 48 treatments. At the same time, local land unit officials said they learned lessons while monitoring that would be worth sharing with other land units in 37 of the 48 cases. Currently, the departments do not have an interagency database to which local land units can submit monitoring data and which they can then use to determine the relative success of different treatments, according to Forest Service and Interior emergency stabilization and rehabilitation officials. Several local land unit officials said that if such information were accessible, they would be better able to select the most appropriate treatment to meet certain objectives in specific conditions. 
Officials in one BLM Nevada land unit said that the BLM state office was developing a database to collect, store, and disseminate monitoring results. BLM Nevada officials said that the database would be used to collect and store the specifications and results of seeding treatments that have been applied on BLM lands in the entire state. When BLM officials in Nevada then consider using a seeding treatment following a wildfire, they would be able to search the BLM Nevada database to identify the results of prior seeding treatments that were applied in similar terrain, on similar soil types, at similar elevations, and with similar precipitation levels, according to these officials. Local land unit officials could use this information to make treatment decisions, such as whether to seed a burnt area or whether to allow it to recover naturally. BLM Nevada officials said that such a database would be “worth its weight in gold” because of the difficulty in identifying the most appropriate plant species and seed application techniques that will be effective in Nevada’s arid rangelands. According to Interior and Forest Service officials responsible for their emergency stabilization and rehabilitation programs, the departments had not developed an interagency monitoring database for the same reasons that they have not standardized monitoring and data collection methods: coordinating such a task with multiple agencies would require a substantial amount of work and monitoring has historically been considered a lower priority than other more pressing tasks. Departmental officials said that it would be time-consuming to develop a database to meet the needs of multiple agencies, each of which manages different types of land. Other departmental officials said that the departments typically respond well to emergencies, such as fire suppression, but have placed less emphasis on monitoring. 
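The searchable seeding-treatment database that BLM Nevada officials describe can be sketched in a few lines. The schema, field names, and sample records below are invented for illustration and do not reflect any actual BLM design:

```python
# Minimal sketch of a searchable seeding-treatment database, using an
# in-memory SQLite store. The table layout and sample rows are hypothetical,
# chosen only to mirror the conditions BLM Nevada officials mention:
# terrain, soil type, elevation, and precipitation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE seeding_treatments (
        fire_name    TEXT,
        soil_type    TEXT,
        terrain      TEXT,
        elevation_ft INTEGER,
        precip_in    REAL,    -- annual precipitation, inches
        species_mix  TEXT,
        effective    INTEGER  -- 1 = judged effective, 0 = not
    )
""")
conn.executemany(
    "INSERT INTO seeding_treatments VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("Fire A", "sandy loam", "rolling", 5200, 9.0, "crested wheatgrass", 1),
        ("Fire B", "sandy loam", "rolling", 5500, 8.5, "annual ryegrass", 0),
        ("Fire C", "clay", "steep", 7100, 14.0, "mountain brome", 1),
    ],
)

# Look up prior treatments on similar sites: same soil and terrain, and
# elevation within 500 feet of the new burn's elevation.
rows = conn.execute(
    """SELECT fire_name, species_mix, effective
       FROM seeding_treatments
       WHERE soil_type = ? AND terrain = ? AND ABS(elevation_ft - ?) <= 500
       ORDER BY fire_name""",
    ("sandy loam", "rolling", 5400),
).fetchall()
for name, mix, ok in rows:
    print(name, mix, "effective" if ok else "not effective")
```

Given a new burn’s soil type, terrain, and elevation, a query of this kind returns prior seeding treatments applied under similar conditions, along with whether each was judged effective, which is the information officials said would help them decide whether to seed a burnt area or let it recover naturally.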
These officials acknowledged, however, that a monitoring database would be valuable and said that they had scheduled interagency meetings in early 2003 to discuss developing such a database. While the Forest Service has already begun work on a database of monitoring results, the database is limited in scope and application. The database includes information that the Forest Service collected as part of its 2000 study of the effectiveness of emergency stabilization treatments, according to the agency official who led that study. This official said that, beginning in 2003, local Forest Service land unit officials will be able to access information collected during the course of that study, including any monitoring information, to help inform their treatment decisions. This official noted, however, that because of differences and shortcomings in the ways that national forests collected and retained monitoring information for the emergency stabilization plans that were reviewed for that study, the database has several limitations: it will (1) not provide quantitative data on the extent of treatment effectiveness; (2) not provide information necessary to determine the conditions—such as soil characteristics or vegetation types—under which treatments are most effective; and (3) not provide a means by which local Forest Service land unit officials could report their current monitoring results to other local land units or to Forest Service regional or national offices. Most lands burned by catastrophic wildfires will recover naturally, without posing a threat to public safety or ecosystems. However, in those relatively few instances where burnt lands threaten safety, ecosystems, or cultural resources, emergency stabilization and rehabilitation treatments can play a critical role—a role that is emphasized by the appropriations Congress has dedicated to postwildfire treatments. 
The treatments Interior and the Forest Service use to protect and restore burnt lands—slope stabilization measures such as mulching to prevent soil from eroding into rivers and streams, seeding to regenerate important grasses and shrubs, and noxious or invasive weed monitoring and control—appear, on the face of it, to be reasonable. For the most part, however, Interior and the Forest Service are approving treatment plans without comprehensive information on the extent to which a treatment is likely to be effective given the severity of the wildfire, the weather, soil, and terrain. Such information could help ensure that the agencies, including the local land units, are using resources effectively to protect public safety, ecosystems, and cultural resources. Interior and USDA’s Forest Service have also done studies that recognize the need for information on treatment effectiveness, but they have not emphasized the importance of collecting, storing, analyzing, and disseminating such data. Nor can they reasonably take action to collect, store, analyze, or disseminate such data until the departments have comparable monitoring data from their local land units. Interior and the Forest Service have yet to set standards for data collection, develop reporting procedures, or establish criteria for judging treatment effectiveness, steps that would make it possible to assess treatment effectiveness. As their and our own analyses have shown, this situation has resulted in local land units using different monitoring methods, even when similar treatments are being used under similar conditions, and a lack of consistency in judging whether treatments have been effective. 
In order to better ensure that funds for emergency stabilization and rehabilitation treatments on burnt lands are used as effectively as possible, we recommend that the Secretaries of Agriculture and of the Interior require the heads of their respective land management agencies to specify the type and extent of monitoring data that local land units are to collect and methods for collecting these data, and develop an interagency system for collecting, storing, analyzing, and disseminating information on monitoring results for use in management decisions. We provided a draft of this report to the Secretaries of Agriculture and of the Interior for review and comment. The departments provided a consolidated response to our draft report, which is included in appendix II of this report. They generally agreed that more can be done to ensure that funds for emergency stabilization and rehabilitation on burnt lands are used as effectively as possible and with our recommendations that they obtain and disseminate better data for determining treatment effectiveness. In commenting on our recommendation that the departments obtain better data on treatment effectiveness, the departments said that they were aware that some of their own studies had previously identified the need to obtain and disseminate better data for determining treatment effectiveness. They cited several examples of efforts they have completed or are undertaking to accomplish this, including an effort to determine the effectiveness of log erosion barriers, which is cited in this report. The departments, in their comments, said they recognize that many of the efforts to collect data on treatment effectiveness are individual, agency-initiated actions rather than a systematic approach. 
They said that they are currently planning actions that would address data collection concerns in a more collaborative manner by establishing an interdepartmental committee of scientists and managers to identify the dominant postfire stabilization and rehabilitation treatments for which monitoring methods will be established. An interdepartmental approach is essential, not only for identifying the amount and type of data that local land units should collect, but also for developing an interagency and interdepartmental system for routinely collecting, storing, analyzing, and disseminating these data. The departments also provided several technical changes that we incorporated into the report, as appropriate. As arranged with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Chairman and Ranking Minority Member, Subcommittee on Public Lands and Forests, Senate Committee on Energy and Natural Resources; the Chairman and Ranking Minority Member, House Committee on Resources; the Chairman and Ranking Minority Member, Subcommittee on Forests and Forest Health, House Committee on Resources; the Chairman and Ranking Minority Member, Subcommittee on Interior and Related Agencies, House Committee on Appropriations; the Ranking Minority Member, House Committee on Agriculture; the Ranking Minority Member, Subcommittee on Department Operations, Oversight, Nutrition and Forestry, House Committee on Agriculture; and other interested congressional committees. We will also send copies of this report to the Secretary of Agriculture; the Secretary of the Interior; the Chief of the Forest Service; the Directors of BLM, the National Park Service, and the Fish and Wildlife Service; the Deputy Commissioner, Bureau of Indian Affairs; the Director, Office of Management and Budget; and other interested parties. 
We will make copies available at no charge to others upon request. This report will also be available at no charge on GAO’s home page at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix III. To describe the processes that the Department of the Interior and the U.S. Department of Agriculture’s Forest Service use to implement their emergency stabilization and rehabilitation programs, we obtained departmental manuals, handbooks, and other guidance that describe Interior’s process for implementing emergency stabilization and rehabilitation and the Forest Service’s emergency stabilization program. We also interviewed Interior and Forest Service officials responsible for overseeing the departments’ respective programs to obtain an overview of Interior’s and the Forest Service’s processes for their programs. Because the Forest Service’s rehabilitation program is relatively new and has not yet been incorporated into the Forest Service manual or handbook, we obtained guidance developed by the Forest Service and provided to Forest Service regional offices on the process used to implement that program. We also obtained additional guidance and documentation from the Forest Service’s Northern, Southwestern, and Intermountain regions (regions 1, 3, and 4, respectively)—the three regions that received the largest share of Forest Service rehabilitation program funding in fiscal year 2001—to determine what additional processes these regions developed and used to implement the program. Further, we interviewed Bureau of Land Management (BLM), Bureau of Indian Affairs, Fish and Wildlife Service, National Park Service, and Forest Service officials at regional, state, and local land management units that had experienced wildland fires in 2000 or 2001 to discuss procedures used in assessing burnt lands and identifying appropriate treatments. 
To identify the costs and types of treatments the departments have implemented, we obtained 266 emergency stabilization and rehabilitation plans that Interior agencies prepared for wildfires that occurred in calendar years 2000 and 2001 on BLM managed lands in Idaho, Nevada, Oregon, and Utah; Bureau of Indian Affairs managed lands in its Northwest, Rocky Mountain, Southwest, and Western regions; Fish and Wildlife Service managed lands in its Mountain Prairie, Pacific, Southeast, and Southwest regions; and National Park Service managed lands in its Intermountain and Pacific West regions. For the Forest Service, we requested and obtained 155 emergency stabilization plans and rehabilitation plans for wildfires that occurred in calendar years 2000 and 2001 on Forest Service managed lands in its Intermountain, Northern, Pacific Northwest, Pacific Southwest, and Southwestern regions (regions 4, 1, 6, 5, and 3, respectively). We selected these Interior and Forest Service regions because they accounted for about 90 percent of the plans that the departments developed for treating wildfires that occurred in 2000 and 2001. To identify the types of treatments implemented, we reviewed these 421 plans and identified treatments proposed and approved in the plans. To identify the costs of the plans and the treatments, we obtained estimated costs that the departments approved to carry out the plans and implement the individual treatments. Because these costs are estimates, they do not necessarily reflect the actual costs that will be incurred in carrying out the plans during the 3 years that may be required to implement them. We did not obtain actual costs incurred, to date, in carrying out these plans because these data are not readily available. 
To determine whether emergency stabilization and rehabilitation treatments are achieving their intended results, we reviewed 18 emergency stabilization and rehabilitation plans that were implemented on 12 land units—6 of Interior’s and 6 of the Forest Service’s. We selected these 12 land units because they obligated the most funds for emergency stabilization and rehabilitation treatments within their regions in 2000, the most recent year since the establishment of the National Fire Plan in which local land units could have accomplished significant monitoring at the time of our review. We did not select emergency stabilization and rehabilitation plans for wildland fires that occurred in 2001 because, at the time of our review, local land units would have had little time to monitor treatments that had been implemented. For each of the 18 plans, we reviewed up to 3 of the most costly treatments. One of the 18 plans we selected had only 2 treatments, both of which we reviewed. In addition, we did not review five treatments we initially selected either because the treatments had not yet been fully implemented, or because we were unable to obtain timely information on the treatment’s status. Therefore, the total number of treatments we reviewed was 48. For each of these treatments, we interviewed the land manager responsible for monitoring and reviewed associated documentation of monitoring results, when available. These 48 treatments are not a representative sample of all emergency stabilization and rehabilitation treatments implemented by the departments, and therefore our findings cannot be projected. 
However, the data do represent monitoring practices for a significant proportion of departmental outlays for treatments, since the total cost of the treatments we reviewed was $84 million, or 30 percent of the total funds obligated by Interior and the Forest Service for emergency stabilization and rehabilitation treatments undertaken for wildfires that occurred in 2000 and 2001. In addition, we obtained program reviews or other studies conducted by the Forest Service or Interior on their emergency stabilization and rehabilitation programs to determine if the departments monitor treatments and, if so, the type and quality of departmental monitoring data. We also interviewed emergency stabilization and rehabilitation officials at the departments’ national, regional, or state levels, and local land unit offices to determine what monitoring is being conducted by local land unit offices, whether data are collected, and what use is made of these data for assessing treatment effectiveness or sharing lessons learned. We conducted our review from August 2001 through February 2003 in accordance with generally accepted government auditing standards. In addition, Mark Braza, Marcia Brouns McWreath, Carol Herrnstadt Shulman, and Katheryn Summers made key contributions to this report. 
|
Wildfires burn millions of acres annually. Most burnt land can recover naturally, but a small percentage needs short-term emergency treatment to stabilize burnt land that threatens public safety, property, or ecosystems or longer-term treatments to rehabilitate land unlikely to recover naturally. The Department of the Interior (Interior) and the Department of Agriculture's (USDA's) Forest Service--the two departments that manage most federal land--spend millions of dollars annually on such treatments. GAO was asked to (1) describe the two departments' processes for implementing their programs, (2) identify the costs and types of treatments implemented, and (3) determine whether these treatments are effective. Both Interior and USDA's Forest Service use multidisciplinary teams of experts, such as ecologists and soil scientists, to assess damage and potential risks burnt land poses and to develop emergency stabilization and rehabilitation plans that identify needed treatments to reduce or eliminate those risks. The two departments differ in how they manage their programs, however. Interior uses a single process to assess damage and identify treatments for short-term emergency stabilization and longer-term rehabilitation, while USDA's Forest Service uses different processes for each of these two treatment types. The two departments recognize these differences and recently agreed to work toward standardizing certain aspects of their programs, such as definitions and time frames. Following the 2000 and 2001 fires, the Forest Service obligated $192 million and Interior $118 million for 421 emergency stabilization and rehabilitation treatment plans GAO reviewed. Treatments included seeding; fencing; installing soil erosion barriers such as straw bundles, or wattles; and road or trail work. Most of Interior's land--managed by the Bureau of Land Management--consists of rangeland. 
Thus, the bureau primarily seeded native grasses to retain soil and provide forage for cattle and wildlife, and fenced burned areas to prevent grazing. Forest Service land is often steeply sloped and includes watersheds used for drinking water and timber. For emergency stabilization, the Forest Service primarily seeded fast-growing grasses and built soil erosion barriers; for rehabilitation, it repaired roads and trails and reforested burned land. Neither the departments nor GAO could determine whether emergency stabilization and rehabilitation treatments were achieving their intended results. The departments require that treatments be monitored, but they do not specify how to monitor or what data to collect and analyze to determine effectiveness. The departments have stressed the need to systematically collect and share monitoring data to inform treatment decisions, yet neither has developed a national interagency system to do so. As a result, the nature and extent of data collection, analysis, and sharing vary widely. The departments recognize that they need better information on treatment effectiveness but have not yet committed to obtaining it.
|
IRS is in the midst of reorganizing and reengineering its major technological, organizational, operational, and financial processes. Through its reengineering efforts and Tax Systems Modernization (TSM) program, IRS plans to introduce new technology to support these and other changes directed toward making it a more efficient organization. However, as a result of tighter budgets and increasing congressional concern over the pace of its modernization, IRS is rethinking its business vision and rescoping its planned changes. Tax Systems Modernization: Progress in Achieving IRS’ Business Vision (GAO/T-GGD-96-123, May 9, 1996). In hearings before the Senate Governmental Affairs Committee, we described IRS’ progress toward achieving its vision for improving its operations by the year 2001 and its plans for using TSM in support of that vision. IRS’ vision calls for organizational, technological, and operational changes in processing tax returns, providing customer service, and ensuring compliance with tax laws. IRS has made some progress in modernizing its operations but still falls far short of its vision. One of the biggest problems facing IRS has been its inefficient system for processing most tax returns. It had not significantly reduced the number of paper returns it processes, nor had it delivered the new systems needed to process paper documents more efficiently. IRS’ efforts to improve customer service have also fallen short. Even if taxpayers were able to reach IRS, agency assistors did not always have the information readily available to answer their questions, and IRS has been slow in changing work processes and implementing new information systems. Furthermore, budget reductions and concerns over taxpayer burden led IRS to cancel or postpone certain programs that were intended to help improve compliance. Until IRS successfully modernizes its processes and operations, we do not believe it will achieve its business vision. 
Managing IRS: IRS Needs to Continue Improving Operations and Service (GAO/T-GGD/AIMD-96-170, July 29, 1996). In our first statement before the National Commission on Restructuring the Internal Revenue Service, we shared some of the long-standing challenges that IRS faces as it strives to improve its organization, operation, processes, and customer service. These challenges include the inefficient manner in which IRS processes most tax returns; the managerial, technical, and human resource obstacles that hinder IRS from fulfilling its customer service vision; the problems that continue to undermine the effectiveness of IRS’ collection programs; the need to resolve serious financial management problems that affect the reliability of its financial statements; and the need to better coordinate its reengineering efforts, which could generate new business requirements that are not addressed by TSM projects or that make some of those projects obsolete. Financial Audit: Examination of IRS’ Fiscal Year 1995 Financial Statements (GAO/AIMD-96-101, July 11, 1996). In accordance with the Chief Financial Officers Act of 1990, this report to Congress presented the results of our efforts to audit IRS’ Principal Financial Statements for fiscal years 1995 and 1994. The report discussed IRS’ continuing financial management problems in such areas as reconciling taxpayer revenue and refund accounts, substantiating the amounts of various types of taxes it collects, verifying nonpayroll operating expenses, and reliably reporting its accounts receivable balances. It also described IRS’ weaknesses in its controls over recordkeeping and computer security, and it contained our formal opinions and reports on IRS’ financial statements, internal controls, and compliance with laws and regulations. IRS data indicate that the portion of the gap between taxes owed and paid that can be attributed to individual taxpayers amounts to about $94 billion per year. 
Of that amount, the failure to report income accounts for about $73 billion. Tax law changes as well as changes to IRS’ taxpayer guidance could increase compliance by making it easier for taxpayers to comply with tax laws. Changes in IRS’ enforcement programs could make these programs more effective and less intrusive on taxpayers. Businesses play a significant role in our tax system. They not only pay income taxes but are also responsible for providing information to taxing authorities about payments to individuals and for withholding income and Social Security taxes from employees’ salaries. IRS data indicate that the portion of the tax gap attributable to corporate taxpayers amounts to about $33 billion per year. Of this amount, large corporations account for about $24 billion, and small corporations account for about $7 billion. Businesses, particularly large corporations, also spend considerable sums of money resolving disputes with IRS over audit results. Internal Revenue Service: Results of Nonfiler Strategy and Opportunities to Improve Future Efforts (GAO/GGD-96-72, May 13, 1996). Our report to the Commissioner of Internal Revenue discussed our review of IRS’ strategy, which began in fiscal 1993, to bring an estimated 10 million individual and business nonfilers back into the system and keep them there. We assessed the results of IRS’ strategy and determined whether opportunities existed to improve any future nonfiler efforts. According to IRS, the Nonfiler Strategy was generally a success. The agency (1) reduced the size of the nonfiler inventory; (2) eliminated unproductive cases, which allowed it to focus enforcement resources more effectively; and (3) increased the number of returns secured from individual nonfilers. However, although IRS achieved its goal of reducing the backlog of nonfiler investigations, it was unclear how much, if at all, voluntary taxpayer compliance improved as a result of the strategy. 
The absence of comprehensive cost data made it difficult to assess the return on IRS’ investment. We identified several areas in which opportunities existed to improve any future effort directed at nonfilers. These opportunities related to (1) the time it took IRS to make telephone contact with nonfilers; (2) the use of higher graded staff to perform tasks that might have been effectively performed by lower graded staff; and (3) procedures for dealing with recidivists—that is, nonfilers who were brought into compliance and then became nonfilers again. We also recommended establishing measurable goals and developing comprehensive data on program costs. Tax Administration: IRS Is Improving Its Controls for Ensuring That Taxpayers Are Treated Properly (GAO/GGD-96-176, Aug. 30, 1996). In this report to the Chairman of the Senate Finance Committee, we addressed continuing allegations of taxpayer abuse by IRS employees years after the enactment of the Taxpayer Bill of Rights in 1988. We studied the adequacy of IRS’ controls to protect against taxpayer abuse; the extent of information available concerning abuse allegations received and investigated by IRS, the Department of the Treasury’s Office of Inspector General (OIG), and the Department of Justice; and the role of the Inspector General in investigating abuse allegations. Owing to the lack of specific data elements on taxpayer abuse in the information systems at IRS, Treasury’s OIG, and the Department of Justice, we were unable to determine the adequacy of IRS’ system of controls that are used to identify, address, and prevent instances of abuse. However, we were encouraged by IRS’ decision to develop a taxpayer complaint tracking system that essentially adopts the definition of taxpayer abuse that is included in our 1994 report. IRS’ “Customer Service Vision” guides its efforts to improve customer service. 
The vision is founded on increased accessibility, including up-front problem identification, improved notices and publications, telephone interaction in lieu of correspondence, one-stop service, and blended work groups. As part of this effort, IRS is identifying new ways to improve service and developing systems to support this service. Tax Administration: IRS Faces Challenges in Reorganizing for Customer Service (GAO/GGD-96-3, Oct. 10, 1995). As part of our continuing efforts to provide Congress with the information and analyses it needs to improve IRS’ tax administration, we reviewed the progress IRS was making toward achieving its customer service vision. Our report discussed (1) IRS’ goals for customer service and its plans to achieve them, (2) the gap between current performance and these goals, (3) agency progress to date, (4) current management concerns, and (5) several important challenges IRS faces. We found that although IRS had made some progress toward its customer service vision, the agency’s lack of clarity in management responsibilities had, to some extent, hampered its ability to implement its customer service plans. IRS expects to improve its customer service by having fewer work locations and automated workload management, giving customer service representatives better access to taxpayer accounts, improving taxpayer accessibility to telephone service, and allowing taxpayers to resolve their inquiries after a single telephone contact. IRS has been particularly slow, however, in improving taxpayer accessibility and reassigning staff to customer service centers. IRS needs to determine how to manage the transition to a different organization while maintaining its ongoing workload, decide what the role of customer service representatives will be, develop and effectively use new information technology, and devise ways to measure the work of new customer service centers and balance their competing workloads. 
We recommended that IRS assign ownership and clearly define roles and responsibilities for its projects and emphasize the need to develop, test, and implement new customer service products and services. Tax Administration: Making IRS’ Telephone Systems Easier to Use Should Help Taxpayers (GAO/GGD-96-74, Mar. 11, 1996). At the request of the House Subcommittee on Oversight, Committee on Ways and Means, we reviewed IRS’ development and use of interactive telephone systems to improve customer service, focusing on IRS’ efforts to make the telephone systems easier to use, protect taxpayer data, and assign responsibility for providing developers with systems requirements information. Three prototype interactive telephone systems—designed to reduce correspondence between IRS and taxpayers and to make the agency more accessible—suffered from too many menu options and other problems. Future interactive systems may require more safeguards to protect taxpayer data. Resolving the shortcomings in the current systems is essential if IRS is to achieve its goal of increasing interactive telephone assistance to 45 percent of taxpayers’ calls. We recommended that IRS conduct a cost-benefit analysis of the actions needed to overcome the problems of too many menu options, including the addition of multiple toll-free numbers and publication of written instructions on how to use the interactive menus. IRS’ current procedures for processing tax returns are dependent on antiquated technology, which will eventually be replaced through TSM. Critical in that regard is the extent to which IRS can increase electronic filings. Meanwhile, existing systems and capabilities must continue functioning. Tax Administration: Electronic Filing Falling Short of Expectations (GAO/GGD-96-12, Oct. 31, 1996). 
In a report to the Senate Committee on Governmental Affairs, we discussed IRS’ progress in broadening the use of electronic filing, the availability of the data needed to develop an electronic filing strategy, and the implications for IRS if it does not significantly reduce its paper processing workload. Although IRS has some data on the cost of processing electronic and paper returns, it does not have comparative data on other costs, such as storage and retrieval, that can vary depending on how a return is filed. IRS also does not have adequate data on why taxpayers do not file electronically and what it would take to get them to do so, nor does it have estimates on the number of electronic returns it could expect to receive after some market intervention. We recommended that IRS (1) identify those groups of taxpayers who offer the greatest opportunity to reduce IRS’ paper processing workload and operating costs if they filed electronically and (2) develop strategies that seek to eliminate impediments that inhibit those groups from filing electronically. We also recommended that IRS adopt electronic filing goals that focus on reducing the paper processing workload and operating costs and prepare contingency plans for the possibility that electronic filings will fall short of expectations. IRS’ accounts receivable is recognized by us and the Office of Management and Budget (OMB) as a high-risk area. The primary reason for this designation is that IRS’ efforts to collect the tens of billions of dollars taxpayers owe in delinquent taxes have been inefficient and unbalanced. Despite many initiatives to correct its accounts receivable problems, IRS has made little sustained progress in resolving the problems at the root of its collection performance. Tax Administration: IRS Tax Debt Collection Practices (GAO/T-GGD-96-112, Apr. 25, 1996). 
During its review of IRS’ debt collection practices, the House Ways and Means Subcommittee on Oversight asked us to discuss the challenges facing IRS and the potential benefits of involving private parties in the collection of tax debts. IRS faces some formidable challenges in collecting tens of billions of dollars in delinquent taxes, including a lack of accurate and reliable information on both the makeup of its accounts receivable and the effectiveness of the collection tools and programs it uses; an aged inventory of receivables; outdated collection processes; and antiquated technology. This lack of reliable information on taxpayer accounts affects IRS’ ability to determine whether its agents are resolving cases in the most efficient and effective manner. Similarly, the lack of reliable performance data affects IRS’ ability to target its collection efforts to specific taxpayers or specific types of debts. We believe that IRS needs a long-term comprehensive strategy to guide its efforts to improve tax debt collection, and such a strategy must start with accurate and reliable information. Without this strategy, any changes made to the system may not produce the planned results. We also believe that private industry may provide some help in collecting tax debts by assisting in performing some collection-related activities. Tax expenditures—tax provisions that grant special relief to encourage certain behaviors or to aid taxpayers in special circumstances—cost about $400 billion in forgone federal revenue annually. Tax expenditures are not subject to systematic review, and policymakers have few opportunities to make explicit comparisons between tax expenditures and federal spending programs. Improving the effectiveness of tax expenditures can result in significant savings. Tax Policy and Administration: Review of Studies of the Effectiveness of the Research Tax Credit (GAO/GGD-96-43, May 21, 1996). 
During a congressional hearing in 1995, we were asked to evaluate recent studies of the research tax credit to determine whether the evidence was adequate to conclude that each dollar of research tax credit claimed stimulates at least one dollar of research spending in the short run and about two dollars of research spending in the long run. In response, we reviewed eight studies of the research tax credit, focusing on the adequacy of the studies’ data and methods to determine the amount of research spending stimulated per dollar of foregone tax revenue and other factors that determine the credit’s value to society. The studies we reviewed indicated that the amount of research spending stimulated by the research tax credit was larger than estimated by most of the studies published during the 1980s. The eight studies, however, provided mixed evidence on the amount of spending stimulated by the credit per dollar of revenue cost. Further, the estimates presented in recent studies do not provide all the information needed to evaluate the effectiveness of the latest version of the credit. The amount of research spending stimulated per dollar of incentive revenue cost depends not only on the responsiveness of spending to a tax incentive but also on the credit’s design. There has been little research on how the latest design of the credit has affected incentives and costs. Although most of the studies we reviewed used more sophisticated statistical techniques and more years of data than prior studies of the credit, all of them had data and methodological limitations. For example, the studies did not use tax return data to determine the credit’s incentive because the authors did not have access to such confidential data. Therefore, the authors had to rely on publicly available data, which used different definitions of taxable income and research spending. 
As a result of the data limitations, we were unable to conclude that a dollar of research tax credit would stimulate a dollar of additional research spending and, in the long run, lead to about two dollars of research spending. We are sending copies of this report to other congressional committees, the Director of OMB, the Secretary of the Treasury, and the Commissioner of Internal Revenue. Copies will also be available to others upon request. Major contributors to this report are listed in appendix III. If you or your colleagues would like to discuss any of the matters in this report, please call me on (202) 512-9110. IRS Financial Audits: Status of Efforts to Resolve Financial Management Weaknesses (GAO/T-AIMD-96-170, Sept. 19, 1996); Financial Audit: Actions Needed to Improve IRS Financial Management (GAO/T-AIMD-96-96, June 6, 1996); and IRS Operations: Significant Challenges in Financial Management and Systems Modernization (GAO/T-AIMD-96-56, Mar. 6, 1996). In testimony before the Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight and before the Senate Committee on Governmental Affairs, we discussed the results of our fiscal year 1994 financial audit of IRS. We found that IRS could not adequately verify or reconcile, in total and by type of tax, the amount of revenue it collected or the refunds it made for fiscal years 1992 through 1995. We also found large discrepancies between information in IRS’ masterfiles and the Treasury data used for various types of taxes. Several internal control weaknesses contributed to IRS’ financial management problems. For example, IRS was unable to provide adequate documentation for numerous transactions posted to taxpayer accounts and in its nonmaster file because the information had been lost, physically destroyed, or was no longer maintained. 
In addition, taxpayer refunds were not always screened by IRS employees, as required by IRS policy, to determine whether the refunds could be offset against any outstanding debts. The reliability of IRS’ $113 billion tax debt inventory was also in question because IRS could not verify the accounts receivable balance or the amounts that were considered collectible. IRS also had problems in substantiating nonpayroll expenses and reconciling appropriations available for expenditure with Treasury’s central accounting records. This occurred primarily because IRS was unable to provide support for when and if certain goods and services were received. Not having this support made IRS vulnerable to receiving inappropriate interagency charges and seriously undermined any effort to provide reliable, consistent cost or performance information. In our March 6, 1996, testimony, we also noted that IRS’ attempts to modernize its tax processing systems were at serious risk because of management and technical weaknesses. For example, IRS did not have a comprehensive business strategy to cost-effectively reduce paper submissions, and it had not yet fully developed and put in place the requisite management, software development, and technical infrastructures necessary to successfully implement its modernization program. In addition, IRS failed to manage the selection and acquisition of information systems, develop adequate cost-benefit analyses for its modernization proposals, implement quality assurance metrics for its software development projects, require adequate systems and acceptance testing, or define standard interfaces before implementing interconnecting systems. Over the past 4 years, we made 59 recommendations to improve IRS’ financial management systems and reporting and numerous other recommendations to improve IRS’ modernization efforts. 
IRS agreed with most of our recommendations and was working to improve its financial management and information technology capability and operations. However, we noted that many difficult problems remained, and IRS needed to intensify and sustain its efforts. IRS Operations: Critical Need to Continue Improving Core Business Practices (GAO/T-AIMD/GGD-96-188, Sept. 10, 1996) and Internal Revenue Service: Business Operations Need Continued Improvement (GAO/AIMD/GGD-96-152, Sept. 9, 1996). In testimony before the Senate Committee on Governmental Affairs, following on our report to the Chairman and Ranking Minority Member, we discussed the problems IRS was experiencing in fulfilling its business vision, overcoming management and technical weaknesses in its TSM effort, and improving the reliability of its financial management systems. We said that IRS needed to develop an effective implementation strategy for achieving its business vision. We suggested that such a strategy should include developing the capacity to make sound investments in information technology, building the necessary in-house technical expertise needed to effectively manage its TSM projects, and addressing the serious financial management problems that affect the credibility of its financial information. IRS, the Department of the Treasury, and the Office of Management and Budget needed to ensure that IRS’ information management initiatives are promptly and fully implemented; Congress should consider limiting IRS funding for TSM to critical and cost-effective projects; and the National Commission on Restructuring the Internal Revenue Service would have a leading role in evaluating IRS’ operations and recommending organizational, management, and operating changes. Tax Systems Modernization: Actions Underway But Management and Technical Weaknesses Not Yet Corrected (GAO/T-AIMD-96-165, Sept. 
10, 1996) and Tax Systems Modernization: Cyberfile Project Was Poorly Planned and Managed (GAO/AIMD-96-140, Aug. 26, 1996). In testimony before the Senate Committee on Governmental Affairs, we discussed IRS’ efforts to modernize its tax processing system. The testimony followed up on our August report on Cyberfile. Overall, we found pervasive management and technical problems with IRS’ modernization efforts. IRS had not defined a process for selecting, controlling, and evaluating its technology investments; had not completed procedures for requirements management, quality assurance, configuration management, and project planning and tracking; and had not defined its systems, security, and data architectures. A Treasury report acknowledged that IRS did not have the capability to develop and integrate its TSM effort, and although it directed that IRS obtain additional contractual help to do so, it also acknowledged that IRS did not have the capability to successfully manage all of its current contractors. For example, IRS’ Cyberfile project was intended to enable taxpayers to prepare and electronically submit their tax returns via personal computers. However, IRS’ selection of the Commerce Department’s National Technical Information Service (NTIS) to develop Cyberfile was not based on sound analysis of NTIS’ ability to develop and operate an electronic filing system, and in fact, development and acquisition were undisciplined, and Cyberfile was poorly managed and overseen. In the end, Cyberfile was not delivered on time, and IRS, after advancing more than $17 million to NTIS, suspended Cyberfile’s development. During the project, IRS and NTIS failed to follow all applicable procurement laws in developing Cyberfile; NTIS circumvented procurement laws in implementing Cyberfile; Cyberfile’s obligations and costs were not accounted for properly; and adequate financial program management controls were not implemented to ensure that Cyberfile would be cost-effective. 
In the past, we made numerous recommendations to IRS relating to its systems modernization effort. Although IRS had initiated activities intended to begin responding to our recommendations, none of them had been fully implemented. At the time of our testimony, IRS did not have the effective strategic information management practices needed to manage its modernization efforts, the mature and disciplined software development processes needed to ensure that systems built would perform as intended, a completed systems architecture that was detailed enough to guide and control systems development and acquisition, and a schedule for accomplishing any of the aforementioned tasks. Further, IRS did not manage all of its current contractual efforts effectively, and its plans to use a “prime” contractor and transition much of its systems development to additional contractors were not well defined. In our testimony, we suggested that Congress consider limiting TSM spending to only cost-effective modernization efforts that support ongoing operations and maintenance; correct IRS’ pervasive management and technical weaknesses; are small, represent a low technical risk, and can be delivered in a relatively short time; and involve deploying already developed systems that have been fully tested, are not premature given the lack of a completed systems architecture, and have proven business value. Our testimony also suggested that Congress consider requiring that IRS institute disciplined systems acquisitions processes and develop plans and schedules before permitting IRS to increase its reliance on contractors. In our August report, we recommended that IRS review the breakdown in its acquisition and financial management processes and controls that permitted the mismanagement of Cyberfile and take steps to correct the weaknesses and ensure that they are not repeated on future projects.
We also recommended that the Department of Commerce review NTIS’ acquisition and financial management processes and controls that permitted it to disregard procurement laws and regulations, take steps to correct the weaknesses and ensure that they do not recur, and not permit NTIS to accept new systems development projects from other federal agencies until the weaknesses are corrected. Managing IRS: IRS Needs to Continue Improving Operations and Service (GAO/T-GGD/AIMD-96-170, July 29, 1996). In testimony before the National Commission on Restructuring the Internal Revenue Service, we discussed various challenges facing IRS as it tries to make its organization, operations, and processes more effective and efficient and improve service to taxpayers. Our testimony made the following points: One of IRS’ biggest problems has been the inefficient way in which it processes most tax returns. IRS needs to develop an effective strategy to reduce the volume of paper returns. Although IRS has taken some actions to increase the number of electronic returns filed, it does not have a comprehensive strategy to reach its electronic filing goal by 2001. IRS’ strategy for improving customer service offers promise because it is designed to improve taxpayers’ ability to get assistance from IRS and provide its employees with access to the information they need to help taxpayers. However, IRS needs to develop a framework to overcome the important managerial, technical, and human resource challenges it faces in implementing this vision. Long-standing problems continue to undermine the effectiveness of IRS’ collection programs. To address these problems, significant changes are needed in the way IRS does business. IRS has made some progress in resolving issues that prevented us from expressing an opinion on the reliability of its financial statements; however, serious financial management problems remain uncorrected. 
IRS needs to develop the capacity to make sound investment decisions in information technology. The outcome of IRS’ reengineering efforts could generate new business requirements that are not addressed by modernization projects or that make some of those projects obsolete. Financial Audit: Examination of IRS’ Fiscal Year 1995 Financial Statements (GAO/AIMD-96-101, July 11, 1996). In accordance with the Chief Financial Officers Act of 1990, we reported on our audit of IRS’ principal financial statements for fiscal year 1995. We stated that we were unable to give an opinion on those statements or to report on compliance with laws and regulations because of limitations on the scope of our work and because material weaknesses in internal controls resulted in ineffective controls for safeguarding assets from material loss and for ensuring material compliance with relevant laws and regulations. The following five management problems undermined our ability to attest to the reliability of IRS’ financial statements: The amount of total revenue ($1.4 trillion) and tax refunds ($122 billion) could not be verified or reconciled to accounting records maintained for individual taxpayers in the aggregate. The amount reported for various types of taxes, such as Social Security and excise taxes, could not be substantiated. The reliability of estimates of $113 billion for valid accounts receivable and $46 billion for collectible accounts receivable could not be determined. A significant portion of IRS’ reported $3 billion in nonpayroll operating expenses could not be verified. The amount IRS reported as appropriations available for expenditure for operations could not be reconciled fully with Treasury’s central accounting records.
IRS advised us of various steps it was taking to correct the problems our audit identified (such as development of software programs to capture detailed revenue and refund transactions), but we could not verify the results of IRS’ efforts because they were not complete. Tax Systems Modernization: Actions Underway But IRS Has Not Yet Corrected Management and Technical Weaknesses (GAO/AIMD-96-106, June 7, 1996). Pursuant to a legislative requirement, we reviewed IRS’ actions to correct the management and technical weaknesses we identified in July 1995 that jeopardized IRS’ TSM efforts. We found that although IRS had initiated several corrective actions, many of them were still incomplete; and none, either individually or in the aggregate, fully responded to any of our recommendations. For example, IRS had created an investment review board to select, control, and evaluate its information technology investments. However, the criteria for the board’s decisions were still undocumented and undefined, and IRS still had not reviewed the basis for its planned and ongoing systems. Furthermore, while IRS had completed a descriptive overview of an integrated, three-tier, distributed systems architecture, it had not completed the systems architecture nor its security and data architectures, and there was no schedule for doing so. IRS had not established an effective organizational structure to consistently manage and control TSM. IRS had planned to use software development contractors to develop TSM, yet its plans and schedules in this area were poorly defined and could not be fully understood or assessed. Moreover, as the experiences with Cyberfile and the Document Processing System projects made clear, IRS did not have the mature processes needed to acquire software and manage contractors effectively. 
We suggested that Congress consider limiting TSM spending to only cost-effective modernization efforts that support ongoing operations and maintenance; correct pervasive management and technical weaknesses; are small, represent low risk, and can be delivered quickly; and involve deploying already developed and fully tested systems that have proven business value and are not premature given the lack of a completed architecture. Tax Systems Modernization: Progress in Achieving IRS’ Business Vision (GAO/T-GGD-96-123, May 9, 1996). In testimony before the Senate Committee on Governmental Affairs, we discussed IRS’ progress in achieving its business vision for 2001 and modernizing its operations. IRS has developed a vision for 2001 that calls for organizational, technological, and operational changes affecting the way in which the agency processes tax returns, provides customer service, and ensures compliance with tax laws. IRS has made progress in modernizing its operations, but the differences between IRS’ existing operations and those proposed in its vision are great. One of the biggest problems facing IRS is its inefficient system for processing most tax returns. IRS has made little progress either in reducing the number of paper returns it processes or in delivering the new systems needed to improve paper processing. For example, IRS established a goal to receive 80 million tax returns electronically by 2001, but it lacks a comprehensive business strategy for achieving that goal. The second part of IRS’ vision is to improve service to taxpayers. Taxpayers have long had a problem reaching IRS by telephone, and when they do, IRS assistors do not always have easy access to the information needed to resolve the problems. IRS’ strategy for improving customer service includes consolidating work units, changing work processes, and increasing the use of or implementing new information systems. It is a promising strategy, but IRS faces many challenges in its implementation. 
The third part of IRS’ vision is to increase compliance with tax laws. Compliance levels have remained at 87 percent for the last several years, and the goal is to increase compliance to 90 percent. IRS based that goal on a set of assumptions that have since changed significantly—changes that could jeopardize the achievement of its goal. For example, budget and taxpayer burden concerns led IRS to postpone indefinitely the Taxpayer Compliance Measurement Program (TCMP), the results of which were to provide more up-to-date information for a new compliance research information system. We questioned IRS’ ability to make sound investment decisions until it reengineers important processes, such as tax return processing. Until clearly defined business requirements drive its modernization projects, there is no guarantee that these projects will successfully improve IRS operations. Tax Administration: IRS’ Fiscal Year 1996 and 1997 Budget Issues and the 1996 Filing Season (GAO/T-GGD-96-99, Mar. 28, 1996). In testimony before the Subcommittee on Oversight, House Committee on Ways and Means, we discussed budget issues facing IRS in fiscal years 1996 and 1997 and provided interim results on our review of the 1996 filing season. IRS’ 1996 appropriation totaled $7.3 billion—$860 million less than what the President had requested and $160 million less than IRS’ fiscal year 1995 appropriation. To cover the resulting labor cost shortfall, IRS officials said that they reduced travel and overtime costs, cash awards, hours for seasonal staff, and the number of nonpermanent staff. IRS wanted to ensure that it had enough staff to process returns and issue refunds, so most of the cuts were absorbed by compliance programs. IRS requested nearly $8 billion for fiscal year 1997, an increase of $647 million from fiscal year 1996. The largest increases were for compliance initiatives and TSM, two areas that have been plagued by problems in the past.
Concerning TSM, we expressed the belief that IRS could not make effective use of systems development funds at that time because it had not yet corrected managerial and technical weaknesses that we had identified in July 1995. The interim results of our review of the 1996 filing season indicated that IRS was delaying fewer refunds than it did in 1995 while it validated Social Security numbers (SSN) and Earned Income Credit (EIC) claims; taxpayers appeared to be having an easier time reaching IRS by telephone, although accessibility was still low; and more taxpayers were using alternative filing methods. Tax Systems Modernization: Management and Technical Weaknesses Must Be Overcome to Achieve Success (GAO/T-AIMD-96-75, Mar. 26, 1996). In testimony before the Senate Committee on Governmental Affairs, we discussed the status of IRS’ TSM program. IRS had spent more than $2.5 billion on TSM through fiscal year 1995 and planned to spend a total of up to $8 billion through 2001. TSM is central to IRS’ vision of a paper-free work environment in which taxpayer account updates are rapid and taxpayer information is readily available to IRS employees responding to taxpayer inquiries. 
We found that IRS still did not have a comprehensive strategy to maximize electronic filings; assumed that it would receive specified funding for systems development and technology, yet was unable to assure Congress that it could spend its 1996 and future TSM appropriations judiciously and effectively; did not have a complete and integrated systems architecture; did not have a single entity that had the responsibility and authority to control all of its information systems projects; did not have key planning documents, risk analyses and alternate solutions, or a formal process to define, manage, and control its Cyberfile project; failed to provide adequate physical security and software controls for Cyberfile; and could not adequately document many of its financial transactions, including revenue collections, or reconcile its accounts with those at the Department of the Treasury. Status of Tax Systems Modernization, Tax Delinquencies, and the Potential for Return-Free Filing (GAO/T-GGD/AIMD-96-88, Mar. 14, 1996). In testimony before the Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations, we discussed IRS’ efforts to manage and implement its multibillion dollar TSM program and to collect tens of billions of dollars in tax debts. We also discussed the potential for implementing a return-free filing system. Regarding TSM, we discussed weaknesses in IRS’ electronic filing strategy; strategic information management; software development; systems architecture, integration, and testing; and accountability and control of systems modernization. These weaknesses must be corrected if TSM is to succeed. In the tax collection area, we noted that IRS’ efforts are hampered by inaccurate and unreliable information, antiquated computer systems and a rigid collection process, unintended problems with implementing safeguards against taxpayer abuses, a lack of accountability in its organizational structure, and staffing imbalances. 
As a result, IRS cannot accurately identify how much money the government is owed or how much of the debt is collectible. We also discussed how return-free filing could reduce the filing burden of many taxpayers while reducing the amount of paper IRS must process. We discussed two forms of return-free filing: “final withholding” and “agency reconciliation,” both of which would require several changes to the current tax system. In the final withholding system, the withholder of income taxes determines the taxpayer’s liability and withholds that amount. While taxpayers would save millions of hours in preparation time and millions of dollars in preparation costs, employers would incur additional administrative costs associated with their increased responsibilities. In the agency reconciliation system, the tax agency determines the taxpayer’s tax liability based on information documents. Under this system, taxpayers would also save millions of hours in preparation time and millions of dollars in preparation costs. With either system, IRS would save millions of dollars in processing costs. IRS Staffing Trends (GAO/GGD-96-73R, Jan. 31, 1996). In a letter to the Chairman, Senate Committee on Finance, we provided information on IRS’ staffing trends from fiscal years 1981 through 1996. This information updated what we had reported in 1993. We showed staffing data from two sources—IRS’ annual reports and budgets. The annual report data showed higher staffing levels because that source included staffing that was funded through reimbursements from other agencies. Tax Administration: Income Tax Treatment of Married and Single Individuals (GAO/GGD-96-175, Sept. 3, 1996). This report, prepared at the request of Senator Orrin G. Hatch, provides information on income tax provisions in the Internal Revenue Code (IRC) that potentially create “marriage penalties” or “marriage bonuses,” with respect to the tax liability of married couples.
A marriage penalty results when two married persons have a greater tax liability than two single persons with the same total income. Conversely, a marriage bonus results when a married couple owes less tax than two similarly situated single persons. We identified 59 IRC provisions in which tax liability depends at least partially on the taxpayer’s marital status. They include three sections most commonly discussed in connection with marriage penalties and bonuses: those on the tax rate, the standard deduction, and the EIC. The 59 provisions can be grouped into four categories. One group of nine provisions, such as those on the tax rate and Social Security taxation, makes some adjustment for the differences between joint and single income, but adjustments for married couples filing jointly are less than twice those allowed for single taxpayers. A second group of 15 provisions, such as those limiting capital losses and the home-mortgage interest deduction, includes only one limitation that applies equally to married and single taxpayers. A third group of nine provisions, such as those allowing a personal exemption, treats married couples as if they were single individuals or provides couples with twice the benefit allowed a single person. A fourth group of 26 provisions treats a married couple as a unit for tax purposes. The single most important factor that determines a provision’s effect on married couples is how income is divided between spouses. Fifty-six of the 59 tax provisions could result in a marriage penalty or bonus depending on the taxpayer’s individual circumstances. Couples with disparate incomes generally could enjoy a marriage bonus, while couples with equivalent incomes generally incur a marriage penalty. Other factors affecting a couple’s tax liability include which spouse owns property, has capital gains or losses, and is qualified for tax deductions and credits.
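The penalty-or-bonus mechanics can be shown with a small arithmetic sketch. The schedules below are hypothetical (the 15 and 28 percent rates and the 30,000/50,000 thresholds are invented for illustration, not taken from the IRC), but they mirror the first group of provisions, in which the joint-filing adjustment is less than twice the single one:

```python
# Illustrative only: hypothetical two-bracket rate schedules, not actual
# tax tables. The joint threshold (50,000) is deliberately less than twice
# the single threshold (2 x 30,000), which is what creates the penalty.

def tax(income, threshold, low_rate=0.15, high_rate=0.28):
    """Tax under a two-bracket schedule: low_rate up to threshold, high_rate above."""
    if income <= threshold:
        return income * low_rate
    return threshold * low_rate + (income - threshold) * high_rate

def tax_single(income):
    return tax(income, threshold=30_000)

def tax_joint(income):
    return tax(income, threshold=50_000)

# Equal earners incur a penalty: filing jointly costs more than the same
# total income taxed as two single persons.
married = tax_joint(30_000 + 30_000)              # 10,300
as_singles = 2 * tax_single(30_000)               # 9,000
print(round(married - as_singles))                # 1300 penalty

# A one-earner couple enjoys a bonus: the joint schedule taxes one large
# income more lightly than the single schedule would.
married = tax_joint(60_000 + 0)                   # 10,300
as_singles = tax_single(60_000) + tax_single(0)   # 12,900
print(round(as_singles - married))                # 2600 bonus
```

Shifting income between the spouses while holding the total fixed is the only thing that changes here, which illustrates the report's point that how income is divided between spouses is the single most important factor.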
Insufficient data prevented us from quantifying the number of taxpayers potentially subject to the various marriage penalties and bonuses described in the report. Tax Administration: IRS Is Improving Its Controls for Ensuring That Taxpayers Are Treated Properly (GAO/GGD-96-176, Aug. 30, 1996). Allegations of taxpayer abuse prompted Congress to pass the Taxpayer Bill of Rights in 1988. Six years later, we issued a report urging IRS to strengthen its controls for ensuring that taxpayers are treated properly. Pursuant to a request from the Chairman, Senate Committee on Finance, we examined the adequacy of IRS’ controls to protect against abuse of taxpayers; the extent of information available concerning abuse allegations received and investigated by IRS, Treasury’s Office of the Inspector General (OIG), and the Department of Justice; and the OIG’s role in investigating abuse allegations. IRS had initiated actions to implement many of the recommendations we made in our 1994 report and had initiated other actions in anticipation of provisions included in Taxpayer Bill of Rights 2 (P.L. 104-168). Among other things, IRS was improving controls over employee access to computerized taxpayer accounts; establishing an expedited appeals process for some collection actions; and classifying recurring taxpayer problems by major issues, such as penalties imposed on taxpayers. Those actions, if effectively implemented, could improve IRS’ overall treatment of taxpayers and better protect against taxpayer abuse. Despite IRS’ actions, we were unable to reach a conclusion on the overall adequacy of its controls because it did not yet have the capability to capture needed management information. IRS was establishing a tracking system for taxpayer complaints and reviewing its management information systems to determine the best way to capture relevant information for that system.
If effectively designed, such a system could better allow IRS to ensure that instances of taxpayer abuse are identified and addressed and that actions are taken to prevent their recurrence. We were not able to determine the extent to which allegations of taxpayer abuse were received and investigated because the information systems at IRS, Treasury OIG, and Justice did not include specific data elements on taxpayer abuse. Treasury OIG officials said that they generally handled allegations involving IRS executives but often referred allegations involving lower level managers to IRS for investigation and administrative action. Although we did not independently test the effectiveness of this arrangement, we found no evidence to suggest that these allegations were not being properly handled. Tax Administration: Tax Compliance of Nonwage Earners (GAO/GGD-96-165, Aug. 28, 1996). At the request of the Chairman, Subcommittee on Oversight, House Committee on Ways and Means, we provided information regarding the tax implications of the rapid growth in nonwage income. Nonwage income has grown significantly since 1970, when it accounted for 16.7 percent of total income for individuals. For tax year 1992, the most recent available information at the time we did our work, nonwage income accounted for over 23 percent (or $859 billion) of total income for individuals. Individual tax returns showing only nonwage income increased from about 10 percent in 1970 to over 15 percent in 1992. The six largest sources of nonwage income—pensions, interest, self-employment, capital gains, dividends, and partnerships/subchapter S corporations—accounted for 91.6 percent of all nonwage income reported for 1992. Pension income was by far the largest and fastest growing source of nonwage income. 
IRS data showed that taxpayers who earned most of their income from nonwage sources were more likely to have problems paying their taxes than were wage earners and, as a result, owed more delinquent taxes than did wage earners. Approximately 74 percent of IRS’ inventory of tax debt for individual taxpayers at the end of fiscal year 1993 was owed by taxpayers with primarily nonwage income. Taxpayers with self-employment, interest, and dividend income accounted for about two-thirds of this nonwage income. The proportion of the taxpayer population reporting nonwage income will continue to increase as the population ages because pension income is the largest and fastest growing source of nonwage income. Options for improving timely tax payments on nonwage income include withholding income taxes on more sources of nonwage income, increasing taxpayer awareness of tax payment responsibilities for nonwage income, and modifying the estimated tax payment system. Tax Administration: Issues in Classifying Workers as Employees or Independent Contractors (GAO/T-GGD-96-130, June 20, 1996). In testimony before the Subcommittee on Oversight, House Committee on Ways and Means, we discussed the classification of workers for federal tax purposes. To determine the Social Security and unemployment taxes they need to pay on employee wages, employers must classify workers as employees or independent contractors. If workers are determined to be employees, employers must (1) withhold and deposit income and Social Security taxes from wages paid and (2) pay unemployment taxes and the employers’ share of Social Security taxes. Independent contractors pay their own Social Security and income taxes. For 1984, the last time IRS made a comprehensive estimate, IRS estimated that about 750,000 of the more than 5 million employers (15 percent) misclassified 3.4 million employees as independent contractors.
This noncompliance resulted in an estimated loss of $1.6 billion in Social Security, unemployment, and income taxes. IRS studies indicate that independent contractors, compared with employees, have a much lower level of income tax compliance and account for a higher proportion of the income tax gap. IRS completed 12,983 employment tax examination program audits from 1988 to 1995. Those audits recommended $830 million in employment tax assessments and reclassified 527,000 workers as employees. Costs and unclear classification rules can contribute to this noncompliance. Employers can lower their costs, such as payments of employment taxes or benefits, by using independent contractors. Also, many employers struggle in making the classification decision because of unclear rules. Employers cannot be certain that their classification decisions will withstand challenges by IRS. If not upheld, they risk large retroactive tax assessments. Two approaches that could boost independent contractor compliance within existing classification rules include (1) improving information reporting on payments made to independent contractors and (2) withholding income taxes from such payments. These approaches could also increase, to some extent, the burdens on independent contractors and employers that use them. Tax Research: IRS Has Made Progress But Major Challenges Remain (GAO/GGD-96-109, June 5, 1996). IRS is changing its tax compliance philosophy. Although it will continue to use enforcement to catch noncompliance, IRS is trying to induce compliance through nonenforcement work, such as taxpayer assistance and education. This new approach involves researching ways to improve compliance for specific market segments—groups of taxpayers who share characteristics or behaviors. IRS wants to boost total compliance to 90 percent by 2001 and believes that its new compliance research approach will help meet this goal. 
Taxpayer compliance in paying taxes owed has remained steady during the past 20 years at about 87 percent, and IRS estimates annual tax losses from noncompliance at more than $100 billion. In a report to the Commissioner of Internal Revenue, we discussed IRS’ new compliance research approach. Our analysis indicated that the success of this new approach would depend on the support it received throughout IRS, the availability of objective compliance data and skilled research staff, the infrastructure for organizing and managing the research, and the measures used to evaluate how well the new approach works. We found only mixed support for the new research approach. Many IRS officials questioned whether it would help achieve the 90-percent compliance goal, and there was disagreement over the initiative’s national focus, with many officials preferring a district-level one. In addition, IRS did not have a reliable source of objective compliance data, did not have research staff with the specialized skills needed to achieve the initiative’s research objectives, and had not completed its development of a management infrastructure or a process for measuring the results of the new approach. We recommended that IRS develop an approach for monitoring the effectiveness of mechanisms established to build support for the new approach as well as for the staff-sharing and training efforts that were under way, devise a method to better ensure that reliable compliance data are available when needed, and establish milestones for completing the management infrastructure and performance measures. Internal Revenue Service: Results of Nonfiler Strategy and Opportunities to Improve Future Efforts (GAO/GGD-96-72, May 13, 1996). At the beginning of fiscal year 1993, IRS had an inventory of about 10 million individual and business nonfilers. IRS estimated that unpaid taxes on nonfiled individual income tax returns for 1992 alone totaled more than $10 billion. 
Concerned about this noncompliance, IRS began a strategy in fiscal year 1993 to bring nonfilers into the system and keep them there. We reviewed IRS’ Nonfiler Strategy to assess the results and to determine whether there were opportunities to improve future nonfiler efforts. IRS took several positive steps to achieve the goals of the Nonfiler Strategy. Among other things, the Examination function deployed staff to work on nonfiler cases; other IRS functions increased their emphasis on nonfiler activities; and IRS eliminated old cases from inventory, established cooperative working arrangements with states and the private sector, and implemented a refund hold program. According to IRS, the Nonfiler Strategy was generally a success. Among other things, IRS reduced the size of the nonfiler inventory; eliminated unproductive cases, which allowed it to focus enforcement resources more effectively; and increased the number of returns secured from individual nonfilers. However, the results of the Strategy were less conclusive when compared with its goals. IRS achieved its goal of reducing the backlog of nonfiler investigations. However, there was insufficient information with which to judge whether voluntary taxpayer compliance improved as a result of the Strategy, and the absence of comprehensive cost data made it difficult to assess return on investment. We identified several areas in which opportunities existed to improve any future IRS effort directed at nonfilers. Those opportunities related to the time that elapsed before IRS made telephone contact with nonfilers, the use of higher graded staff to perform tasks that might be effectively done by lower graded staff, and procedures for dealing with nonfilers who are brought into compliance and then become nonfilers again. 
Besides making recommendations on those areas, we recommended that IRS establish measurable goals and develop comprehensive data on program costs to better assess the results of future nonfiler efforts. Tax Administration: Audit Trends and Results for Individual Taxpayers (GAO/GGD-96-91, Apr. 26, 1996). In response to a request from the Chairman, Subcommittee on Oversight, House Committee on Ways and Means, we provided information on the trend in IRS’ audit rates for individual returns (i.e., the percentage of individual income tax returns filed that are audited) and the overall results of IRS’ most recent audits of individual returns. The audit rate for individuals decreased between fiscal years 1988 and 1993 from 1.57 percent to 0.92 percent, which, according to IRS officials, resulted from an increase in the total number of returns filed, the additional time spent auditing complex returns, and an overall reduction in examination staffing. The audit rate rebounded over the next 2 years, increasing to 1.67 percent by fiscal year 1995. IRS officials attributed the increase to the involvement of district office auditors in pursuing nonfilers and an emphasis on reviewing EIC claims—work that historically was not counted as audits. Audit rates varied widely by geographic location, with the rates being the highest in the western regions of the country and lowest in the middle regions. In general, audits of the highest income individuals recommended as much as 4 to 5 times more additional tax per return than did audits of the lowest income individuals. However, the amount of additional recommended tax per audit hour for the highest income individuals was less than twice that for the lowest income filers because of the time needed to audit the more complex returns of the former. Tax Administration: Alternative Strategies to Obtain Compliance Data (GAO/GGD-96-89, Apr. 26, 1996).
In October 1995, IRS decided to postpone indefinitely the 1994 Taxpayer Compliance Measurement Program because of budget concerns. In addition, Congress, taxpayer groups, paid preparers, and others exerted considerable pressure to cancel the program because of its cost to and burden on taxpayers. For more than 30 years, this program has been IRS’ primary means for gathering comprehensive and reliable taxpayer compliance data. IRS has used the data to identify areas in which tax law needs to be changed to improve voluntary compliance, estimate the tax gap and its components, and objectively select returns to audit that have the most potential for noncompliance. In this report to the Commissioner of Internal Revenue, we assessed the potential effects on IRS’ compliance programs of postponing the 1994 TCMP and identified some potential short- and long-term TCMP alternatives. IRS officials told us that they did not know how IRS would obtain the taxpayer compliance data it needs, and they recognized that the loss of 1994 TCMP data could increase compliant taxpayers’ burden over the long term because audits may become less targeted. To mitigate data losses over the short term, IRS could employ several alternatives, including doing a smaller survey. A limited survey would reduce the quantity and quality of the data collected, but still provide statistically valid national compliance data. Without TCMP, IRS must determine how it will measure compliance over the long term, since its workload and future revenues depend on taxpayers’ voluntary compliance. Long-term alternatives include conducting small multiyear TCMP-type audits from smaller samples and combining the data from several years to ensure the necessary precision and coverage, using data from operational audits to assess compliance changes, and conducting periodic national mini-TCMP audits to identify emerging issues. 
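The multiyear alternative rests on a standard sampling result: the precision of a pooled estimate depends on the combined sample size. The sketch below uses hypothetical sample sizes and an assumed 13 percent noncompliance rate (the complement of the roughly 87 percent compliance level cited elsewhere in this section), and it assumes compliance is stable across the pooled years, which is the key condition for combining them:

```python
import math

# Hypothetical figures for illustration only: annual samples of 10,000
# audits versus a single large 50,000-audit sample, and an assumed true
# noncompliance rate of 13 percent.
p = 0.13  # assumed population noncompliance rate

def standard_error(p, n):
    """Standard error of a sample proportion from a simple random sample of size n."""
    return math.sqrt(p * (1 - p) / n)

full_sample = standard_error(p, 50_000)   # one large single-year sample
one_year = standard_error(p, 10_000)      # one small annual sample
pooled = standard_error(p, 5 * 10_000)    # five annual samples combined

# A single small sample is noticeably less precise...
assert one_year > full_sample
# ...but pooling five years of small samples, if compliance is stable,
# matches the large sample's precision exactly.
assert math.isclose(pooled, full_sample)
```

Because the standard error shrinks with the square root of sample size, each year's small sample is individually imprecise, which is why the report notes the alternatives provide less comprehensive data even though the pooled estimate eventually reaches the needed precision.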
Each of these alternatives would be cheaper and less burdensome to IRS and taxpayers than the proposed TCMP sample but would also provide less comprehensive compliance data. We noted that regardless of how IRS decides to replace the information that would have been provided by TCMP, it was important for IRS to begin soon because any alternative is likely to require several years to put into place, and the data will be needed to update information on IRS’ compliance programs. We recommended that IRS identify a short-term strategy to minimize the negative effects of losing the compliance information that the postponed TCMP would have provided and develop a cost-effective, long-term strategy to ensure the continued availability of reliable compliance data. Tax System: Issues in Tax Compliance Burden (GAO/T-GGD-96-100, Apr. 3, 1996). Businesses and individuals spend time and money—and sometimes experience frustration—trying to comply with federal, state, and local tax requirements. We refer to this experience as the “taxpayer compliance burden.” In testimony before the Subcommittee on National Economic Growth, Natural Resources, and Regulatory Affairs, House Committee on Government Reform and Oversight, we discussed information that we collected on the federal tax compliance burden from businesses, tax accountants, tax lawyers, representatives of tax associations, and IRS staff. According to those interviewed, the compliance burden is caused by the tax code’s complexity, ambiguous language, and frequent changes. As a result, many businesses are uncertain about what they must do to comply with the code. Recordkeeping, time-consuming calculations, the interplay of state and local tax requirements, and IRS’ administration of the tax code add to the burden. Concerning the latter, the officials who were interviewed identified several problems.
Primarily, those problems centered around the tax knowledge of IRS’ auditors, the clarity of IRS’ correspondence and notices, and the amount of time IRS takes to issue regulations. Estimating businesses’ tax compliance burden and costs would be difficult since businesses do not collect the data needed to make reliable estimates. The greatest reduction in the tax compliance burden could be gained by simplifying the tax code. Return-free filing alternatives used in other countries could reduce individual taxpayers’ tax compliance and IRS administrative burdens; but employers, tax preparers, and state tax systems could be further burdened or adversely affected. Reducing businesses’ and individuals’ tax compliance burdens will be difficult because of the tax policy trade-offs, such as revenue, equity, and social and economic issues. Tax Administration: IRS Can Improve Information Reporting for Original Issue Discount Bonds (GAO/GGD-96-70, Mar. 15, 1996). Information reporting is a vital tool for promoting voluntary compliance with U.S. income tax laws. This reporting, which is done through a series of returns designed to report nonwage income on IRS Form 1099, is intended to ensure that taxpayers know of and report investment and other income on their tax returns. In a report to the Commissioner of Internal Revenue, we provided information on IRS’ efforts to ensure that taxpayers report investment income earned from bonds sold at original issue discount (OID), focusing on the completeness and use of IRS Publication 1212, List of Original Issue Discount Instruments. IRS asserted that brokerage firms, banks, and investment managers could rely on Publication 1212 to identify all publicly offered OID bonds and compute OID income; however, many OID bonds were missing from Publication 1212. We identified at least 37 bonds worth billions of dollars that should have been listed but were not. 
IRS’ source of information for Publication 1212 is Form 8281, Information Return for Publicly Offered Original Issue Discount Instruments, which is filed by bond issuers. IRS primarily relied on its sizable penalty (up to $50,000) to ensure that OID bond issuers file Form 8281. However, no IRS organization had primary responsibility for monitoring such compliance, and there was no evidence that IRS had ever assessed the penalty for failure to file or for late filings. IRS did not use any other information it received, such as corporate tax returns, to help ensure compliance with Form 8281 reporting requirements. Because Publication 1212 was not complete, those who relied on it to determine their information reporting requirements could not be sure they were reporting all OID income to IRS or to bond owners. We recommended that IRS assign responsibility for monitoring and enforcing the OID bond issuance reporting requirements to specific organizational units and that IRS develop procedures, such as computer matching, to help ensure that all OID information is listed in Publication 1212. We also recommended that IRS work with representatives of the securities industry to develop means to inform and remind OID bond issuers of their responsibility to file Form 8281. Tax Administration: Diesel Fuel Excise Tax Change (GAO/GGD-96-53, Jan. 16, 1996). At the request of Senator Daniel Patrick Moynihan, we reported on changes in diesel fuel excise tax collections in 1994, IRS’ responses to concerns about its regulations implementing new diesel fuel taxation requirements, and incentives to evade motor fuel excise taxes or obtain false refunds. IRS data indicated that diesel fuel excise tax collections increased about $1.2 billion, or 22.5 percent, from 1993 to 1994. Treasury estimated that increased tax compliance accounted for between $600 million and $700 million of the additional collections. 
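The reported diesel collections increase can be cross-checked against the percentage it represents; a minimal sketch (the implied 1993 base and 1994 total are derived here, not figures stated in the report):

```python
# Cross-check of the diesel excise figures: a $1.2 billion increase that
# represents 22.5 percent of 1993 collections implies a 1993 base of roughly
# $5.3 billion and a 1994 total near $6.5 billion. Both totals are derived;
# the report states only the increase and the percentage.
increase = 1.2        # billions of dollars
pct_increase = 0.225  # 22.5 percent

implied_1993 = increase / pct_increase
implied_1994 = implied_1993 + increase
print(round(implied_1993, 1), round(implied_1994, 1))  # 5.3 6.5
```

Against that implied base, the $600 million to $700 million that Treasury attributed to improved compliance would account for roughly half of the increase.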
IRS addressed two major concerns about the dyeing requirements in its regulations by settling on one dye color (red) rather than two and delaying any action on using colorless markers to indicate tax-free fuel until the effectiveness of the dyeing program could be determined. However, IRS had not addressed several concerns, including the taxation of diesel fuel additives, the appropriate concentration level of red dye to avoid possible adverse effects on engines, the potential impact on the availability of diesel fuel for recreational boating use, requests that the state of Alaska be exempted from the dyeing requirements, and the effects of using dyed fuel in intercity buses on companies’ operating costs and their ability to comply with EPA regulations. The new diesel taxation approach appeared to be raising significant additional revenue; however, anyone who wished to defraud the system continued to have significant incentives to do so. For example, evading just the federal tax on an 8,000-gallon truckload of diesel fuel would yield an illicit profit of $1,920. IRS had detected several fraudulent schemes involving refunds of gasoline or diesel fuel excise taxes, but it did not know how extensive this fraud might be. Tax Administration: Audit Trends and Taxes Assessed on Large Corporations (GAO/GGD-96-6, Oct. 13, 1995). We reviewed the results of IRS’ efforts to audit the tax returns of about 45,000 large corporations. IRS’ audits of those returns and the returns of the 1,700 largest corporations in IRS’ Coordinated Examination Program have generated about two-thirds of the additional taxes recommended from all income audits. Using IRS data, we analyzed audit trends for fiscal years 1988 through 1994, computed the rate at which taxes recommended by agents were assessed, and developed profiles of audited large corporations and compared them with large corporations that were not audited.
For every dollar invested in large corporation audits, IRS recommended $56 and ultimately assessed $15 in additional taxes for the years 1988 through 1994. IRS invested more hours in directly auditing large corporations but recommended less additional tax per hour invested in 1994 than in 1988. In 1994, IRS spent 25 percent more hours than in 1988 but audited only 3 percent more returns. Further, in 1994 constant dollars, additional taxes recommended decreased 4 percent in total, decreased 7 percent per return audited, and decreased 23 percent per audit hour. In 1994, large corporations appealed 66 percent of the additional taxes that IRS recommended in its audits. Between 1988 and 1994, IRS assessed 27 percent of the recommended additional taxes either after agreement or resolution in appeals. This assessment rate varied widely when disaggregated. For example, among the 20 IRS districts that each recommended over $100 million in additional taxes during the 7 years we analyzed, the rate ranged from as high as 103 percent to less than 1 percent. The reasons for such disparate rates were not apparent. IRS believed that the assessment rate was not an accurate measure of audit effectiveness since various factors outside the audit (such as economic conditions or net operating losses) could lower the rate. Our profiles of large corporations showed that most were engaged in either manufacturing or the finance and insurance industries in 1992. Audited corporations tended to report higher average incomes, tax liabilities, and other tax amounts than nonaudited corporations. Tax Administration: Information on IRS’ Taxpayer Compliance Measurement Program (GAO/GGD-96-21, Oct. 6, 1995). In response to a request from Representative Joseph K. Knollenberg, we provided information on IRS’ TCMP for tax year 1994.
IRS had generally taken appropriate action on the concerns raised in our 1994 report that dealt with meeting milestones for starting TCMP audits, testing TCMP database components, developing data collection systems, and collecting and analyzing data. Because of uncertainties about its fiscal year 1996 budget, IRS had delayed the start of its TCMP audits from October 1 to December 1, 1995. This delay was fortuitous because IRS had not completed testing the TCMP database components and data collection systems. IRS had planned to collect data on partners, shareholders, and misclassified workers as we suggested in our 1994 report. These additional data should have allowed IRS to improve its measure of compliance levels. IRS had also planned to have auditors computerize some of their comments on audit findings, which should have made it easier for researchers to analyze TCMP results. However, IRS had not developed a research plan that could be used to analyze final TCMP results. We knew of no other IRS or third-party data that could be used to develop return selection formulas that would allow IRS to target its audits as effectively as TCMP data, and we noted that TCMP could be very useful, not only for improving compliance in the existing tax system but also as a tool for designing and administering a new system. Tax Administration: Making IRS’ Telephone Systems Easier to Use Should Help Taxpayers (GAO/GGD-96-74, Mar. 11, 1996). In a report to the Chairman, Subcommittee on Oversight, House Committee on Ways and Means, we discussed IRS’ efforts to develop interactive telephone systems, focusing on IRS’ efforts to make the telephone systems easier to use, meet security requirements for protecting taxpayer data, and assign “process owners” who would be responsible for providing developers with input about systems’ requirements.
Three prototype interactive telephone systems—designed to reduce correspondence between IRS and taxpayers and to make IRS more accessible—suffered from too many menu options and other problems. IRS’ telephone-routing system required taxpayers to remember up to eight menu options—even though guidelines called for no more than four—and did not allow taxpayers to return to the main menu when they made a mistake or wanted to resolve other issues. IRS was aware of the menu problems, but it believed that multiple options were necessary because tax issues are complex. IRS had yet to do a cost-benefit analysis of the use of multiple toll-free numbers, which IRS officials had recommended as a solution to the problem of too many menu options. Providing taxpayers with a written, detailed step-by-step description on how to use the menu options might be another way to make the telephone systems more user friendly. IRS complied with government security requirements when developing its first three interactive telephone systems. However, future interactive systems should allow taxpayers greater access to tax information; and more secure features, such as a personal identification number, may be needed to protect taxpayer data. IRS process owners were designated late and had not provided all the requirements information needed for the telephone systems’ development. Successful IRS implementation of interactive telephone systems is critical to improving IRS customer service. IRS expects telephone assistance to double as it tries to move toward a paperless system. Resolving the shortcomings discussed in this report is essential if IRS is to achieve its goal of handling 45 percent of taxpayer calls through interactive telephone systems. We recommended that IRS conduct a cost-benefit analysis of the actions needed to overcome the problems caused by too many menu options, including multiple toll-free numbers and written instructions on how to use the interactive menus. 
Tax Administration: IRS Faces Challenges in Reorganizing for Customer Service (GAO/GGD-96-3, Oct. 10, 1995). IRS is undergoing a major effort to modernize its information systems and restructure its organization. This effort involves several components, one of which IRS calls its “customer service vision.” This vision is a plan for improving IRS’ interactions with taxpayers and includes folding parts of IRS’ field structure into 23 customer service centers. These centers would work primarily by telephone to provide taxpayer service, distribute forms, collect unpaid taxes, and adjust taxpayer accounts. They would absorb current IRS telephone operations and help to convert much of IRS’ written correspondence work to the telephone. In a report to the Chairmen and Ranking Minority Members of interested congressional committees, we reviewed the progress IRS had made toward its customer service vision. We focused on IRS’ customer service goals and its plans to meet these goals, the gap between IRS’ current operations and its vision, IRS’ progress to date in meeting its goals, current management concerns, and important challenges IRS faces. IRS’ customer service goals are to provide better service to taxpayers, use its resources more efficiently, and improve taxpayers’ compliance with tax laws. IRS plans to serve taxpayers better by improving their access to telephone service and resolving most problems with a single contact. IRS expects to improve its efficiency by having fewer work locations and automated workload management, giving customer service representatives better computer resources and nationwide access to taxpayer accounts, and converting work currently done by correspondence to the telephone. IRS expects to improve compliance by answering more taxpayer inquiries and having more timely data to follow up on compliance problems. There was a large gap between IRS’ current operations and its customer service vision. 
For example, although IRS planned to improve accessibility, taxpayers who called IRS’ toll-free telephone assistance sites in fiscal year 1994 got busy signals 73 percent of the time. IRS had made some progress toward its vision by initiating limited operations in a few new customer service centers, but only a few staff had been reassigned to those centers, and new computer and telephone systems to support the centers were still in the early stages of development and testing. A lack of clarity in management responsibilities had, to some extent, hampered IRS in implementing its customer service plans. In that regard, we noted that no single individual was responsible for the success of all the work activities and resources that were to be transferred to customer service. We also discussed the absence of owner involvement during project development and the failure of owners to establish the quality measures critical to evaluating the performance of interactive telephone systems. IRS would have to overcome several important challenges to achieve its customer service goals. Those challenges included managing the transition to a different organization while maintaining current operations, developing and effectively using new information technology, and devising ways to measure the work of the customer service centers and balance their competing workloads. We recommended that IRS clarify the criteria for assigning ownership for its modernization projects; define roles and responsibilities for those owners; and emphasize to owners the need to provide the business requirements necessary to develop, test, and implement new customer service products and services. Earned Income Credit: IRS’ 1995 Controls Stopped Some Noncompliance, But Not Without Problems (GAO/GGD-96-172, Sept. 18, 1996). At the request of the Chairman, Senate Committee on Finance, we reviewed IRS’ efforts to reduce EIC noncompliance. IRS took several steps to prevent and detect EIC noncompliance in 1995.
Those steps stopped some noncompliance; however, a number of problems remained. The up-front controls used by IRS in its Electronic Filing System identified about 1.3 million problems with SSNs on submissions from persons who were claiming EIC and prevented the affected returns from being filed electronically until the problems were corrected. However, IRS’ increased emphasis on validating SSNs on paper returns generated a workload that far exceeded its capabilities. IRS identified about 3.3 million paper returns with missing or invalid SSNs for EIC-qualifying children or dependents and delayed related refunds, but it only had the resources to follow up on 1 million cases. IRS also delayed refunds on another 4 million EIC returns that did not have any SSN problems to give itself time to check for duplicate SSNs, only to release almost all of those refunds without checking. Although IRS’ data provided some evidence of the results of its efforts in 1995, they were not sufficient to allow an overall assessment of the impact of IRS’ initiatives on EIC noncompliance. For example, IRS had not yet released the results of an EIC compliance study that it did in 1995, data on the results of the SSN verification effort for paper returns were not reported in a way that isolated tax year 1994 cases from prior years’ cases or distinguished between EIC cases and cases involving dependents, and IRS did not track what happened to returns rejected by the Electronic Filing System. We recommended that IRS consider cost-effective ways to compile the data needed to better assess the effectiveness of its efforts to combat EIC noncompliance. Tax Refund Timeliness (GAO/GGD-96-131R, June 26, 1996). At the request of the Chairman, House Committee on Ways and Means, we reviewed the processing and issuance of income tax refund payments in 1996 and compared that year’s pattern with the patterns in 1994 and 1995. 
The Chairman’s request was prompted by a concern that the timing of income tax refund payments might be manipulated to avoid reaching the debt limit. We found that the number and dollar amount of refunds issued as of the end of April 1996 either exceeded or closely approximated the end-of-April figures for 1994 and 1995 and that the average number of days for issuing refunds in 1996 was the same as in past years. Also, our tracking of returns processed by the Kansas City Service Center showed nothing unusual. Thus, we concluded that there was no evidence that any special steps were taken in 1996 to delay refunds. However, that did not mean that no refunds were delayed in 1996. As in 1995, but to a much lesser degree, IRS delayed some refunds in 1996 to verify SSNs and EIC claims. IRS Efforts to Control Fraud in 1995 (GAO/GGD-96-96R, Mar. 25, 1996). At the request of the Ranking Minority Member of the Senate Committee on Governmental Affairs, we reviewed the status and results of IRS’ efforts to reduce its exposure to fraud in 1995. IRS introduced new controls and expanded existing controls to both deter the filing of fraudulent returns and identify questionable returns once they had been filed. IRS’ efforts to deter fraud appear to have had a positive effect, but the identification of questionable returns was less successful. To deter the filing of fraudulent returns, IRS established filters to screen electronic submissions for missing, invalid, or duplicate SSNs and to prevent those returns from being filed electronically until the problems were corrected. The electronic filing filters identified 4.1 million SSN problems in 1995. IRS also expanded its process for determining the suitability of tax return preparers and transmitters who wanted to participate in the electronic filing program and eliminated the Direct Deposit Indicator because of concern that it might have encouraged filing fraud by facilitating refund anticipation loans.
To improve the identification of noncompliance on returns after they had been filed, IRS placed an increased emphasis on validating SSNs on paper returns, delayed refunds to give itself time to do those validations and to check for possible fraud, and upgraded the Questionable Refund Program. As a result, IRS prevented the issuance of millions of dollars in questionable refunds. However, IRS also identified many more SSN problems than it was able to deal with and ended up releasing the refunds without resolving the problems. It also delayed millions of refunds with valid SSNs to check for duplication but ended up releasing those refunds after several weeks without checking the SSNs. The 1995 Tax Filing Season: IRS Performance Indicators Provide Incomplete Information About Some Problems (GAO/GGD-96-48, Dec. 29, 1995). In response to a request from the Chairman, Subcommittee on Oversight, House Committee on Ways and Means, we reviewed IRS’ performance during the 1995 tax filing season. We focused on the processing of individual income tax returns and refunds, the ability of taxpayers to reach IRS by telephone, and the performance of a new IRS computer system for processing returns. IRS’ indicators showed that it generally met its 1995 filing season goals. For example, IRS received more individual income tax returns than the year before; answered 11 percent more telephone calls than expected; issued refunds faster, on average, than its 40-day goal; and accurately filled 97 percent of taxpayers’ orders for forms and publications. However, these indicators did not provide a complete assessment of the filing season and masked several serious problems.
IRS’ efforts to combat fraud, which resulted in over 7 million refunds being delayed so that IRS could verify SSNs, generated much adverse publicity that might have been alleviated if IRS had better forewarned taxpayers of potential refund delays; our tests and IRS data showed that taxpayers continued to have serious problems trying to reach IRS by telephone; and a new document-imaging system did not perform as expected, leading to increased return processing costs and lower-than-expected productivity. We recommended that IRS, if it planned to continue validating SSNs and delaying refunds in 1996, adjust its methodology for assessing refund timeliness. We also recommended that after IRS develops a measure of the accessibility of its telephone assistance, which IRS was working on at the time of our review, it add that measure to its key filing season performance indicators. Tax Administration: Electronic Filing Falling Short of Expectations (GAO/GGD-96-12, Oct. 31, 1995). Electronic filing is a cornerstone of IRS’ plan to move away from the traditional filing of paper returns. IRS established a goal of receiving 80 million tax returns electronically in 2001. In a report to the Ranking Minority Member of the Senate Committee on Governmental Affairs, we discussed IRS’ progress in broadening the use of electronic filing, the availability of data needed to develop an electronic filing strategy, and the implications for IRS if it does not significantly reduce its paper-processing workload. From 1992 through 1994, the number of returns filed electronically grew from 12.6 million to 16.4 million, an annual growth rate of 14 percent. In 1995, the number of electronic returns was expected to drop to about 14.8 million, which IRS attributed to various actions it took to crack down on refund fraud. 
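The electronic filing figures above can be checked with simple compound-growth arithmetic; a minimal sketch (projecting 6 years forward from the expected 1995 level is an assumption about how a 2001 estimate might be built from these numbers, not a method the report spells out):

```python
# Compound-growth check on the electronic filing figures. The annual growth
# rate implied by 12.6 million returns (1992) rising to 16.4 million (1994),
# applied for 6 years to the expected 1995 level of 14.8 million, lands near
# 33 million returns by 2001. Projecting from the 1995 base is an assumption.
returns_1992 = 12.6  # millions of electronic returns
returns_1994 = 16.4  # millions of electronic returns
returns_1995 = 14.8  # millions (expected)

annual_growth = (returns_1994 / returns_1992) ** 0.5 - 1  # about 14 percent
projection_2001 = returns_1995 * (1 + annual_growth) ** 6

print(f"{annual_growth:.0%}", round(projection_2001))  # 14% 33
```

Either way, the projection falls far short of IRS’ stated goal of 80 million electronic returns by 2001.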
Assuming 1995 was an aberration and the 14 percent annual growth rate of the preceding years resumed, we estimated that only about 33 million returns would be filed electronically by 2001. A major impediment to the expansion of electronic filing is its cost to the public. Taxpayers who file an electronic return through a preparer or electronic filing transmitter have to pay as much as $40 for these services. One aspect of IRS’ current electronic filing program that contributes to its cost (not only to the public but also to IRS) is the need for taxpayers to submit a paper signature document to affirm that information on the electronic return is accurate. Most of the electronic returns IRS received in 1994 were individual income tax returns that could have been filed on Form 1040A or 1040EZ—forms that are among the least costly paper returns to process. In addition, IRS had made little headway in increasing the number of electronically filed business returns, which are generally more complex and thus more costly to process on paper than individual returns. The fact that less complex returns comprise a disproportionate share of electronic filing may be, at least in part, because of IRS’ goal of 80 million returns. Focusing on that goal could cause IRS to target its efforts at groups of taxpayers or types of returns that would boost the number of electronic returns but not necessarily yield the greatest reductions in IRS’ paper-processing workload and operating costs. Although IRS has some comparative data on the cost to process electronic and paper returns, it does not have comparative data on other costs, such as storage and retrieval, that can vary depending on how a return is filed. IRS also lacks adequate data on why taxpayers do not file electronically and what it would take to get them to do so, as well as estimates of the number of electronic returns it could expect to receive from those taxpayers after some market intervention.
Without a significant increase in electronic filing, IRS’ customer service and paper-processing workloads may overwhelm its planned staffing and require changes to various aspects of its modernization efforts. IRS did not have contingency plans for that eventuality. We recommended that IRS (1) identify those groups of taxpayers who offer the greatest opportunity to reduce IRS’ paper-processing workload and operating costs if they were to file electronically and (2) develop strategies that seek to eliminate impediments that inhibit those groups from filing electronically. We also recommended that IRS adopt electronic filing goals that focus on reducing IRS’ paper-processing workload and operating costs and prepare contingency plans for the possibility that electronic filings will fall short of expectations. IRS Tax Collection Reengineering (GAO/GGD-96-161R, Sept. 24, 1996). In a letter to the Chairman, Senate Committee on Finance, we provided information on IRS’ efforts to reengineer its enforcement action program, which included the delinquent tax collection process. The enforcement action reengineering effort started in June 1994 and was suspended in November 1995. When the effort was suspended, few results had been achieved in changing work processes or addressing IRS’ long-standing collection problems. An independent consultant’s report identified several factors that hampered the reengineering effort: IRS did not (1) have the organizational commitment and support needed to achieve the level of change desired, (2) fully implement the reengineering methodology needed, and (3) integrate its reengineering efforts with the existing systems modernization program. In October 1995, IRS established a separate office to coordinate all future strategic change initiatives, like reengineering. 
In January 1996, IRS announced plans to initiate a new reengineering project focusing on the tax settlement process, which was expected to include aspects of enforcement action and the tax collection processes. IRS had also undertaken several projects to change the role of revenue officers in the collection process by modernizing their jobs. One project, for example, was to automate many manual work processes and link revenue officers to IRS’ computer databases through the use of laptop computers. Tax Administration: IRS Tax Debt Collection Practices (GAO/T-GGD-96-112, Apr. 25, 1996). IRS confronts major hurdles in collecting tens of billions of dollars in delinquent taxes. As Congress tries to balance the federal budget, these unpaid taxes have become increasingly important as have IRS’ collection efforts. We discussed IRS’ tax debt collection practices before the Subcommittee on Oversight, House Committee on Ways and Means. We expressed the belief that IRS could do more to improve its collection practices. The challenges facing IRS’ improvement efforts include a lack of accurate and reliable information on either the makeup of its accounts receivable or the effectiveness of the collection tools and programs it uses, an aged inventory of receivables, outdated collection processes, and antiquated technology. IRS is attempting to modernize its information and processing systems, but these actions will not be completed for several years. Without reliable information on the accounts they are trying to collect and the taxpayers who owe the debts, IRS employees generally do not know whether they are resolving cases in the most efficient and effective manner and may spend time pursuing invalid or unproductive cases. This lack of reliable performance data also affects IRS’ ability to target its collection efforts to specific taxpayers or types of debts. 
IRS needs a comprehensive strategy to guide its efforts to improve tax debt collections, starting with having accurate and reliable information. We recommended in May 1993 that IRS test the use of private debt collectors to support its collection efforts. Such a test could provide useful insight into the effectiveness of the techniques and technologies used in the private sector to collect older accounts. For example, IRS could learn which actions are most productive based on the type of case, type of taxpayer, and age of the account. Insular Areas Update (GAO/GGD-96-184R, Sept. 13, 1996). In a letter to the Chairman, Senate Committee on Energy and Natural Resources, we provided information on the fiscal arrangements between the U.S. government and American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, the Commonwealth of Puerto Rico, and the U.S. Virgin Islands. The letter updated information presented in our 1995 testimony. Since our 1995 testimony, the only significant change in how federal taxes apply to the territories related to the Puerto Rico and possessions tax credit—the tax credit was repealed, although existing claimants may continue to earn credits during a 10-year transition period. We also reported the following information for fiscal year 1995: IRS collected $3.3 billion from Puerto Rico for individual and corporation income taxes, unemployment taxes, estate and gift taxes, and excise taxes. The Bureau of Alcohol, Tobacco, and Firearms collected $232.4 million in excise taxes on rum shipped from Puerto Rico to the United States and $47.8 million on rum shipped from the U.S. Virgin Islands to the United States. The Treasury transferred $204.9 million and $41.7 million in rum excise tax revenue to the governments of Puerto Rico and the U.S. Virgin Islands, respectively. The Customs Service collected $138.2 million in duties in Puerto Rico, of which $96 million was transferred to the Puerto Rican government. 
Customs also collected $9.2 million in duties in the U.S. Virgin Islands, of which $4.2 million was transferred to the U.S. Virgin Islands government. Federal expenditures in the five territories totaled $11.4 billion. The expenditures included grants and other payments to the territorial governments as well as salaries and wages for federal employees. Profile of Indian Gaming (GAO/GGD-96-148R, Aug. 20, 1996). In a letter to the Chairman, House Committee on Ways and Means, we provided information on the Indian gaming industry. As of July 1996, according to data from the National Indian Gaming Commission, 177 of the 557 officially recognized Indian tribes were operating 240 gaming facilities. An additional 41 tribes had authorization to operate but had not yet opened their gaming facilities. The 85 tribes that had filed financial statements for either 1994 or 1995 reported about $3.5 billion in total gaming revenues (defined as dollars wagered minus payouts) for 110 gaming facilities. Of the 110 facilities, the 6 largest accounted for 40 percent of the revenues. On the basis of our analysis of the financial statements, we determined that about $1.2 billion had been transferred from gaming facilities to 74 of the 85 tribes. Tax Policy: Analysis of Certain Potential Effects of Extending Federal Income Taxation to Puerto Rico (GAO/GGD-96-127, Aug. 15, 1996). In response to a request from the Chairman of the House Committee on Resources and the Chairman of its Subcommittee on Native American and Insular Affairs, we provided information on some potential effects of extending the IRC to residents of the Commonwealth of Puerto Rico. Our estimates relating to federal tax liabilities were based on the income and demographic characteristics of Puerto Rican taxpayers in 1992, the latest year for which detailed data were available. 
If IRC tax rules had been applied to residents of Puerto Rico and if there were no behavioral responses to this, the residents would have owed around $623 million in federal income tax before taking into account the EIC. The total amount of EIC would have been about $574 million, leaving a net aggregate federal tax liability of about $49 million. We estimated that about 59 percent of the population who filed individual income tax returns in 1992 would have earned some EIC. The average EIC earned by eligible taxpayers would have been $1,494. We estimated that 41 percent of the households filing income tax returns would have had positive federal income tax liabilities, 53 percent would have received net transfers from the federal government because their EIC would have more than offset their precredit liabilities, and 6 percent would have had no federal tax liability. If application of federal income tax resulted in an additional $49 million in tax liability after subtracting EIC as we estimated and if the Puerto Rican government wanted to keep constant the aggregate amount of combined federal and Puerto Rican individual income tax levied on its residents, it would have had to reduce its individual income tax revenue by about 5 percent. We also accounted for the probability that some Puerto Rican residents who were not required to file Puerto Rican tax returns in 1992 would file federal tax returns because they qualified for the EIC. We estimated, as an upper limit, that those additional EIC claims could total about $64 million, and we discussed how additional EIC claims in that amount would affect the data discussed above. Program Expenses of Charities (GAO/GGD-96-125R, July 10, 1996). In a letter to Senator Daniel R. Coats, we provided information on program service expenses reported by organizations that are exempt from taxation under IRC 501(c)(3). These organizations are collectively known as charities. 
Program service expenses are those expenses directly related to the exempt purposes of the organization. Analysis of the organizational expense data was based on information collected by IRS from the long Form 990 returns filed by 121,627 charitable organizations that reported expenses. We estimated that the percentage of expenses allocated to program services for those organizations that filed was about 86 percent of total expenses for 1992. As a percentage of their total expenses, more than 60 percent of these charitable organizations had program service expenses of 80 percent or greater. Earned Income Credit: Profile of Tax Year 1994 Credit Recipients (GAO/GGD-96-122BR, June 13, 1996). At the request of the Chairman, Senate Committee on Finance, we provided information on participation in the EIC program for tax years 1990 to 1994 and the characteristics of taxpayers who received the credit in tax year 1994. Total EIC program costs increased 150 percent from 1990 to 1994, and the number of EIC recipients increased by about 50 percent, to 19.1 million. Extension of the EIC to certain childless adults in 1994 was the main reason for the recent growth in the number of EIC claimants. In tax year 1994, about 15 million families with children received $20.5 billion in EIC while 4 million childless adults received EIC totaling $700 million. Most of the taxpayers claiming the EIC for families with children filed as heads of households, and nearly 90 percent of childless adult claimants were single. The majority of EIC claimants were 25 to 44 years old, and in tax year 1994, 1.2 million EIC recipients also claimed $507 million in child and dependent care credits. EIC recipients, compared with filers who did not claim EIC, were more likely to file their returns electronically, use simpler tax forms, and use a tax preparer. EIC recipients derived their income primarily from earnings rather than investment income. 
EIC is structured so that the credit amount increases as income increases, plateaus at a maximum credit amount, and then phases out as income exceeds a certain amount. About 60 percent of taxpayers claiming EIC for families with children had income in the credit’s phase-out range. In 1995, Congress enacted an indirect wealth test to eliminate from the EIC program taxpayers with investment income over $2,350. If that investment income threshold had been in place in tax year 1994, about 284,000 taxpayers, who had claimed $212 million of EIC, would not have been able to do so. Tax Policy and Administration: Review of Studies of the Effectiveness of the Research Tax Credit (GAO/GGD-96-43, May 21, 1996). During a congressional hearing in 1995, we were asked to evaluate recent studies of the research tax credit to determine whether the evidence was adequate to conclude that each dollar of the research tax credit taken stimulates at least one dollar of research spending in the short run and about two dollars of research spending in the long run. In response to a request from Representative Robert T. Matsui, we reviewed eight studies of the research tax credit, focusing on the adequacy of the studies’ data and methods to determine the amount of research spending stimulated per dollar of foregone tax revenue and other factors that determine the credit’s value to society. We found that four studies supported the claim that, during the 1980s, the research credit stimulated research spending that exceeded its revenue cost, but the other four studies did not support the claim or were inconclusive. All of the studies had significant data and methodological limitations that made it difficult to evaluate industry’s true responsiveness to the research tax credit. Because the authors did not have access to confidential tax return data, they used publicly available data to estimate the credit’s incentive effect.
Public data were not a suitable substitute for tax return data because public data used different definitions of taxable income and research spending. Furthermore, the studies’ analytical methods, such as use of industry aggregates and failure to incorporate important tax code interactions, made their findings imprecise and uncertain. We also found little research on the latest design of the credit to determine its effect on incentives and costs. As a result, the studies’ evidence was not adequate to conclude that a dollar of research tax credit would stimulate a dollar of additional short-term research spending or about two dollars of additional long-term research spending. Moreover, the value of the research tax credit to society cannot be determined simply by comparing the amount of research spending stimulated by the credit versus the credit’s revenue cost. A comparison would have to be made between the total benefits gained by society from research stimulated by the credit and the estimated costs to society resulting from the collection of taxes required to fund the credit. Budget Issues: Selected GAO Work on Federal Financial Support of Business (GAO/AIMD/GGD-96-87, Mar. 7, 1996). The federal government provides financial benefits to businesses as a way to fulfill a wide range of public policy goals. At the request of Representative Charles F. Bass, we summarized our previously issued work on spending programs and tax benefits available to businesses. Our work showed that these benefits are spread throughout the budget, including programs in international affairs, energy, natural resources and environment, agriculture, and transportation. In cases where programs are poorly designed, including those benefiting businesses, the federal government may spend more money or lose more revenue than needed to reach its intended audience and achieve program or service goals. 
The problems with these financial support programs can be grouped into several categories: ineffective or inefficient transfers, wherein some businesses receive federal funds or tax benefits to do things they would have done anyway; preemption of market forces, wherein artificially increasing the price of goods to consumers can encourage inefficient production and increase costs; moral hazards, wherein federal programs provide incentives to businesses to undertake riskier activities than they would without the program; and duplication and working at cross-purposes, wherein several federal programs address the same problem or where one program may counteract the beneficial effects of another program. Tax Consequences of Offsets (GAO/NSIAD-96-74R, Dec. 22, 1995). In a letter to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, Senate Committee on Veterans’ Affairs, House Committee on National Security, and House Committee on Veterans’ Affairs, we provided information on the tax consequences to veterans of the required offset of certain types of Department of Defense (DOD) separation pay and Department of Veterans Affairs (VA) disability compensation. In 1980, Congress authorized DOD to provide lump-sum separation pay to service members for involuntary separation. In 1991, to assist DOD in downsizing, Congress authorized DOD to pay a higher level of lump-sum separation pay or an annual annuity to those who separate voluntarily. After separating, some veterans qualify for service-connected disability compensation from VA. Federal income taxes are withheld from separation pay. Disability compensation is tax-exempt. Federal law requires the recoupment of the gross amount of separation pay (known as an offset) from those who also receive disability compensation for the same period of service.
We were asked if separation pay could be reclassified as disability compensation for tax purposes and whether veterans could deduct recouped separation pay from gross income. According to IRS, veterans may not reclassify separation pay as disability compensation and may not deduct recouped separation pay from gross income. IRS did conclude, however, that in one limited situation, veterans may retroactively exclude part of their lump-sum separation pay from gross income if they also qualify for disability compensation. Tax Exempt (GAO/GGD-96-47R, Nov. 8, 1995; GAO/GGD-96-46R, Nov. 8, 1995; and GAO/GGD-96-29R, Oct. 10, 1995). In November 8, 1995, letters to Representatives Ernest Istook and David M. McIntosh, we analyzed IRS Statistics of Income data for certain tax exempt organizations. We noted that these data were limited in that they covered grants from all levels of government, not just the federal government, and did not include all federal grants received by exempt organizations. For Representative Istook, we provided information on grant receipts, lobbying expenditures, and political expenditures for certain organizations that were exempt from tax under IRC sections 501(c)(3) through 501(c)(9). IRS data showed that there were 259,502 organizations exempt under those code sections, of which 44,274 reported receiving government grants totaling about $42.6 billion. Of those organizations receiving grants, 1,029 reported lobbying expenditures totaling about $43.2 million, and 41 reported political expenditures totaling about $2.4 million. For Representative McIntosh, we compared the average lobbying expenditures for tax-exempt charitable organizations that received government grants with the average expenditures for charities that did not receive government grants in tax year 1992. IRS data showed that out of 122,563 charities, 2,132 reported lobbying expenditures that totaled about $75.9 million for tax year 1992.
Of the organizations that reported lobbying expenditures, 48 percent reported receiving government grants. Grantees reported average lobbying expenditures of about $41,940 per organization, and nongrantees reported average lobbying expenditures of about $29,701 per organization. In an October 10, 1995, letter to Representative Ernest Istook, we reviewed the methodology used by Dr. William Duncan to prepare his analysis entitled Non-Profit Lobbying Statistics. In his analysis, Dr. Duncan attempted to quantify the lobbying expenditures of 501(c)(3) organizations that received government grants in tax year 1992. We noted that his approach should produce reasonably accurate results if the methodology he described to us was applied to the data he identified. Summary of Selected GAO Reports (GAO/GGD-96-193R, Sept. 26, 1996). In a letter to the Co-Chairmen, National Commission on Restructuring the Internal Revenue Service, we summarized selected GAO reports about IRS that were issued in fiscal years 1991 through 1995 and in the first three quarters of fiscal year 1996. We noted that the reports identified areas that are particularly problematic to IRS, such as tax return processing, customer service, collection efforts, financial management, and information technology and indicated the formidable challenges IRS faces in making its organization, operations, and processes more effective and efficient while improving service to taxpayers. Tax Policy and Administration: 1995 Annual Report on GAO’s Tax-Related Work (GAO/GGD-96-61, Mar. 8, 1996). Pursuant to a legislative requirement, we summarized our work on tax policy and administration during fiscal year 1995. 
This report highlights notable reports and testimony from fiscal year 1995, discusses actions taken on our recommendations as of the end of 1995, discusses recommendations that we made to Congress before and during fiscal year 1995 that had not been acted upon, and lists the assignments for which we were given access to tax information under the law. Our key recommendations related to improving compliance with the tax laws, better assisting taxpayers, enhancing the effectiveness of tax incentives, improving IRS management, and improving the processing of returns and receipts.

Tax Administration: Information on IRS’ Taxpayer Compliance Measurement Program (GAO/GGD-96-21)
Tax Administration: IRS Faces Challenges in Reorganizing for Customer Service (GAO/GGD-96-3)
Tax Exempt (GAO/GGD-96-29R)
Tax Administration: Audit Trends and Taxes Assessed on Large Corporations (GAO/GGD-96-6)
Tax Administration: Electronic Filing Falling Short of Expectations (GAO/GGD-96-12)
Tax Exempt (GAO/GGD-96-46R)
Tax Exempt (GAO/GGD-96-47R)
Tax Consequences of Offsets (GAO/NSIAD-96-74R)
The 1995 Tax Filing Season: IRS Performance Indicators Provide Incomplete Information About Some Problems (GAO/GGD-96-48)
Tax Administration: Diesel Fuel Excise Tax Change (GAO/GGD-96-53)
IRS Staffing Trends (GAO/GGD-96-73R)
IRS Operations: Significant Challenges in Financial Management and Systems Modernization (GAO/T-AIMD-96-56)
Budget Issues: Selected GAO Work on Federal Financial Support of Business (GAO/AIMD/GGD-96-87)
Tax Policy and Administration: 1995 Annual Report on GAO’s Tax-Related Work (GAO/GGD-96-61)
Tax Administration: Making IRS’ Telephone Systems Easier to Use Should Help Taxpayers (GAO/GGD-96-74)
Status of Tax Systems Modernization, Tax Delinquencies, and the Potential for Return-Free Filing (GAO/T-GGD/AIMD-96-88)
Tax Administration: IRS Can Improve Information Reporting for Original Issue Discount Bonds (GAO/GGD-96-70)
IRS Efforts to Control Fraud in 1995 (GAO/GGD-96-96R)
Tax Systems Modernization: Management and Technical Weaknesses Must Be Overcome to Achieve Success (GAO/T-AIMD-96-75)
Tax Administration: IRS’ Fiscal Year 1996 and 1997 Budget Issues and the 1996 Filing Season (GAO/T-GGD-96-99)
Tax System: Issues in Tax Compliance Burden (GAO/T-GGD-96-100)
Tax Administration: IRS Tax Debt Collection Practices (GAO/T-GGD-96-112)
Tax Administration: Audit Trends and Results for Individual Taxpayers (GAO/GGD-96-91)
Tax Administration: Alternative Strategies to Obtain Compliance Data (GAO/GGD-96-89)
Tax Systems Modernization: Progress in Achieving IRS’ Business Vision (GAO/T-GGD-96-123)
Internal Revenue Service: Results of Nonfiler Strategy and Opportunities to Improve Future Efforts (GAO/GGD-96-72)
Tax Policy and Administration: Review of Studies of the Effectiveness of the Research Tax Credit (GAO/GGD-96-43)
Tax Research: IRS Has Made Progress But Major Challenges Remain (GAO/GGD-96-109)
Financial Audit: Actions Needed to Improve IRS Financial Management (GAO/T-AIMD-96-96)
Tax Systems Modernization: Actions Underway But IRS Has Not Yet Corrected Management and Technical Weaknesses (GAO/AIMD-96-106)
Earned Income Credit: Profile of Tax Year 1994 Credit Recipients (GAO/GGD-96-122BR)
Tax Administration: Issues in Classifying Workers as Employees or Independent Contractors (GAO/T-GGD-96-130)
Tax Refund Timeliness (GAO/GGD-96-131R)
Program Expenses of Charities (GAO/GGD-96-125R)
Financial Audit: Examination of IRS’ Fiscal Year 1995 Financial Statements (GAO/AIMD-96-101)
Managing IRS: IRS Needs to Continue Improving Operations and Service (GAO/T-GGD/AIMD-96-170)
Tax Policy: Analysis of Certain Potential Effects of Extending Federal Income Taxation to Puerto Rico (GAO/GGD-96-127)
Profile of Indian Gaming (GAO/GGD-96-148R)
Tax Systems Modernization: Cyberfile Project Was Poorly Planned and Managed (GAO/AIMD-96-140)
Tax Administration: Tax Compliance of Nonwage Earners (GAO/GGD-96-165)
Tax Administration: IRS Is Improving Its Controls for Ensuring That Taxpayers Are Treated Properly (GAO/GGD-96-176)
Tax Administration: Income Tax Treatment of Married and Single Individuals (GAO/GGD-96-175)
Internal Revenue Service: Business Operations Need Continued Improvement (GAO/AIMD/GGD-96-152)
IRS Operations: Critical Need to Continue Improving Core Business Practices (GAO/T-AIMD/GGD-96-188)
Tax Systems Modernization: Actions Underway But Management and Technical Weaknesses Not Yet Corrected (GAO/T-AIMD-96-165)
Insular Areas Update (GAO/GGD-96-184R)
Earned Income Credit: IRS’ 1995 Controls Stopped Some Noncompliance, But Not Without Problems (GAO/GGD-96-172)
IRS Financial Audits: Status of Efforts to Resolve Financial Management Weaknesses (GAO/T-AIMD-96-170)
IRS Tax Collection Reengineering (GAO/GGD-96-161R)
Summary of Selected GAO Reports (GAO/GGD-96-193R)

David J. Attianese, Assistant Director
Charles W. Woodward III, Evaluator-in-Charge
Elizabeth W. Scullin, Communications Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO summarized the studies it issued during fiscal year 1996 to Congress and the Internal Revenue Service (IRS) and the statements it made before Congress and the National Commission on Restructuring the Internal Revenue Service. GAO noted that it published 50 reports in 6 broad areas: (1) IRS management and budget; (2) individual and business tax issues; (3) customer service; (4) submission processing; (5) accounts receivable/collections; and (6) tax expenditures and preferences.
Most U.S. families access the health care system through health insurance coverage. Without health insurance, many families face difficulties getting preventive and basic care for their children. Uninsured women, for example, are less likely to get early and adequate prenatal care. Late and inadequate prenatal care is associated with higher rates of low birth weight and prematurity, serious illness, and handicap for children. Children without health insurance are less likely to have routine doctor visits, get care for injuries, or have a regular source of medical care. When they do seek care, they are more likely to get it through a clinic rather than a private physician or health maintenance organization (HMO). Uninsured children are also less likely to be appropriately immunized—important in preventing childhood illness. Insured children in the United States have either privately or publicly funded health insurance. In 1993, 89 percent of children with private insurance got coverage through their parents’ employment. A small percentage of children have private, individually purchased policies. An even smaller percentage are children of military personnel who get publicly funded insurance through their parents’ employment. Most children with publicly funded insurance get coverage through Medicaid. The Medicaid program is a jointly funded federal-state entitlement program that provides health insurance for both children and adults. It is implemented through 56 separate programs (including the 50 states, the District of Columbia, Puerto Rico, and the U.S. territories). States are required to cover some groups of children and adults and may extend coverage to others. Children and their parents must be covered if they receive benefits under the AFDC program. In the past, most children received Medicaid because they were on AFDC.
Children and adults may also be eligible for the program if they are disabled and have low incomes or if their medical expenses are extremely high relative to family income. Beginning in 1986, the Congress passed a series of laws that expanded Medicaid eligibility for pregnant women, infants, and children. (See table I.1 in app. I.) Before 1989, coverage expansions were optional, although many states had expanded coverage. Starting in July 1989, states had to begin covering pregnant women and infants with family incomes at or below 75 percent of the federal poverty level. The Omnibus Budget Reconciliation Acts of 1989 and 1990 imposed additional requirements that states had to implement in 1990 and 1991. By 1993, states were required to cover (1) pregnant women, infants, and children up to age 6 with family income at or below 133 percent of the federal poverty level and (2) children aged 6 to 10 (born after September 30, 1983) with family income at or below 100 percent of the federal poverty level. Current law also requires that the group of poor children over age 6 eligible for Medicaid continue to expand year by year until all poor children up to age 19 are eligible in the year 2002. In addition, states may expand Medicaid eligibility for infants and children beyond these requirements by phasing in coverage of children up to age 19 more quickly than required, by increasing eligibility income levels, or both. As of April 1995, 37 states and the District of Columbia had expanded coverage for children beyond federal requirements. (See app. I.) Children represent a large proportion of Medicaid recipients but a small proportion of Medicaid expenditures. In 1993, 49 percent of Medicaid recipients were children under age 21, but only 16 percent of Medicaid medical vendor payments were for their care. Nonetheless, Medicaid’s overall cost and the rate of cost increases have raised concerns about the program’s impact on the federal budget.
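The 1993 federal minimum coverage groups for children described above can be sketched as a simple eligibility check. This is an illustrative sketch only, not the statute: the function name and the simplified inputs (age, family income as a percentage of the federal poverty level, and the September 30, 1983, birth-date cutoff) are assumptions, and actual determinations also involve state options, family composition, and other rules.

```python
def meets_1993_federal_floor(age: int, income_pct_fpl: float,
                             born_after_sept_30_1983: bool) -> bool:
    """Hypothetical check for the minimum groups of children that
    states were required to cover under Medicaid by 1993."""
    if age < 6:
        # Infants and children under age 6: family income at or
        # below 133 percent of the federal poverty level.
        return income_pct_fpl <= 133
    if 6 <= age <= 10 and born_after_sept_30_1983:
        # Children aged 6 to 10 born after September 30, 1983:
        # family income at or below 100 percent of the poverty level.
        return income_pct_fpl <= 100
    # Older poor children were being phased in year by year (through
    # 2002) and were not yet part of the federal floor in 1993.
    return False

print(meets_1993_federal_floor(4, 120, True))    # True
print(meets_1993_federal_floor(8, 95, True))     # True
print(meets_1993_federal_floor(8, 120, True))    # False
print(meets_1993_federal_floor(12, 80, False))   # False
```

The birth-date test is what implements the phase-in: each year, one more cohort of poor children over age 6 satisfies the cutoff.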
Medicaid costs are projected to increase from about $131 billion to $260 billion by the year 2000, according to the Congressional Budget Office. The Congress is currently considering different options to lower the cost of the program, including removing guaranteed eligibility and giving capped funding to the states as block grants. Medicaid has become an increasingly important source of health insurance for low-income children as employment-based insurance has declined for both children and adults. Between 1989 and 1993, the number of children covered by Medicaid increased 54 percent—from 13.6 percent of U.S. children in 1989 (8.9 million children) to 19.9 percent in 1993 (13.7 million children). This could have led to a major decrease in the percentage of uninsured children. It did not, however, because the decrease in children covered by employment-based insurance offset the increase in U.S. children insured through Medicaid. (See fig. 1.) Comparing trends between children and adults clarifies Medicaid’s role. Between 1989 and 1993, the percentage of children with employment-based health insurance decreased 9 percent. During the same period, the percentage of adults aged 18 to 64 with such insurance decreased 7 percent. Both children and adults lost employment-based coverage. Because of the Medicaid expansion, however, the decline in employment-based insurance for children did not lead to an increase in the proportion of uninsured children. Adults had a different experience. (See fig. 2.) Between 1989 and 1993, the proportion of adults who were uninsured rose 16 percent. In contrast, the proportion of children who were uninsured was similar in 1989—13.3 percent—and 1993—13.5 percent. Comparing the experience of adults and children suggests that expanding Medicaid for children did not displace privately purchased individual insurance. The proportions of children and adults with privately purchased insurance were similar in 1989 and changed little in 1993.
As more adults became uninsured, the proportion of adults purchasing individual policies did not increase. If Medicaid had displaced privately purchased insurance for children, the proportion of children with privately purchased insurance would have decreased, but it did not. The question of whether parents who could have employment-based insurance for their families chose to drop or refuse coverage to get Medicaid coverage for their children is more complicated. The longitudinal data that would be needed to directly answer this question are not available. Two researchers attempted to overcome this limitation by developing an economic model using CPS data from 1987 through 1992. They estimated that expanding Medicaid coverage for pregnant women and children did partially displace employment-based coverage, accounting for about 17 percent of the decline in private insurance coverage between 1987 and 1992. The rest of the decline in coverage was due to macroeconomic factors, changes in the demographic mix of the population, or changes in whether employers offered health insurance to workers and their families and how generous that coverage was. Their analysis also implies that 37 to 47 percent of the increase in children’s Medicaid coverage was linked to a reduction in employment-based insurance coverage. Although the Medicaid expansion offset the decrease in employment-based insurance, an increasing number of children either have no health insurance or depend on publicly funded health insurance. In 1993, 9.3 million children were uninsured, and 13.7 million were on Medicaid. These totals represent over one-third of U.S. children. The effect of the Medicaid expansion is clear when considering how poor children, near-poor children, and higher income children fared in the health insurance marketplace between 1989 and 1993.
During this period, the percentage of children of all income levels with employment-based insurance declined, and Medicaid coverage for all income levels expanded but to different degrees. Employment-based insurance declined most for near-poor children. (See table 1.) Meanwhile, an additional 11 percent of these children obtained coverage under Medicaid. The proportion of near-poor children who were uninsured did not change significantly. In 1993, near-poor children remained the group most likely to be uninsured and the group most likely to be on Medicaid. Poor children had a smaller decline in employment-based insurance than near-poor children and a large increase in Medicaid coverage. As a result, the percentage of poor children who were uninsured actually declined from 25 percent in 1989 to 20 percent in 1993. Unlike poor and near-poor children, children with family incomes above 150 percent of the federal poverty level were more likely to be uninsured in 1993 than in 1989. This group of children had the highest rates of employment-based coverage in 1989 (80.2 percent) and the smallest decrease in such coverage. The Medicaid expansion cushioned this group less since few of these children depend on Medicaid for their coverage. However, the percentage of these children covered by Medicaid had a small but statistically significant increase—from 7.5 percent to 9.1 percent. Because of policy changes to expand children’s eligibility, Medicaid has become a more important source of insurance for children who were less likely to get Medicaid in the past, such as children in working families. The proportion of Medicaid children who had a working parent increased between 1989 and 1993. By 1993, more than half the children on Medicaid had a working parent, and almost half did not depend on AFDC.
Medicaid coverage also increased more during this period among other children less likely to receive Medicaid in the past—children in two-parent families, children of more educated parents, white non-Hispanic and Hispanic children, and children living in the South. Changing Medicaid eligibility policies so that children are eligible on the basis of income and age, even if they are not on AFDC, allowed uninsured children of low-income working families to get health insurance through Medicaid. A significant increase in the number of children with a working parent on Medicaid has resulted. Over half of Medicaid children had a working parent in 1993. (See fig. 3.) The proportion of children in working families on Medicaid grew, and the number of Medicaid children with a working parent increased from 4 million in 1989 to 7.3 million in 1993—an 83 percent increase. Most working parents with children on Medicaid worked less than full time for the entire previous year. However, the percentage of Medicaid families with a full-time worker increased. By 1993, 1 child in 5 on Medicaid had a parent who worked full time all year. Expanding Medicaid eligibility on the basis of income and age was a major reason for the increase in Medicaid enrollment, but not the only reason. AFDC enrollment also increased between 1989 and 1993. The number of Medicaid children on AFDC or AFDC combined with other assistance increased by 1.3 million children—a 25 percent increase. But the expansion in non-AFDC children on Medicaid was much greater. Between 1989 and 1993, the number of non-AFDC children on Medicaid doubled—from 3.2 million to 6.4 million children. This greater increase in non-AFDC children increased the proportion of non-AFDC children on Medicaid. (See fig. 4.) The percentage of Medicaid children not receiving AFDC or other assistance increased from 36 percent in 1989 to 47 percent—almost half—in 1993.
The Medicaid program now serves more children in two-parent families, another group of children less likely to receive Medicaid in the past. The percentage of Medicaid children in two-parent families grew from 29.7 percent in 1989 to 35.6 percent in 1993. Children on Medicaid in two-parent families were more likely to have a working parent in 1993 than in 1989 (78.5 percent compared with 72 percent) and also more likely to have at least one parent who worked full time (40.6 percent compared with 29.2 percent).

In addition to an increase in the proportion of working and two-parent families on Medicaid, Medicaid enrollment also increased more for other groups of children less likely to receive Medicaid in the past. Children whose parents lack a high school diploma are most likely to receive Medicaid, since lower education and lower income are related. However, enrollment increased for some children less likely to be on Medicaid—those whose most educated parent had a high school diploma up through a bachelor’s degree. Meanwhile, employment-based insurance decreased for children whose parents’ education ranged from less than high school to some college. (See app. II.) Also, although a higher proportion of African American children receive Medicaid coverage than children in other racial/ethnic groups, Medicaid coverage expanded more for white non-Hispanic and Hispanic children than for African American children.

In 1989, a child in the South was less likely to receive Medicaid than a child in any other region, even though the South had the highest percentage of poor, uninsured children (47 percent). With the Medicaid expansion, enrollment increased most in the South, so by 1993 the percentage of poor, uninsured children in the South had declined. Despite this decline, the South remains the region with the highest percentage of uninsured children and the largest number of poor, uninsured children.
(For more detail on these changes, see app. II.) Despite the Medicaid expansion, children of working parents with lower than average income still predominate among uninsured children. Living with a full-time working parent was even less of a guarantee that children would have health insurance coverage in 1993 than it was in 1989.

Most uninsured children live with a full-time working parent, generally in a two-parent family. They differ from most children, and especially from insured children, because many more of them are poor or near poor. In both 1989 and 1993, 89 percent of uninsured children had at least one working parent. The expansion in Medicaid coverage for children in working families did not decrease the percentage of uninsured children with a working parent. In some respects, the status of children in working families worsened, because the percentage of uninsured children with a full-time working parent grew between 1989 and 1993—from 57.2 percent (5 million) to 61.4 percent (5.7 million). (See fig. 5.)

Unlike children on Medicaid, uninsured children generally resemble most U.S. children because they live in two-parent families, generally with a working parent. (See fig. 6.) In comparison with all U.S. children, uninsured children in two-parent families are less likely to live with a parent who worked full time the entire previous year. Uninsured children are also slightly more likely to live in one-parent families than are U.S. children in general.

The families of uninsured children have less income than families of insured children. In 1993, 24.7 percent of all U.S. children lived in poor families. A larger proportion of uninsured children—36.7 percent—lived in poor families, while only 6 percent of children with employment-based insurance did (see fig. 7). About 57 percent of uninsured children have family income at or below 150 percent of the federal poverty level, compared with 14 percent of children with employment-based insurance.
In previous work, we found that poor workers may not have employment-based insurance for several reasons:

Health insurance is a more substantial share of total employee compensation for low-wage workers than for higher wage workers, so firms with predominantly low-wage workers are less likely to offer health insurance—or, if they do, may not offer dependent coverage, since covering dependents is more expensive than covering the worker alone.

Small firms, which employ a large share of low-wage workers, are less likely to offer health insurance. Small firms pay higher health insurance premiums because insurers incur higher administrative costs to serve small firms.

Even when coverage is offered, health insurance cost sharing represents more of the household budget of poor and near-poor families, so families may decide they cannot afford health insurance even if it is available.

Although many uninsured children could be covered by Medicaid, they are not. At least one-quarter of uninsured children in 1993—2.3 million—had family incomes that should have made them eligible for Medicaid. That year, at least 13.8 million children met the federal age and poverty income eligibility requirements for the program. Of these children, 2.4 million (17.3 percent) had employment-based insurance. Almost 8.4 million (60.9 percent) were on Medicaid, and about 675,000 (5 percent) either had individually purchased private insurance or Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) insurance. Another 16.9 percent—2.3 million children—were uninsured. More than half of these uninsured Medicaid-eligible children—over 1.4 million—were under age 6. Compared with children on Medicaid, these children are more likely to live in working, two-parent families. In 1993, 79.5 percent had at least one working parent, and 42.5 percent had at least one parent working full time.
A total of 1.3 million (57.6 percent) of the Medicaid-eligible uninsured children are in two-parent families; 1.2 million of these children have a working parent. The South has more uninsured Medicaid-eligible children than any other region—1.4 million. In comparison to all uninsured children, uninsured Medicaid-eligible children are more likely to be African American or Hispanic. Several possible reasons may explain why these families might not have enrolled their children in Medicaid. Low-income families may not know that their children could be eligible for Medicaid even if a parent works full time or if the family has two parents. In a study that interviewed AFDC recipients and former recipients who had begun working but were still receiving Medicaid (so-called Transitional Medicaid) in Charlotte, North Carolina, and Nashville, Tennessee, researchers found that 41 percent of AFDC recipients and 23 percent of former recipients did not understand that a parent could work full time and receive Medicaid for his or her children. Sixty-two percent of the AFDC recipients and 37 percent of the Transitional Medicaid recipients did not know that children could be eligible for Medicaid if they lived in an intact, two-parent family. Another reason so many uninsured children are not on Medicaid may be that getting enrolled in Medicaid is difficult for low-income families. Many people who are potentially eligible for Medicaid never complete the application process, and about half the denials are for procedural reasons—that is, applicants did not or could not provide the basic documentation needed to verify their eligibility or did not appear for all the eligibility interviews. Finally, some families may not seek Medicaid until they face a medical crisis because they are not used to regular or preventive medical care. 
In addition, medical and social service providers report that some families do not want to enroll in Medicaid because they consider it a welfare program and find it stigmatizing.

Expanding children’s Medicaid eligibility has significantly increased the number of children with Medicaid as their health insurance. It has also helped cushion the effect of declining employment-based health insurance coverage for children. Because of expanded eligibility, the proportion of children on Medicaid in working and in two-parent families has grown.

The Congress is currently considering legislation to reform AFDC to encourage low-income mothers to work. However, work for many lower income families does not include the benefit of health insurance that it more often does for higher income families. Clearly, having a full-time working parent and being in a two-parent family does not ensure that a child will have health insurance. Although Medicaid has begun to help close that gap for some families, many more uninsured children are eligible for Medicaid than have been enrolled.

Changes to the Medicaid program that remove guaranteed eligibility and change the financing and responsibilities of the federal and state governments may strongly affect health insurance coverage for children in the future. Children account for only a small portion of Medicaid costs. Because they represent almost half the participants, however, any changes to Medicaid disproportionately affect children. Changes to Medicaid that result in reducing the number of children covered, without any accompanying changes in the health insurance marketplace to encourage employers to provide dependent health insurance coverage, to encourage families to purchase insurance, or to provide other coverage options for children, could lead to a significantly increased number of uninsured children in the future.

We did not seek written agency comments because this report does not focus on agency activities.
We discussed a draft of this report with responsible Department of Health and Human Services officials in the Health Care Financing Administration and included their comments where appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested parties and make copies available to others on request. Please contact me at (202) 512-7125 if you or your staff have any further questions. This report was prepared by Rose Marie Martinez, Sheila Avruch, Paula Bonin, and Frank Ullman.

The Congress passed a series of laws beginning in 1986 that substantially expanded Medicaid eligibility for pregnant women and children. Some of these laws required eligibility expansions, and others allowed states options to expand eligibility. (See table I.1.)

The Omnibus Budget Reconciliation Act of 1986 (OBRA-86)
OBRA-86 (P.L. 99-509) gave states the option to expand Medicaid income eligibility thresholds above AFDC levels up to the federal poverty level for pregnant women and infants, effective April 1, 1987. It also gave states the option of phasing in coverage for poor children up to age 5, effective October 1, 1990.

The Omnibus Budget Reconciliation Act of 1987 (OBRA-87)
OBRA-87 (P.L. 100-203) allowed states to raise Medicaid income thresholds for pregnant women and infants as high as 185 percent of the federal poverty level, effective July 1, 1988. It also amended the statute to give states the option of phasing in coverage of poor children up to age 8, effective October 1, 1988.

The Medicare Catastrophic Care Amendments of 1988 (MCCA)
MCCA (P.L. 100-360) mandated minimum coverage of pregnant women and infants at the federal poverty level, with a 2-year phase-in period, effective for calendar quarters beginning on or after July 1, 1989.
Affected states were to raise income limits to 75 percent of poverty by July 1, 1989, and to the poverty level by July 1, 1990. MCCA also added Section 1902(r)(2) to the Social Security Act, which allows states to use more liberal criteria for Medicaid than are used for the AFDC program to determine Medicaid financial eligibility, effective July 1, 1988. States can disregard specific amounts of income and other resources and allow certain categories of eligible populations to qualify for Medicaid.

The Omnibus Budget Reconciliation Act of 1989 (OBRA-89)
OBRA-89 (P.L. 101-239) superseded MCCA’s mandate schedule by requiring states to cover, at a minimum, pregnant women and children up to age 6 at 133 percent of the federal poverty level, effective for calendar quarters beginning on or after April 1, 1990.

The Omnibus Budget Reconciliation Act of 1990 (OBRA-90)
OBRA-90 (P.L. 101-508) required states to begin (effective on or after July 1, 1991) to phase in coverage of children born after September 30, 1983, until all children living below poverty up to age 19 are covered; the upper age limit will be reached by October 2002.

Many states have taken advantage of the options to expand Medicaid eligibility for infants and children beyond federally required minimum eligibility levels—either by increasing the ages of children covered more quickly than the phase-in requires, by increasing eligibility income levels, or both. As of April 1995, 33 states and the District of Columbia had increased coverage for infants beyond federal requirements, generally expanding coverage up to 185 percent of the federal poverty level. Eight states expanded coverage for children aged 1 through 5, and 20 states expanded coverage for children aged 6 and older beyond federal requirements. In all, 37 states and the District of Columbia have expanded eligibility for either infants or children or both. (See table I.2.)
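The federal minimum standards set by OBRA-89 and OBRA-90 reduce to a simple age-and-income rule. The sketch below is illustrative only: it ignores state expansions, resource tests, and the Section 1902(r)(2) disregards described above, and the function name is hypothetical.

```python
from datetime import date

def meets_federal_minimum(birth_date: date, family_income: float,
                          poverty_level: float, as_of: date) -> bool:
    """Sketch of the federal minimum Medicaid eligibility floor for children.

    OBRA-89: children under age 6 with family income at or below
    133 percent of the federal poverty level.
    OBRA-90: children born after September 30, 1983, and under age 19,
    with family income at or below the federal poverty level (phased in).
    """
    age = as_of.year - birth_date.year - (
        (as_of.month, as_of.day) < (birth_date.month, birth_date.day))
    if age < 6 and family_income <= 1.33 * poverty_level:
        return True  # OBRA-89 mandate
    if birth_date > date(1983, 9, 30) and age < 19 and family_income <= poverty_level:
        return True  # OBRA-90 phase-in
    return False
```

For example, a 3-year-old in a family at 130 percent of the poverty level meets the OBRA-89 floor, while a 13-year-old born before the OBRA-90 cutoff date does not qualify under either mandate even with income below poverty.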
[Table I.2 (continued): eligibility thresholds as a percent of the federal poverty level for infants, children aged 1 through 5, and children aged 6 and older, including state age limits such as under 15 (born after 6/30/79) and under 12 (born after 6/30/83).]

In addition, several states have recently received special waivers allowing them to undertake statewide Medicaid demonstration projects, several of which extend health insurance coverage to portions of the uninsured, including children. Authorized by section 1115(a) of the Social Security Act (42 U.S.C. 1315), these waivers typically enable states to place all or some of their Medicaid population in managed care arrangements. The waivers commonly require higher income families to pay premiums or copayments, often on a sliding scale. In addition to the states with approved waivers, other states have waivers pending. Since 1991 and as of June 15, 1995, Delaware, Florida, Hawaii, Kentucky, Massachusetts, Minnesota, Ohio, Oregon, Rhode Island, and Tennessee have had their section 1115 demonstration waivers approved. To date, only Oregon, Hawaii, Rhode Island, and Tennessee have implemented their 1115 demonstration waiver programs. Waiver applications for seven other states are pending: Illinois, Missouri, Nevada, New Hampshire, New York, Oklahoma, and Vermont.

The states that operate 1115 waiver programs have generally expanded eligibility: Hawaii expanded Medicaid eligibility to all persons with income up to 300 percent of the federal poverty level, with cost sharing for most residents with incomes above the federal poverty level. Oregon expanded Medicaid eligibility to all persons with income up to the federal poverty level while limiting health coverage to a ranked list of services. Rhode Island expanded coverage to pregnant women and children up to age 6 with family incomes at or below 250 percent of the federal poverty level.
Tennessee expanded coverage to uninsured people without regard to income level, but cost sharing is required for people who are not Medicaid eligible or have family income above the federal poverty level. To manage the program within its planned enrollment levels, Tennessee is now only enrolling people who are Medicaid eligible or considered uninsurable. Several states have developed other types of programs to insure children not eligible for Medicaid. Seven states have statewide programs using state and other funds to expand coverage for children beyond Medicaid eligibility levels. Some of these programs provide only limited benefits compared with the Medicaid program. For example, they may not cover inpatient care. We will issue a report on some of these programs and other nonstatewide programs later this year. California covers children under age 2 with family income up to 250 percent of the federal poverty level. Maryland covers children under 11 who have family income up to 185 percent of the federal poverty level with a limited benefit package. Massachusetts has an insurance buy-in program with fees based on a sliding scale for children under 13 with no family income limit. Minnesota has an insurance buy-in program with fees based on a sliding scale for children and adults with family income up to 275 percent of the federal poverty limit. New Jersey covers children up to age 1 with family income up to 300 percent of the federal poverty limit. New York covers children under 15 (born on or after June 1, 1980) with a limited benefit package. The program is open to all income levels, but only children with family income below 160 percent of the federal poverty limit are fully subsidized. Children with family income between 160 percent and 222 percent of the federal poverty level are partially subsidized. 
Pennsylvania covers children under 14 with family income up to 185 percent of the federal poverty limit for fully subsidized insurance; families with income between 185 and 235 percent of the federal poverty level can buy partially subsidized insurance for their children; in some parts of the state, these children get their partial premiums paid by their insurers. The Medicaid expansion has increased Medicaid coverage more for some groups of children—those whose parents had more than minimal education, whites and Hispanics, and children in the South. Although the Medicaid expansion had a greater impact on the South, children living in the South are still most likely to be uninsured in 1993. Medicaid coverage expanded for children whose parents’ education ranged from less than high school to college degrees. (See fig. II.1.) The largest percentage increase in Medicaid coverage—92 percent—was among children whose most highly educated parent had some college education but not a 4-year college degree. Children who had a parent with a graduate education were the only group with little change in percentage covered by Medicaid. The Medicaid expansion for children of all but the most educated parents coincided with a decline in employment-based coverage for most of the same groups. Since 1989, the percentage of children with employment-based insurance declined for children whose parents’ education ranged from less than high school to some college. (See table II.1.) The decline was greatest for children of parents with less than a high school diploma. Children with a parent who had a bachelor’s degree or more education had no decrease in employment-based coverage. Medicaid continued to insure higher proportions of children with the least educated parents. (See fig. II.2 and table II.1.) Higher education is strongly correlated with higher income, and Medicaid predominantly serves poor children. Children with less educated parents are also more likely to be uninsured. 
Almost 80 percent of children whose most educated parent lacks a high school diploma were either uninsured or on Medicaid in 1993.

The Medicaid expansion increased the number of children on Medicaid more for whites and Hispanics between 1989 and 1993 than for African Americans. (See fig. II.3.) The number of white children on Medicaid increased 75 percent—from 3.1 million to 5.5 million—and the number of Hispanic children increased 79 percent—from 1.8 million to 3.2 million. In contrast, the number of African American children on Medicaid increased 30 percent—from 3.4 million to 4.4 million. In 1993, 41 percent of African American children, 35 percent of Hispanic children, and 12 percent of white children were on Medicaid. White children are less likely to be on Medicaid, but, of all children on Medicaid, they represent the largest segment (40.2 percent). Nevertheless, children of racial and ethnic minorities are still more likely to be uninsured. (See fig. II.4.) While only 10.5 percent of white children were uninsured in 1993, 25.6 percent of Hispanic children and 15.2 percent of African American children were uninsured. Minority children have higher rates of being uninsured, but white children make up about half (51.6 percent) of all uninsured children.

Between 1989 and 1993, the number of children on Medicaid increased in all regions, but the greatest increase occurred in the South. (See fig. II.5.) Compared with the Northeast and Midwest, the South had higher percentages of uninsured children in poverty in both 1989 and 1993—38.7 percent in 1993. Despite this, the South had the lowest percentage of its children on Medicaid in 1989—12.7 percent, compared with 15.3 percent in the West. Southern states historically have had stricter AFDC eligibility requirements relative to the federal poverty level than other regions.
Thus, Southern poor children were less likely to be on AFDC and covered by Medicaid through AFDC. When Medicaid coverage became mandated by age and poverty, the greatest number of children benefiting were in the South. Medicaid coverage increased to 20.5 percent of southern children in 1993. Not surprisingly, the four jurisdictions with the highest Medicaid coverage of children are all in the South—the District of Columbia (45.4 percent), Louisiana (29.9), Mississippi (27.9), and Tennessee (26.8); 8 of the top 15 states for Medicaid coverage of children are southern states. (See table II.2 for percentages of Medicaid, uninsured, and employment-based insured children by state.)

Overall, the percentage of children on Medicaid increased for all regions, and the disparities between regions decreased between 1989 and 1993. Nevertheless, despite the Medicaid expansion, more uninsured children live in the South than in any other region of the country. The South has 43 percent of uninsured children—almost 4 million children. Businesses in the South are less likely to offer health insurance than businesses in other regions. Regional differences in health insurance coverage among the employed may also reflect the greater degree of industrialization and unionization in other parts of the country and the higher incidence of small and service-sector businesses in the South.

[Table II.2: Children Uninsured, on Medicaid, or With Employment-Based Insurance, by State, 1993. For each estimate, the table reports a 95-percent sampling error (S.E.).]

S.E. represents sampling error. Each reported percent and number estimate from the Current Population Survey has an associated sampling error, the size of which reflects the precision of the estimate.
Sampling errors for percentage estimates were calculated at the 95-percent confidence level, which means that the chances are about 19 out of 20 that the actual percentage being estimated falls within the range defined by our estimate, plus or minus the sampling error. For example, we estimate that 13.5 percent of U.S. children are uninsured; a 95-percent chance exists that the actual percentage is between 12.9 percent and 14.1 percent. To examine the impact of the Medicaid expansion on children, we analyzed the Current Population Survey (CPS). The method we used to define insurance status and to match children to parents resulted in a conservative estimate of the number of children uninsured and on Medicaid. In addition, two other aspects of our analysis affected the results in different ways. First, we counted parental work effort on the basis of whichever parent had the highest level of work (such as working full time as opposed to part time), and we counted parental education on the basis of whichever parent had attained the highest educational level. Second, we used recently released sample weights for the March 1990 CPS, which makes the data more equivalent to the March 1994 CPS, although it would differ slightly from previously published analyses of the March 1990 CPS. The CPS is the source of official government statistics on employment and unemployment. Although the main purpose of the survey is to collect information on employment, an important secondary purpose is to collect information on the demographic status of the population, such as age, sex, race, marital status, educational attainment, and family structure. The CPS survey conducted every March also collects additional data on work experience, income, noncash benefits, and health insurance coverage of each household member at any time during the previous year. The CPS sample is based on the civilian, noninstitutionalized population of the United States. 
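The sampling errors quoted above are the half-widths of 95-percent confidence intervals, so the interval is simply the estimate plus or minus the sampling error. A minimal sketch using the figures from the text (the function name is illustrative):

```python
def confidence_interval(estimate, sampling_error):
    """95-percent confidence interval: estimate plus or minus the
    sampling error, rounded to one decimal place as in the report."""
    return (round(estimate - sampling_error, 1),
            round(estimate + sampling_error, 1))

# 13.5 percent of children uninsured, with a 0.6-point sampling error
low, high = confidence_interval(13.5, 0.6)
```

Applied to the estimate in the text, this reproduces the reported range of 12.9 to 14.1 percent.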
About 57,000 households with approximately 112,000 persons 15 years old and older and approximately 33,000 children aged 0 to 14 are included. The survey also includes Armed Forces members living in households with civilians either on or off base. The households sampled by the CPS are scientifically selected on the basis of area of residence to represent the United States as a whole, individual states, and other specified areas.

We defined insurance status using a hierarchy. If a child had multiple coverage during the year, we counted that child under only one type of coverage. (See table III.1.) We counted employment-based insurance, which is the most common type for children to have, as the primary insurance. If a child had both employment-based insurance and Medicaid, that child was counted as having employment-based insurance. Since most Medicaid children with multiple coverage had Medicaid and employment-based coverage, our count of Medicaid children better represents children who depended entirely on Medicaid for any insurance coverage. The categories were defined as follows:

Uninsured: Did not have any health insurance during the entire year.
Employment-based: Had health insurance purchased through a parent’s employer or union for at least part of the year.
Medicaid/Medicare: Did not have employment-based health insurance at all, but had Medicaid or Medicare coverage for at least part of the year.
CHAMPUS: Did not have employment-based insurance or Medicaid or Medicare coverage at all, but had CHAMPUS coverage for at least part of the year.
Private/individually purchased: Did not have employment-based insurance, Medicaid or Medicare coverage, or CHAMPUS at all, but had private/individually purchased health insurance coverage for at least part of the year.

The Census has published a table with information from the CPS that reports multiple coverage. It has different numbers and percentages for insurance status of children with Medicaid, CHAMPUS, and private/individually purchased health insurance than we reported because it reports multiple coverage on the unmatched data set.
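The coverage hierarchy just described can be sketched as a classification function. This is a paraphrase for illustration, not the actual analysis code; the boolean inputs are hypothetical stand-ins for the CPS coverage indicators.

```python
def classify_coverage(employment_based: bool, medicaid_or_medicare: bool,
                      champus: bool, private_individual: bool) -> str:
    """Assign a child with possibly multiple coverage to a single
    category, in the priority order used in the analysis."""
    if employment_based:
        return "employment-based"
    if medicaid_or_medicare:
        return "medicaid"
    if champus:
        return "champus"
    if private_individual:
        return "private/individually purchased"
    return "uninsured"
```

A child with both employment-based insurance and Medicaid, for example, is counted under employment-based coverage, as described in the text.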
(See table III.2.) We matched children with parents to analyze family characteristics. The Census considers a family to be two or more persons residing together and related by birth, marriage, or adoption. The Census develops family records for the householder (a person in whose name the housing unit is owned, leased, or rented, or if no such person, an adult in the household), other relatives of the householder with their own subfamilies, and unrelated subfamilies. If the house is owned, leased, or rented jointly by a married couple, the householder may be either the husband or wife. We paired children to an adult (aged 18 through 64) in their immediate family whom we call a parent. After this pairing, we matched the adult family member to a spouse, if any, to get “parents” in our file. We were not able to match all children with parents. Because data in this report are based only on the matched files, the number of children reported in every insurance category is conservative. The estimates of Medicaid and uninsured children are more conservative than the estimate of children with employment-based insurance because we were able to pair fewer Medicaid and uninsured children with an adult than children with employment-based insurance. (See table III.2.) The way we matched parents with children to analyze the association of work effort, education, and insurance for children helped develop a more accurate picture of uninsured and Medicaid children with working and more highly educated parents. We analyzed parent work status on the basis of information about the parent who worked the most. (See table III.3.) We also reported educational status on the basis of whichever parent had the highest educational status—graduate education, bachelor’s degree or 4 years of college, some college, high school diploma, or less than a high school diploma. This allowed us to more accurately portray the work status or education of parents in two-parent families. 
Full time/full year: Either parent worked full time/full year.
Full time/part year: No parent worked full time/full year, but at least one worked full time part of the year.
Part time/full year: No parent worked full time, but at least one parent worked part time for the entire year.
Part time/part year: No parent worked either full time or full year, but at least one parent worked part time for part of the year.
Did not work: No parent worked at all during the entire year.

Conducting the analysis in this way allowed us to search for the parent more likely to have insurance—either because that parent worked more or was more educated. We found some interesting results from this analysis. For example, we found more uninsured and Medicaid children living with at least one parent who worked full time than if we had not searched for the employment status of both parents in two-parent families. We also found fewer uninsured and Medicaid children who lived with a parent who had less than a high school diploma.

The CPS is based on a sample of the U.S. population, and weights are used to compute the estimates for the total population. The basic weight represents the probability that individuals will be included in the survey. The weights are computed on the basis of information from the decennial censuses. We used weights based on information from the 1990 census for both 1989 and 1993 to make them more equivalent. Information from the 1990 census was not available when the March 1990 CPS public use survey tapes were first released, so those tapes were originally released with weights established through information from the 1980 decennial census. Since then, the Census Bureau has released adjusted weights for the March 1990 CPS that can be used in analyzing that CPS file. We used weights adjusted to the 1990 census, provided by the Census Bureau, for both the 1989 and 1993 data to make the data more comparable and to make the 1989 data more accurate. We also did a sample run on the 1989 data using the earlier census weights to compare the differences.
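The convention of crediting a child with whichever parent worked the most can be sketched by ranking the work-status categories and taking the maximum across parents. The string encoding below is an assumption for illustration, not the CPS variable coding.

```python
# Work-status categories, ordered from least to most work effort
# (an assumed encoding; the CPS uses its own variable codes).
WORK_LEVELS = [
    "did not work",
    "part time/part year",
    "part time/full year",
    "full time/part year",
    "full time/full year",
]

def family_work_status(parent_statuses):
    """Return the highest work status among a child's parents,
    mirroring the 'whichever parent worked the most' rule."""
    return max(parent_statuses, key=WORK_LEVELS.index)
```

In a two-parent family where one parent worked part time all year and the other worked full time for part of the year, the child is credited with the full-time/part-year status.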
Using the more recent weights yields slightly different results, such as a small increase in the number and percentage of children uninsured or on Medicaid.

Related GAO Products
Medicaid: Spending Pressures Drive States Toward Program Reinvention (GAO/HEHS-95-122, Apr. 4, 1995).
Medicaid: Restructuring Approaches Leave Many Questions (GAO/HEHS-95-103, Apr. 4, 1995).
Medicaid: Experience With State Waivers to Promote Cost Control and Access to Care (GAO/HEHS-95-115, Mar. 23, 1995).
Uninsured and Children on Medicaid (GAO/HEHS-95-83R, Feb. 14, 1995).
Health Care Reform: Potential Difficulties in Determining Eligibility for Low-Income People (GAO/HEHS-94-176, July 11, 1994).
Medicaid Prenatal Care: States Improve Access and Enhance Services, but Face New Challenges (GAO/HEHS-94-152BR, May 10, 1994).
Employer-Based Health Insurance: High Costs, Wide Variation Threaten System (GAO/HRD-92-125, Sept. 22, 1992).
Access to Health Insurance: State Efforts to Assist Small Businesses (GAO/HRD-92-90, May 14, 1992).
Mother-Only Families: Low Earnings Will Keep Many Children in Poverty (GAO/HRD-91-62, Apr. 2, 1991).
Health Insurance Coverage: A Profile of the Uninsured in Selected States (GAO/HRD-91-31FS, Feb. 8, 1991).
Health Insurance: An Overview of the Working Uninsured (GAO/HRD-89-45, Feb. 24, 1989).
To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the status of health insurance for children, focusing on: (1) the impact of the Medicaid expansion on children's health insurance coverage since 1989; (2) changes in the demographic profile of children enrolled in the Medicaid program and uninsured children since the Medicaid expansion; and (3) the number of uninsured children who might be eligible for Medicaid. GAO found that: (1) policy changes helped increase the number of children enrolled in Medicaid by 4.8 million between 1989 and 1993, but the overall number of uninsured children did not decline because employment-based coverage for adults and children declined during the same period; (2) children were not as affected by the loss of employment-based insurance as adults because of expanded Medicaid coverage; (3) the percentage of poor children who were uninsured declined from 25 percent in 1989 to 20 percent in 1993, while the percentage of near-poor children who were uninsured increased during that time; (4) the Medicaid expansion increased the enrollment of children less likely to be on Medicaid and by 1993, more than half of Medicaid children had a working parent and almost half were not receiving Aid to Families with Dependent Children benefits; (5) the greatest increase in coverage was among children with at least one full-time working parent; (6) the South region had the greatest increase in the number of children enrolled in Medicaid, although it still has the greatest number of uninsured children; and (7) at least 2.3 million uninsured children were eligible but not enrolled in Medicaid because their parents were unaware of their eligibility or had difficulty in applying for Medicaid coverage.
HUD operates a variety of project-based rental assistance programs through which it pays subsidies, or housing assistance payments, to private owners of multifamily housing that help make this housing affordable for lower income households. In some cases, HUD subsidized the construction of the housing (or substantial rehabilitation of existing properties) through means such as discounted mortgages insured by HUD’s Federal Housing Administration; in others, such as the Section 202 Supportive Housing for the Elderly Program, HUD provided grants to construct the housing. HUD entered into long-term contracts, often 20 to 40 years, committing it and the property owners to providing long-term affordable housing. Under these contracts, tenants generally pay 30 percent of their adjusted income toward their rents, with the HUD subsidy equal to the difference between what the tenants pay and the contract rents that HUD and the owners negotiate in advance. In the mid- to late-1990s, Congress and HUD made several important changes to the duration of housing assistance contract terms (and the budgeting for them), the contract rents owners would receive relative to local market conditions, and the manner in which HUD administers its ongoing project-based housing assistance contracts. Specifically: First, because of budgetary constraints, HUD shortened the terms of subsequent renewals after the initial 20- to 40-year terms began expiring in the mid-1990s, reducing contract terms to 1 or 5 years, with the funding renewed annually subject to appropriations. Second, in 1997, Congress passed the Multifamily Assisted Housing Reform and Affordability Act (MAHRA), as amended, in an effort, among other things, to ensure that the rents HUD subsidizes remain comparable with market rents. Over the course of the initial longer-term agreements with owners, contract rents in some cases came to substantially exceed local market rents.
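The subsidy arithmetic described above—tenants generally pay 30 percent of adjusted income, and HUD pays the gap up to the negotiated contract rent—can be sketched in a few lines. The `monthly_subsidy` function and the rent and income figures below are hypothetical illustrations, not HUD's actual calculation rules, which involve additional adjustments.

```python
def monthly_subsidy(contract_rent, adjusted_monthly_income):
    """HUD's housing assistance payment is the gap between the negotiated
    contract rent and the tenant's contribution, which is generally
    30 percent of adjusted income."""
    tenant_payment = 0.30 * adjusted_monthly_income
    # The subsidy cannot be negative; if the tenant's contribution meets or
    # exceeds the contract rent, no assistance payment is due.
    return max(contract_rent - tenant_payment, 0.0)

# Hypothetical unit: $800 contract rent, tenant with $1,000 adjusted monthly
# income. The tenant pays $300, and HUD's subsidy covers the remaining $500.
print(monthly_subsidy(800.0, 1000.0))  # 500.0
```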
MAHRA required an assessment of each project when it neared the end of its original contract term to determine whether the contract rents were comparable to current market rents and whether the project had sufficient cash flow to meet its debt as well as daily and long-term operating expenses. If the expiring contract rents were below market rates, HUD could increase the contract rents to market rates upon renewal (i.e., “mark up to market”). Conversely, HUD could decrease the contract rents upon renewal if they were higher than market rents (i.e., “mark down to market”). Finally, in 1999, because of staffing constraints (primarily in HUD’s field offices) and the workload involved in renewing the increasing numbers of rental assistance contracts reaching the end of their initial terms, HUD began an initiative to contract out the oversight and administration of most of its project-based contracts. The entities that HUD hired—typically public housing authorities or state housing finance agencies—are responsible for conducting on-site management reviews of assisted properties; adjusting contract rents; reviewing, processing, and paying monthly vouchers submitted by owners; renewing contracts with property owners; and responding to health and safety issues at the properties. These performance-based contract administrators (PBCA) now administer the majority of contracts—over 13,000 of approximately 23,000 contracts in fiscal year 2004. According to HUD officials, the department has not yet transferred all of its rental assistance contracts to the PBCAs. HUD plans to have traditional, or non-performance-based, contractors continue to administer the approximately 5,000 contracts they were administering until those contracts expire, at which time the contracts will be assigned to the PBCAs. The traditional contract administrators are often local public housing authorities handling a very limited number of contracts.
HUD itself also administers the contracts under certain programs, such as the Section 202 Supportive Housing for the Elderly Program and the Section 811 Supportive Housing for Persons with Disabilities Program. HUD announced in April 2004 that it was conducting a competitive sourcing effort to determine the most efficient and cost-effective means to administer some of these contracts. At the conclusion of this effort, HUD will seek new budget authority to pay for contract administration services. Consequently, while the PBCAs handle most of HUD’s project-based housing, three types of administrators are involved in day-to-day program oversight and administration, including tasks involved in processing monthly housing assistance payments. To receive their monthly housing assistance payments, owners must submit monthly vouchers to account for changes in occupancy and tenants’ incomes that affect the actual amount of subsidy due. However, the manner in which the owners submit these vouchers and the process by which they get paid varies depending on which of the three types of contract administrators handles their contract (see fig. 1). For HUD-administered contracts, the owner submits a monthly voucher to HUD for verification, and HUD in turn pays the owner based on the amount in the voucher. For PBCA-administered contracts, the owner submits a monthly voucher to the PBCA, which verifies the voucher and forwards it to HUD for payment. HUD then transfers the amount verified on the voucher to the PBCA, which in turn pays the owner. In contrast, for traditionally administered contracts, HUD and the contract administrator develop a yearly budget, and HUD pays the contract administrator set monthly payments. The owner submits monthly vouchers to the contract administrator for verification, and the contract administrator pays the amount approved on the voucher. 
At the end of the year, HUD and the contract administrator reconcile the payments HUD made to the contract administrator with the amounts the contract administrator paid to the owner, exchanging payment as necessary to settle any difference. HUD has an ongoing effort to improve its rental assistance programs’ business processes and make better use of information technology related to those programs. In 2004, HUD launched a Business Process Reengineering (BPR) initiative to, among other things, improve inefficient and redundant processes, as identified by HUD’s contractor for this effort, and to integrate HUD’s data systems. HUD expects its contractor to identify its recommended changes by June 2006. According to HUD officials, HUD does not currently have the funding in place to implement the BPR. Between fiscal years 1995 and 2004, HUD disbursed three-fourths of its monthly housing assistance payments by the due date, but thousands of payments each year were late, affecting many property owners. For this 10-year period, about 8 percent of all payments were delayed by 2 weeks or more, a time frame we characterize as significant. On average, about one-third of housing assistance contracts experienced at least 1 payment per year that was delayed by 2 weeks or more. Furthermore, the timeliness of housing assistance payments has varied, with a decrease in 1998, but with a gradual improvement since 2001. Timeliness also varied by type of contract administrator, with payments for HUD-administered contracts more likely to be late, based on our analysis of fiscal year 2004 payment data. Timeliness varied considerably by state as well. Overall, from fiscal years 1995 through 2004, HUD disbursed by the due date 75 percent of the 3.2 million monthly housing assistance payments on all types of contracts (see fig. 2).
However, 8 percent of payments, averaging 25,000 per year, were significantly late—that is, they were delayed by 2 weeks or more and therefore could have negative effects on owners who relied on HUD’s subsidy to pay their mortgages. During this period, 6 percent of the total payments (averaging 18,000 per year) were 4 weeks or more late, including about 10,000 payments per year that were 8 weeks or more late. HUD does not have an overall timeliness standard by which it makes payments to owners or its contract administrators, based in statute, regulation, or HUD guidance. However, HUD contractually requires the PBCAs (which administer the majority of contracts) to pay owners no later than the 1st business day of the month. HUD officials said that they also use this standard informally to determine the timeliness of payments on HUD-administered and traditionally administered contracts. Therefore, we considered payments to be timely if they were disbursed by the 1st business day of the month. Based on our discussions with project owners who reported that they relied on HUD’s assistance to pay their mortgages before they incurred late fees (generally, after the 15th day of the month), we determined that a payment delay of 2 weeks or more was significant. The timeliness of housing assistance payments over the 10-year period (fiscal years 1995 through 2004) has shown some variation (see fig. 3). The percentage of payments that were significantly late increased in 1998, which HUD and PBCA officials indicated likely had to do with HUD’s initial implementation of MAHRA and new contract renewal procedures and processing requirements for project owners. Timeliness has gradually improved since 2001, shortly after HUD first began using the PBCAs to administer contracts. 
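The timeliness buckets used in this analysis—paid by the due date, or delayed by 2, 4, or 8 or more weeks—can be expressed as a small classification sketch. The `classify_payment` function and the dates below are hypothetical illustrations of the thresholds described above.

```python
from datetime import date

def classify_payment(due_date, disbursed_date):
    """Bucket a housing assistance payment by the delay thresholds used in
    the analysis: on time (by the due date), late but under 2 weeks, or
    significantly late (2, 4, or 8 or more weeks after the due date)."""
    delay = (disbursed_date - due_date).days
    if delay <= 0:
        return "on time"
    if delay < 14:
        return "late (< 2 weeks)"
    if delay < 28:
        return "2+ weeks late"
    if delay < 56:
        return "4+ weeks late"
    return "8+ weeks late"

# Hypothetical payments due on the 1st business day of August 2004 (Aug. 2).
due = date(2004, 8, 2)
print(classify_payment(due, date(2004, 8, 2)))   # on time
print(classify_payment(due, date(2004, 8, 20)))  # 2+ weeks late
print(classify_payment(due, date(2004, 10, 4)))  # 8+ weeks late
```

A delay of 2 weeks or more lands in one of the three "significant" buckets, matching the report's characterization of payments that could trigger late fees for owners.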
The percentage of contracts experiencing significantly late payments over the course of the year showed a similar variation over the 10-year period, rising to 43 percent in fiscal year 1998 and decreasing to 30 percent in fiscal year 2004 (see fig. 4). As with the percentage of late payments, the percentage of contracts with late payments increased in fiscal year 1998 when HUD implemented requirements pursuant to MAHRA. Over the 10-year period, about one-third of approximately 26,000 contracts experienced at least 1 payment per year that was delayed by 2 weeks or more. Although HUD data showed a gradual decline in the percentage of late payments and the number of contracts affected by late payments, in the most recent 3-year period (fiscal years 2002 through 2004), the percentage of payments that were 2 weeks or more and 4 weeks or more late was nearly as high (7 percent and 5 percent) as over the 10-year period (see fig. 2). Payments on HUD-administered contracts were more likely to be delayed than those on contracts administered by the PBCAs and traditional contract administrators, based on HUD’s fiscal year 2004 payment data (see fig. 5). Payments on PBCA- and HUD-administered contracts have more elaborate monthly processing requirements than do the payments on traditionally administered contracts that HUD processes. Payments on PBCA- and HUD-administered contracts require that the monthly vouchers be reviewed and processed by a PBCA or HUD field office before a payment is approved. As previously noted, for traditionally administered contracts, HUD creates an annual budget, amortizes the budget over 12 payments for the year, disburses the set monthly payments, and makes any necessary adjustments through a year-end settlement based on voucher information submitted to the traditional contract administrators. The percentage of chronically late payments also varied by contract administrator. 
In fiscal year 2004, 9 percent of HUD-administered contracts experienced chronic late payments, while 3 percent of PBCA-administered contracts and 1 percent of the traditionally administered contracts had chronic late payments. We analyzed HUD’s payment data by state and found that timeliness varied considerably for both PBCA- and HUD-administered contracts, although the reasons for this variation are not clear. The percentage of payments on PBCA-administered contracts that were 2 weeks or more late in fiscal year 2004 ranged from 1 percent in North Dakota to 13 percent in the District of Columbia (see fig. 6). With some exceptions, a single PBCA administers all of the PBCA-administered contracts for a single state. However, late payments may be attributable to a number of factors, and the HUD payment data do not provide an explanation for the variations among the states. For HUD-administered contracts, 19 states had 15 percent or more of their payments that were 2 weeks or more late in 2004. The percentage of payments 2 weeks or more late ranged from 2 percent in North Dakota to 35 percent in Wyoming. Again, the HUD payment data do not provide an explanation for the state variation in payment delays. The percentage of contracts that experienced chronic late payments also varied by state. In fiscal years 2002 through 2004, 17 percent of the contracts in Delaware and 13 percent in Connecticut and the District of Columbia had 6 or more payments per year that were 2 weeks or more late (see fig. 7). In contrast, less than 3 percent of contracts in most states had chronic late payments. The contract renewal process, HUD’s internal processes for funding and monitoring contracts, and owners’ erroneous or untimely voucher submissions affected payment timeliness. For instance, owners were more likely to receive late monthly payments when their contracts with HUD had not been renewed by their expiration dates. 
Moreover, HUD’s process of estimating how much funding it needs to obligate to contracts; HUD’s inconsistent approach to monitoring contracts to determine when additional funding should be obligated; and lack of staff access to, and training on, HUD payment databases also may have affected the timeliness of housing assistance payments. Additionally, HUD’s interpretation of legislative restrictions on its ability to use recaptured funds may have exacerbated payment delays. Finally, owners’ erroneous or untimely submissions of monthly vouchers could have caused some of the untimely payments from HUD. Late monthly voucher payments were more likely to occur when a contract had not been renewed by its expiration date, according to many of the HUD officials, contract administrators, and property owners with whom we spoke. HUD’s accounting systems require that an active contract be in place with funding obligated to it before it can release payments for that contract. Therefore, an owner cannot receive a monthly voucher payment on a contract that HUD has not renewed. Our analysis of HUD data from fiscal years 2002 through 2004 shows that, among late payments on PBCA- and HUD-administered contracts for which HUD recorded the reason for the delay, 60 percent of the payments that were 2 weeks or more late were associated with pending contract renewals (see fig. 8). A contract renewal may be “pending” when one or more parties involved in the process—HUD, the PBCA, or the owner—have not completed the necessary steps to finalize the renewal. Based on our interviews with HUD officials, contract administrators, and owners, pending contract renewals may result from owners’ failing to submit their renewal packages on time. Often the delay occurs when owners must submit a study of market rents, completed by a certified appraiser, in order to determine the market rent levels.
However, late payments associated with contract renewals may also occur because HUD has not completed its required processing. For example, according to a HUD official, at one field office we visited, contract renewals were delayed because HUD field staff were behind in updating necessary information, such as the new rent schedules associated with the renewals and the contract execution dates in HUD payment systems. HUD’s contract renewal process itself also may take longer than expected, contributing to late housing assistance payments, because the process is largely manual and paper driven and requires multiple staff in the PBCAs and HUD to complete (see fig. 9). Upon receipt of renewal packages from owners, the PBCAs then prepare and forward signed contracts (in hard copy) to HUD field offices, which execute the contracts; in turn, the field offices send hard copies of contracts to a HUD accounting center, which activates contract funding. In order to allow sufficient time to complete the necessary processing, HUD’s policy currently requires owners to submit a renewal package to their PBCAs 120 days before a contract expires, and gives the PBCAs 30 days to forward the renewal package to HUD for completion (leaving HUD 90 days for processing). However, some of the owners with whom we spoke told us that their contract renewals had not been completed by the contract expiration dates, even though they had submitted their renewal packages on time. While initial contract renewals (upon expiration of the owner’s initial long- term contract) often exceeded the 120-day processing time, subsequent renewals were less time-consuming and resulted in fewer delays, according to HUD officials, the PBCAs, and owners. Initial renewals could be challenging for owners because they often involved HUD’s reassessment of whether the contract rents were in line with market rents. 
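The renewal clock described above—the owner's package is due 120 days before contract expiration, the PBCA has 30 days to forward it, and HUD keeps the remaining 90 days for processing—can be sketched as simple date arithmetic. The function names and contract dates below are hypothetical.

```python
from datetime import date, timedelta

# Sketch of the renewal timeline in HUD's policy: owner submission due
# 120 days before expiration, 30 days for the PBCA to forward the package.
SUBMISSION_LEAD_DAYS = 120
PBCA_FORWARD_DAYS = 30

def renewal_deadlines(expiration):
    """Return the owner's submission deadline and the PBCA's forwarding
    deadline for a contract expiring on the given date."""
    owner_due = expiration - timedelta(days=SUBMISSION_LEAD_DAYS)
    pbca_forward_due = owner_due + timedelta(days=PBCA_FORWARD_DAYS)
    return owner_due, pbca_forward_due

def exceeded_standard(package_received, renewal_completed):
    """Did total processing exceed the 120-day standard?"""
    return (renewal_completed - package_received).days > SUBMISSION_LEAD_DAYS

# Hypothetical contract expiring December 31, 2004.
owner_due, pbca_due = renewal_deadlines(date(2004, 12, 31))
print(owner_due, pbca_due)  # the owner's and PBCA's deadlines
print(exceeded_standard(date(2004, 9, 2), date(2005, 1, 15)))  # True
```

A renewal completed only on January 15, 2005 for a package received on time (September 2, 2004) would exceed the 120-day standard, leaving a gap after the December 31 expiration during which payments could not be released.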
Additionally, the initial renewal represents the first time that owners have to provide HUD with the extensive documentation required for contract renewals in order to continue receiving housing assistance payments. Our analysis of the most recent 3 years of HUD data (fiscal years 2002 through 2004) shows that while 25 percent of initial contract renewals exceeded the 120-day processing time frame set by HUD, 17 percent of subsequent renewals exceeded that time frame, as shown in figure 10. Increased timeliness on subsequent renewals might be explained partly by owners’ gaining competency—that is, the PBCAs and owners described a “learning curve” when owners renewed their contracts for the first time. The contract renewal processing times reflected in HUD’s data do not include some interactions between the PBCAs and owners. More specifically, HUD’s data systems capture the dates on which it receives completed renewal packages from the PBCAs, but do not capture the dates for earlier steps in the process. For instance, the data systems do not capture the dates when owners initially submit renewal packages to the PBCAs and, thus, the amount of time it may take the PBCAs and owners to go “back and forth” to assemble completed packages. According to our analysis, the processing time for the contract renewals also was likely to exceed HUD’s 120-day standard when owners chose or were subject to one of two options at their initial renewals. First, for properties with contract rents lower than comparable market rents, owners had the option to request contract renewals under the “mark-up-to-market” option, which required (1) owners to obtain an appraiser’s determination of comparable market rents and (2) HUD to reassess the contract rents in order to raise them to applicable market level rents. For fiscal years 2002 through 2004, 60 percent of the 471 contract renewals using the mark-up-to-market option took more than the expected 120 days.
Second, for expiring contracts with rents higher than comparable market rents, contract administrators referred the owners to HUD’s Office of Multifamily Housing Assistance Restructuring (OMHAR), a process that can lead to rents in renewed contracts that are lower than those in the expiring contracts (the “mark-to-market” option). For fiscal years 2002 through 2004, 56 percent of the 1,276 contract renewals referred to OMHAR to reduce rents—and, in many cases, to restructure the property owners’ debt—took more than the expected 120 days to process. Recognizing that contract renewal is lengthy and cumbersome, HUD’s goal is to automate the renewal process and reduce the 120-day time frame through a BPR effort for its rental assistance programs. HUD launched this initiative in 2004 to, among other things, develop plans to improve what it characterizes as “inefficient or redundant processes” and to integrate data systems. For example, according to a senior HUD official, the department’s goal is to automate the entire contract renewal process by 2007, eliminating the need for HUD and owners to physically sign the contracts. According to HUD officials, this effort would eventually include a more streamlined and automated contract renewal process. However, this effort is in its early stages, and is currently not focused on streamlining the contract renewal process or addressing the problem of late housing assistance payments. HUD does not have concrete plans regarding how it will accomplish these goals, nor does it have funding in place to implement any of the recommendations the reengineering effort might develop. The methods HUD uses to estimate the amount of funds needed for the term of each of its project-based assistance contracts and the way it monitors the funding levels on those contracts may also affect the timeliness of housing assistance payments. 
When HUD renews a contract, and when it obligates additional funding for each year of contracts with 5-year terms, it obligates an estimate of the actual subsidy payments to which the owner will be entitled over the course of a year. However, those estimates are often too low, according to HUD headquarters and field office officials and contract administrators. For example, an underestimate of rent increases or utility costs or a change in household demographics or incomes at a property will affect the rate at which a contract exhausts its funds, potentially causing the contract to need additional funds obligated to it before the end of the year. If HUD underestimates the subsidy payments, the department needs to allocate more funds to the contract and adjust its obligation upwards to make all of the monthly payments. Throughout the year, HUD headquarters uses a “burn-rate calculation” to monitor the rate at which a contract exhausts or “burns” the obligated funds and to identify those contracts that may have too little (or too much) funding. According to some HUD field office and PBCA officials, they also proactively monitor contract fund levels. Based on the rate at which a contract exhausts its funds, HUD obligates more funds if needed. However, based on our analysis of available HUD data and our discussions with HUD field office officials, owners, and contract administrators, payments on some contracts were still delayed because they needed to have additional funds allocated and obligated before a payment could be made. As shown in figure 8, our analysis of HUD’s payment data shows that, where the reasons for delayed payments on PBCA- and HUD-administered contracts were available, 11 percent of delays of 2 weeks or more were due to contracts needing additional funds obligated. That is, those payments were delayed because, at the time the owners’ vouchers were processed, HUD had not allocated and obligated enough funding to the contracts to cover the payments. 
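A burn-rate check of the kind described above can be sketched in a few lines: project the average monthly payout forward and flag contracts whose remaining obligated funds will not cover the rest of the term. The function names and dollar amounts below are hypothetical, not HUD's actual formula.

```python
def months_until_exhaustion(obligated, disbursed_so_far, months_elapsed):
    """Burn-rate style check: at the average monthly payout so far, how many
    more months will the remaining obligated funds cover?"""
    burn_rate = disbursed_so_far / months_elapsed   # average monthly payout
    remaining = obligated - disbursed_so_far
    return remaining / burn_rate

def needs_more_funds(obligated, disbursed_so_far, months_elapsed, months_remaining):
    """Flag a contract whose remaining funds won't cover the rest of the year,
    signaling that additional funds must be obligated to avoid a late payment."""
    return months_until_exhaustion(obligated, disbursed_so_far, months_elapsed) < months_remaining

# Hypothetical contract: $120,000 obligated for the year, $70,000 spent in
# the first 6 months, 6 months of payments still owed.
print(months_until_exhaustion(120_000, 70_000, 6))  # ~4.3 months of funds left
print(needs_more_funds(120_000, 70_000, 6, 6))      # True
```

In this hypothetical, payments ran ahead of the estimate (perhaps due to a rent increase or changed tenant incomes), so the contract would exhaust its funds about a month and a half before year end unless HUD obligated additional funds.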
One likely factor contributing to funding-related payment delays is that staff at some HUD field offices—unlike their counterparts in other field offices and staff at some of the PBCAs—did not have access to the relevant data systems or were not trained to use them to monitor funding levels. At some of the field offices we visited, officials reported that they did not have access to the HUD data systems that would allow them to adequately monitor contract funding levels. For example, one field office official told us that he needed access to one of HUD’s accounting data systems to more accurately monitor contract funding. According to this official, he requested “read-only” access to this system, which requires a security clearance, but never received information on the status of his application from HUD headquarters. HUD field offices reported, and headquarters confirmed, that some field officials have not received training to carry out some functions critical to monitoring the burn rate. One field office official reported that none of the staff in her office had received training in a payment processing database, which is critical for monitoring the status of monthly payments. A HUD headquarters official reported that changes in the agency’s workforce demographics posed challenges because not all of the field offices have staff with an optimal mix of skill and experience. According to a senior HUD official, HUD’s BPR is intended to provide a systematic, agencywide solution to the contract funding issues that field office officials have been trying to address on an ad hoc basis to prevent payment delays. If this effort successfully addresses contract funding monitoring agencywide through automation, as this official suggested, HUD may not have to rely solely on the intervention of its field officials. Prior to fiscal year 2003, HUD used funds that it had recaptured from some contracts to augment other contracts that required additional funds.
Based on HUD’s interpretation of its appropriations acts for fiscal years 2003 and 2004, the agency determined that recaptured funds were not available in those years to fund contract amendments. According to HUD officials, this made it difficult to budget for amendments in those years and could have affected the timeliness with which HUD funded some contracts and made related housing assistance payments. HUD’s fiscal year 2005 appropriation specifically authorized the use of recaptured funds for contract amendments. According to HUD headquarters officials, operating under a continuing resolution rather than an appropriation should not affect the timeliness of housing assistance payments. According to HUD budget officials, under a continuing resolution, HUD has funding available to meet its contractual obligations to pay project owners and, if need be, to renew rental assistance contracts. The PBCAs with which we met estimated that 10 to 20 percent of owners submit late vouchers each month. For example, one PBCA reported that about 20 percent of the payments it processed in 2004 were delayed due to late owner submissions. However, the PBCAs also reported that they generally could process vouchers in less than the allowable time—20 days—agreed to in their contracts with HUD and resolve any errors with owners to prevent a payment delay. According to PBCA officials, there are often several “back-and-forth” interactions with owners to resolve errors or inaccuracies. Typical owner submission errors include failing to account correctly for changes in the number of tenants or tenant income levels, or failing to provide required documentation. As we previously noted, because HUD’s data systems do not capture the back-and-forth interactions PBCA officials described to us, we could not directly measure the extent to which owners’ original voucher submissions may have been late, inaccurate, or incomplete. 
HUD officials and the PBCAs reported that owners had a learning curve when contracts were transferred to the PBCAs because the PBCAs reviewed monthly voucher submissions with greater scrutiny than HUD had in the past. The timeliness of payments may also be affected by a PBCA’s own internal policies for addressing owner errors. For example, in order to prevent payment delays, some of the PBCA officials with whom we spoke told us that they often process vouchers in advance of receiving complete information on the owners’ vouchers. In contrast, at one of the PBCAs we visited, officials told us that they will not process an owner’s voucher for payment unless it fully meets all of HUD’s requirements. HUD’s payment delays have had negative financial effects on project owners, but they are unlikely to result in owners opting out of HUD’s programs. Owners with whom we spoke reported that they have incurred late fees on their mortgages and other bills and have had difficulty operating their properties as a result of payment delays. The severity of the effects depended on the financial condition of the property owner and the extent to which the owner relied on HUD’s subsidy to make the mortgage payment and operate the property. HUD did not notify owners when payments would be late, and owners said that this lack of notice exacerbated the effect of late payments. However, delayed payments alone were unlikely to result in opt outs, although they could have been a contributing factor, according to owners as well as officials from industry groups and HUD. Finally, our analysis of HUD payment data indicated that there was little difference in payment delays between properties that have opted out of, and those that still participate in, HUD’s programs. Some owners report that they have not been able to pay their mortgages or other bills on time as a result of HUD’s payment delays. 
Three of the 16 owners with whom we spoke reported having to pay their mortgages or other bills late as a result of HUD’s payment delays. One owner reported that he was in danger of defaulting on one of his properties as a direct result of late housing assistance payments. Another owner reported having paid $4,000 in late fees to a utility company because she was unable to pay the bill on time. Another owner was unable to provide full payments to vendors, including utilities, telephone service, plumbers, landscapers, and pest control services during a 3-month delay in receiving housing assistance payments. According to this owner, her telephone service was interrupted during the delay and her relationship with some of her vendors suffered. For example, the pest control and plumbing vendors would continue to provide services only if they received cash in advance. This owner also expressed concern about how the late and partial payments to vendors would affect her credit rating. Industry groups with whom we spoke also raised concerns about their members’ inability to pay mortgages and other bills when HUD’s housing assistance payments were delayed. If owners are unable to pay their vendors or their staff, services to the property and the condition of the property could suffer. At one affordable housing project for seniors that we visited, the utility services had been interrupted because of the owner’s inability to make the payments. At the same property, the owner told us that she could not purchase cleaning supplies and had to borrow supplies from another property. One of the 16 owners with whom we spoke told us that they were getting ready to furlough staff during the time that they were not receiving payments from HUD. According to one HUD field office official, owners have complained about not being able to pay for needed repairs or garbage removal while they were waiting to receive a housing assistance payment. 
According to one industry group official, payment delays could result in the gradual decline of the condition of the properties in instances where owners were unable to pay for needed repairs. According to owners as well as industry group and HUD officials, owners who are heavily reliant on HUD’s subsidy to operate their properties are more severely affected by payment delays than other owners. Particularly, owners who own only one or a few properties and whose operations are completely or heavily reliant on HUD’s subsidies have the most difficulty weathering a delay. Two of the 16 owners with whom we spoke reported that they could not pay their bills and operate the properties during a payment delay. These owners were nonprofits, each operating a single property occupied by low-income seniors. In both cases, the amount of rent they were receiving from the residents was insufficient to pay the mortgage and other bills. Neither of these owners had additional sources of revenue. In contrast, owners with several properties and other sources of revenue were less severely affected by HUD’s payment delays. Three of the owners with whom we spoke reported that they were able to borrow funds from their other properties or find other funding sources to cover the mortgage payments and other bills. All 3 of these owners had a mix of affordable and market rate properties. According to HUD and PBCA officials, owners who receive a mix of subsidized and market rate rents from their properties would not be as severely affected by a payment delay as owners with all subsidized units. For example, representatives of 2 of the owners stated that they did not have to take any measures to address delays in housing assistance. One owner is an investment firm for a pension fund that maintains a large portfolio of mostly market rate properties. 
According to a representative of the firm, delayed housing assistance payments had not caused financial difficulties, but the delay had presented accounting difficulties for the firm. The other owner is a nonprofit with several properties. According to a representative of the owner, the rents paid by the residents of all of the properties were a larger part of the nonprofit’s revenue than the HUD subsidy, so the nonprofit was not negatively affected by an occasional delay in housing assistance payments. HUD allows owners to borrow from their reserve accounts to help mitigate the effects of delayed housing assistance payments, but some owners either do not have reserves or their reserves are not sufficient to cover the period of the delay. HUD requires HUD-insured properties and properties with HUD-held mortgages to set aside funds in a reserve account, which is designed primarily to help fund capital improvements on the properties. HUD also allows owners to withdraw funds from this account in the event of HUD’s payment delays, so that owners are able to make their mortgage payments. However, properties that are not insured by HUD and do not have a HUD-held mortgage may not have a reserve account, and, according to HUD and industry group officials, owners with small or newer properties may not have sufficient reserves to cover delays. Even if the reserves were sufficient, industry group officials have pointed out that owners might have to defer capital improvements during payment delays, and also lose interest that they would otherwise accrue in the reserve account. Some projects also have a residual receipts account from which owners may borrow. HUD requires nonprofits and limited dividend multifamily projects that are HUD-insured or have a HUD-held mortgage to maintain a residual receipts account for monies beyond the owner’s maximum allowable distribution or profit. 
HUD has no system for notifying owners when a payment delay will occur or when it will be resolved; industry associations representing many owners, as well as the owners with whom we met, indicated that this lack of notice impedes their ability to plan adequately to cover expenses until the late payment arrives. Most of the owners with whom we spoke reported that they received no warning from HUD that their payments would be delayed. Several of the owners told us that notification of the delay and the length of the delay would give them the ability to decide how to mitigate the effects of a late payment. For example, owners could then immediately request access to reserve accounts if the delay were long enough to prevent them from paying their mortgages or other bills on time. Industry group officials with whom we met agreed that a notification of a delayed payment would benefit their members. Project owners, industry group officials, contract administrators, and HUD officials we interviewed generally agreed that market factors primarily drove an owner’s decision to opt out of HUD programs. Owners generally opt out when they can receive higher market rents or when it is financially advantageous to convert their properties to condominiums. In previous work, we reported that financial and market considerations were factors likely to affect owners’ decisions to opt out of HUD’s programs. For profit-motivated owners, this decision can be influenced by the condition of the property and the income levels of the surrounding neighborhood. Owners were more likely to opt out if they could upgrade their properties at a reasonable cost to convert them to condominiums or rental units for higher income tenants. Most of the owners with whom we spoke, including some profit-motivated owners, reported that they would not opt out of HUD programs because of their commitment to providing affordable housing. 
Industry group officials also stated that most of their members are “mission driven,” or committed to providing affordable housing. According to some owners with whom we spoke, owners have accepted payment delays as the price of doing business with HUD. However, industry group and HUD officials stated that delayed payments could be a contributing factor in some opt outs. According to HUD officials, owners with primarily market rents in their buildings were more likely to opt out because the owners felt that the rents from subsidized units were not worth the burden of HUD’s documentation and reporting requirements. Only 1 (a real estate investment firm for a pension fund) of the 16 owners we interviewed stated that the firm would opt out of HUD programs if the payment delays were longer. According to representatives of this firm, their company has a fiduciary responsibility to the pension fund. If they began losing money on their affordable housing projects, they would have to sell them. Our analysis of HUD’s monthly payment data for fiscal years 1995 through 2004 revealed little difference in the percentage of late payments for those contracts that opted out and those still participating in HUD’s programs (9.6 percent and 9.2 percent, respectively). In addition, we found that over the 10-year period, 1,764 housing assistance contracts out of the 13,051 that were eligible to do so opted out of HUD’s programs. These opt outs represented 1,460 affordable housing projects (a project may have more than 1 contract, hence the number of contracts exceeds the number of projects). The number of contracts opting out over this period peaked in fiscal year 1998, with 392 contracts opting out, and gradually declined to 54 in fiscal year 2004 (see fig. 11). 
The number of opt outs likely declined after the passage of MAHRA and HUD’s subsequent efforts to preserve affordable housing by allowing owners to increase the contract rents with HUD to market rates, thereby making it more financially viable for owners to continue participating in HUD’s programs. HUD plays an important role in ensuring the continued availability of affordable housing by providing subsidies to owners of multifamily rental properties and encouraging owners to remain in its programs. Over the 10-year period we examined, HUD made most payments on time—that is, by the 1st business day of the month. However, a significant percentage of HUD’s payments were late. The delays, particularly those of 2 weeks or more, can cause financial hardships for property owners. For example, the subsidies not only help pay mortgages, but also the daily operating expenses of many owners. In retrospect, new requirements under MAHRA and the transition to a new system of contract administration likely increased delays, particularly in the late 1990s. The initial difficulties in implementing MAHRA requirements have abated, and HUD largely has completed the transition to performance-based contract administration. However, while the timeliness of housing assistance payments has improved in recent years, the number of significantly late payments remains a concern. Although HUD has made changes to improve contract administration, it has not comprehensively addressed the factors that most affect the timeliness of payments—that is, its contract renewal and contract funding and monitoring processes. HUD has recognized that its contract renewal process is cumbersome and inefficient and wants to cut contract processing time as one goal of a broader BPR effort. However, that effort has just gotten under way and currently is not closely focused on the housing assistance payment process. 
As a result, if HUD were to rely solely on the reengineering effort, it would miss opportunities to effect more immediate improvements to the processing of contract renewals. In addition, HUD effectively could prevent many delayed payments by better estimating the amounts it needs to obligate to contracts each year, more systematically monitoring contract funding levels on an ongoing basis, and promptly allocating and obligating additional funding to contracts when necessary. Currently, while contract funding needs can increase for unforeseen reasons, HUD often underestimates how much funding a contract will need when it obligates funds at the beginning of a year. Furthermore, HUD’s existing monitoring has not prevented payment delays associated with contracts needing additional funding obligated in order for HUD to pay the owner. As previously noted, HUD has opportunities to improve its contract processes and avoid the often damaging disruptions late payments could cause. While project owners and industry groups have indicated that late housing payments alone would not lead them to opt out of HUD programs, late housing assistance payments have serious consequences for owners and potentially for the residents they serve. But, HUD also has opportunities to mitigate the effects of payments that it cannot make on time. More specifically, if HUD were to notify project owners of delays and their likely duration, owners could make contingency plans or otherwise address the delayed payments. 
To improve the timeliness of housing assistance payments and mitigate the effects on owners when payments are delayed, we recommend that the Secretary of Housing and Urban Development take the following three actions: streamline and automate the contract renewal process to prevent processing errors and delays and eliminate paper/hard-copy requirements to the extent practicable; develop systematic means to better estimate the amounts that should be allocated and obligated to project-based housing assistance payment contracts each year, monitor the ongoing funding needs of each contract, and ensure that additional funds are promptly obligated to contracts when necessary to prevent payment delays; and notify owners if their monthly housing assistance payments will be late and include in such notifications the date by which HUD expects to make the monthly payment to the owner. We provided a draft of this report to HUD for its review and comment. In a letter from the Assistant Secretary for Housing, Federal Housing Commissioner (see app. II), HUD stated that it concurred with our conclusions and agreed that the implementation of our recommendations would improve payment timeliness. Specifically, HUD agreed to review its process for renewing and amending rental assistance contracts to identify areas that can be streamlined and automated. HUD also agreed that developing a more systematic means to estimate contract funding needs would further improve payment timeliness. HUD stated that it has obtained a contractor to determine how to improve its system of estimating contract funding needs. Additionally, HUD agreed that notification to owners when payments will be late is desirable and that it will examine the feasibility of providing such notification. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. 
At that time, we will send copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs, and the Chairman and Ranking Minority Member of its Subcommittee on Housing and Transportation. We will also send copies to the Secretary of Housing and Urban Development and the Director of the Office of Management and Budget. We will make copies available to others upon request. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or woodd@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine the extent to which the Department of Housing and Urban Development’s (HUD) housing assistance payments are timely, we obtained from HUD and analyzed 10 years of monthly payment data (fiscal years 1995 through 2004). We identified the timeliness of each payment within HUD’s data systems by comparing the date that the U.S. Treasury disbursed the payment with the date that the payment was due—the 1st business day of the month. We did not look at the dollar amount of these payments. For contracts administered by performance-based and traditional (or nonperformance-based) administrators, the Treasury payment is disbursed to the administrator, which in turn makes payments to the project owners. In contrast, for HUD-administered contracts, the Treasury disburses payments directly to the owners. We analyzed trends in timeliness over the 10-year period as well as the most recent 3-year period (fiscal years 2002 through 2004) for a more current picture of payment timeliness. We also calculated the percentage of payments that had various degrees of lateness (such as 1 to 6 days or 4 weeks or more). 
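The timeliness analysis described above compares each Treasury disbursement date with the payment's due date, the 1st business day of the month, and groups delays into bands. The sketch below illustrates that logic; the exact bucket boundaries are illustrative, federal holidays are ignored, and the sketch assumes a payment was disbursed in the month it was due, so it is not GAO's precise method.

```python
from datetime import date, timedelta

def first_business_day(year: int, month: int) -> date:
    """First weekday of the month; federal holidays are ignored in this sketch."""
    d = date(year, month, 1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def days_late(disbursed: date) -> int:
    """Days past the due date (the 1st business day of the month).
    Assumes the payment was disbursed in the month it was due."""
    due = first_business_day(disbursed.year, disbursed.month)
    return max((disbursed - due).days, 0)

def lateness_bucket(delay_days: int) -> str:
    """Group a delay into bands like those in the report; the exact
    boundaries here are illustrative."""
    if delay_days == 0:
        return "on time"
    if delay_days <= 6:
        return "1 to 6 days"
    if delay_days < 14:
        return "1 to 2 weeks"
    if delay_days < 28:
        return "2 to 4 weeks"
    return "4 weeks or more"
```

For example, February 1, 2004, fell on a Sunday, so a payment disbursed on Monday, February 2, would be on time, while one disbursed on February 18 would fall in the 2-to-4-week band.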
For fiscal year 2004, we compared timeliness for payments processed by the different types of contract administrators involved in this process (i.e., HUD field offices, performance-based contract administrators (PBCA), and traditional contract administrators, for which the HUD Financial Management Center processes payments). We limited our analysis to fiscal year 2004 because the data we obtained from HUD do not allow us to identify for prior fiscal years which type of contract administrator was responsible for each contract, and, over the course of these years, HUD was in the process of transferring contract administration responsibilities to the PBCAs. To better understand the payment process, we interviewed officials from both HUD’s Office of Multifamily Housing and HUD’s Financial Management Center and reviewed relevant documentation on the payment process. We used various HUD databases to analyze the timeliness of housing assistance payments. Specifically, we used data from HUD’s Program Accounting System (PAS) for payments on contracts administered by HUD and the PBCAs and data from the HUD Central Accounting and Program System for contracts administered by nonperformance-based contract administrators (traditionally administered contracts). We also used these data to determine the percentage of significantly late payments (i.e., 2 weeks or more late), including the distribution by type of contract administrator. We also used the PAS data to analyze differences in payment timeliness by state for PBCA- and HUD-administered contracts. In order to assess the reliability of the data previously described, we reviewed related documentation and interviewed agency officials who work with these databases. In addition, we performed internal checks to determine the extent to which the data fields were populated and the reasonableness of the values contained in the fields. 
During our internal checks, we excluded from our analysis 7 percent of the payments recorded in PAS due to unreasonable values for the payment date. We concluded that the data we used were sufficiently reliable for the purposes of this report. To determine the factors that affect the timeliness of HUD’s housing assistance payments, we interviewed HUD headquarters officials responsible for managing and budgeting for the project-based assistance contracts and payments as well as officials from industry groups representing a variety of property owners and management agents. We also conducted site visits to eight locations that we selected by including those with high and low percentages of late payments. For these site visits, we interviewed the relevant field office officials involved in processing housing assistance payments, renewing housing assistance contracts, and conducting oversight of the PBCAs. We interviewed officials of the PBCAs for each of the states we visited. In each of the eight locations, we also interviewed 2 project owners with some experience with payment delays. We randomly selected 15 of the 16 owners we interviewed; HUD field office officials identified 1 of the project owners during the phase of our work when we were gathering initial background information. For all of our interviews for these site visits, we used a semistructured interview guide to ensure consistency. We also reviewed relevant documentation provided by HUD field officials, the PBCAs, and project owners. We used available HUD data to characterize the reasons for some payment delays for fiscal years 2002 through 2004. We matched PAS payment data on PBCA- and HUD-administered contracts with data on reasons for payment delays from HUD’s Tenant Rental Assistance Certification System (TRACS). We could only determine the reason for delays for 55 percent of the late payments. 
For almost all of the remaining 45 percent of the payments, HUD’s data systems did not accept the voucher—for these payments there was no error code associated with the delay. Although the data on reasons for delays are thus not representative of all late payments in these years, the testimonial evidence we obtained through our discussions with property owners, contract administrators, and HUD officials corroborated the results of our data analysis. HUD did not collect data on the reasons for delayed payments on traditionally administered contracts. We also analyzed data to examine the timeliness of contract renewals with the various types of rent adjustments that owners may seek. To determine the extent to which HUD renewed or adjusted its contracts with property owners within the 120-day time frame that the agency has established, we used data from HUD’s Real Estate Management System covering fiscal years 2002 through 2004. In order to assess the reliability of the data we used to determine reasons for late payments and delays in contract renewals, we reviewed related documentation. In addition, we performed internal checks to determine the extent to which the data fields were populated and the reasonableness of the values contained in the fields. We concluded that the data we used were sufficiently reliable for the purposes of this report. To assess the effects of housing assistance payment delays on project owners and their willingness to continue providing affordable housing, we compared available HUD payment data on projects that have opted out of HUD’s programs with those currently receiving assistance to determine if these projects had experienced more payment delays. We tested for statistically significant differences in the timeliness of payments among properties that had and had not opted out. 
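The report does not specify which significance test was used to compare late-payment rates between opt-out and participating properties. One standard approach for such a comparison is a two-proportion z-test; the sketch below applies it to the reported rates (9.6 percent vs. 9.2 percent) using hypothetical sample sizes, since the underlying counts are not given in this section.

```python
import math

def two_proportion_z(late_a: int, n_a: int, late_b: int, n_b: int) -> float:
    """Z statistic for the difference between two proportions of late
    payments, using the pooled-proportion standard error."""
    p_a, p_b = late_a / n_a, late_b / n_b
    pooled = (late_a + late_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts chosen to match the reported 9.6% and 9.2% rates;
# the actual sample sizes in GAO's analysis were far larger.
z = two_proportion_z(96, 1000, 92, 1000)
significant = abs(z) > 1.96  # two-sided test at the 5 percent level
```

With samples of this size the z statistic is well under 1.96, consistent with the report's finding of little difference between the two groups, though the real test would depend on the actual number of payments analyzed.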
We held meetings with a variety of industry groups to obtain their views on how late payments may affect project owners and their willingness to continue providing affordable housing. We also spoke with HUD field office officials, the PBCAs, and project owners on our eight site visits, as previously mentioned, regarding the effects of late payments on project owners. We conducted our work between October 2004 and September 2005 in Baltimore, Maryland; Boston, Massachusetts; Chicago, Illinois; Des Moines, Iowa; Kansas City, Kansas; Kansas City, Missouri; Los Angeles, California; Manchester, New Hampshire; Seattle, Washington; and Washington, D.C., in accordance with generally accepted government auditing standards. In addition to those named above, Bill MacBlane, Assistant Director; Jackie Garza; Patty Hsieh; Jane Kim; Don Porteous; Julianne Stephens Dieterich; Alison Martin; Marc Molino; Linda Rego; Barbara Roesmann; and Stu Seman made key contributions to this report.
|
The Department of Housing and Urban Development (HUD) provides subsidies, known as housing assistance payments, under contracts with privately owned, multifamily projects so that they are affordable to low-income households. Project owners have expressed concern that HUD has chronically made late housing assistance payments in recent years, potentially compromising owners' ability to pay operating expenses, make mortgage payments, or set aside funds for repairs. GAO was asked to discuss the timeliness of HUD's monthly housing assistance payments, the factors that affect payment timeliness, and the effects of delayed payments on project owners. From fiscal years 1995 through 2004, HUD disbursed three-fourths of its monthly housing assistance payments on time, but thousands of payments were late each year, affecting many property owners. Over the 10-year period, 8 percent of payments were delayed by 2 weeks or more. Payments were somewhat more likely to be timely in more recent years. The process for renewing HUD's subsidy contracts with owners can affect the timeliness of housing assistance payments, according to many owners, HUD officials, and contract administrators that HUD hires to work with owners. HUD's renewal process is largely a manual, hard-copy paper process that requires multiple staff to complete. Problems with this cumbersome, paper-intensive process may delay contract renewals and cause late payments. Also, a lack of systematic internal processes for HUD staff to better estimate the amounts that HUD needs to obligate to contracts each year and monitor contract funding levels on an ongoing basis can contribute to delays in housing assistance payments. 
Although HUD allows owners to borrow from reserve accounts to lessen the effect of delayed housing assistance payments, 3 of 16 project owners told GAO that they had to make late payments on their mortgages or other bills--such as utilities, telephone service, or pest control--as a result of HUD's payment delays. Owners who are heavily reliant on HUD's subsidy to operate their properties are likely to be more severely affected by payment delays than other, more financially stable, owners. Owners reported receiving no warning from HUD when payments would be delayed, and several told GAO that such notification would allow them to mitigate a delay. Nonetheless, project owners, industry group officials, and HUD officials generally agreed that late housing assistance payments would be unlikely to cause an owner to leave HUD's housing assistance programs, because such a decision is generally driven primarily by local market factors.
|
Head Start was created in 1965 as part of President Johnson’s War on Poverty. It was built on the premise that effective intervention in the lives of children can be best accomplished through family and community involvement. Fundamental to this notion was that communities should be given considerable latitude to develop their own Head Start programs. Head Start’s primary goal is to improve the social competence of children in low-income families. Social competence is the child’s everyday effectiveness in dealing with both the present environment and later responsibilities in school and life. Because social competence involves the interrelatedness of cognitive and intellectual development, physical and mental health, nutritional needs, and other factors, Head Start programs provide a broad range of services. Another essential part of every program is parental involvement in parent education, program planning, and operating activities. Head Start is administered by HHS’ Administration for Children and Families (ACF), which includes the Head Start Bureau—one of several under ACF. Agencies that deliver Head Start services at the local level may be either grantees or delegate agencies. Unlike some other federal social service programs that are funded through the states, HHS awards Head Start grants directly to local grantees. Grantees numbered about 1,460 in fiscal year 1997. They may contract with organizations—called delegate agencies—in the community to run all or part of their local Head Start programs. Grantees and delegate agencies include public and private school systems, community action agencies and other private nonprofit organizations, local government agencies (primarily cities and counties), and Indian tribes. Grantees typically must contribute 20 percent of program costs from nonfederal funds. These funds can be cash, such as state, county, and private money, or in-kind contributions such as building space and equipment. 
The average amount of funds available per child in Head Start programs in the 1996-97 program year was $5,186; an average of $4,637 of this amount came from Head Start grant funds. Total funds per child varied widely by program, however, ranging from $1,081 to $17,029 per child. Before using Head Start funds for services, local agencies are required by Head Start regulations to identify, secure, and use community resources to provide services to children and their families. Consequently, Head Start programs have established many agreements for services. Head Start targets children from poor families, and regulations require that at least 90 percent of the children enrolled in each local agency program be low income. As shown in figure 1, Head Start families are poor as indicated by several measures. During the 1996-97 program year, more than one-half of the heads of Head Start households were either unemployed or worked part time or seasonally, and about 60 percent had family incomes under $9,000 per year. Furthermore, only 5 percent had incomes that exceeded official poverty guidelines, and 46 percent received Temporary Assistance for Needy Families (TANF) benefits. Head Start children nationwide had similar demographic characteristics. Most of the children—79 percent—spoke English as their main language. Spanish-speaking children constituted the next largest language group—18 percent. About 38 percent of the children were black, 33 percent were white, and 25 percent were Hispanic. About 13 percent of Head Start children had some sort of disability. The Congress has recently acted to strengthen Head Start’s emphasis on achieving program purposes by, for example, requiring the program to develop performance measures. In reauthorizing the Head Start Act in 1994, the Congress required HHS to develop specific performance measures for Head Start so that program outcomes could be determined. 
This requirement is consistent with the Results Act, which seeks to shift the focus of federal management away from inputs and processes and toward outcomes. Under the Results Act, agencies are required to develop goals and performance measures that will be assessed annually to show progress toward reaching the goals. Agencies are also expected to conduct specific evaluation studies as needed to obtain additional information about what federal programs are achieving. In response to this emphasis on performance assessment, Head Start has developed a framework that links program activities of local Head Start grantees to the program’s overall strategic mission and goal. This framework emphasizes the importance not only of complying with statutes and regulations, but also of achieving demonstrable outcomes. Head Start has developed five measurable, performance-based objectives. Two of these focus on outcomes: (1) enhancing children’s growth and development and (2) strengthening families as the primary nurturers of their children. The other three focus on program activities that the agency believes are critical to achieving the two outcome objectives: (1) providing children with educational, health, and nutritional services; (2) linking children and families to needed community services; and (3) ensuring well-managed programs that involve parents in decision-making. For each objective, Head Start has established one or more performance indicators by which to track the percentage of change. Because data on many of these indicators were not previously available, HHS has designed initiatives to collect the data. Head Start intends to assess progress toward these goals mainly through the Family and Child Experiences Survey (FACES). 
This survey will collect data from families with children enrolled in a random sample of Head Start centers (3,200 families were selected when the survey began in fall 1997), assessing them on a wide range of characteristics at the beginning of program participation, at the end of each year they participate, and at the end of kindergarten. Thus, Head Start will know, for example, if participants’ physical health and emergent literacy and math and language skills have improved. The FACES survey, however, will collect information only at the national level. At the local level, HHS does not require individual Head Start agencies to demonstrate that they have achieved program outcomes. They are only held accountable for achieving the objectives linked specifically to activities, such as providing a developmentally appropriate educational environment. HHS officials told us, however, that they intend in the future to require local agencies to assess what outcomes they have achieved, as some agencies already do. HHS has no specific plan or timetable yet for when this transition will take place. In addition, these HHS initiatives will not address the need for information on Head Start’s impact, limiting its ability to assess how well the program is achieving its purpose. That is, the initiatives will not explain what caused any improved outcomes—whether the same outcomes would have occurred if children and families were in other kinds of early childhood programs or none at all. Although we acknowledge the difficulty of conducting impact studies of programs such as Head Start, we believe that research could be done that would assure the Congress and HHS that the current $4 billion federal investment in Head Start is achieving its purpose. Without such research, program impact will be unclear. The most reliable way to determine program impact is to compare a group of Head Start participants with an equivalent group of nonparticipants. 
Comparison groups are important to determining impact because they prevent mistakenly attributing outcomes to program effects when these outcomes are really caused by other factors. For instance, a recent evaluation of the Comprehensive Child Development Program, a demonstration project involving comprehensive early childhood services like those of Head Start, found positive changes in the families participating. Because the study could compare participants with a comparable group not in the program, however, researchers discovered that families that had not participated also had similar positive changes. They concluded, therefore, that the positive changes could not be attributed to the program. Because of the importance of being able to attribute outcomes to Head Start rather than to other experiences children and their families might have had, we recommended in our 1997 report that HHS include in its research plan an assessment of the impact of regular Head Start programs. Head Start operates in a social environment that differs greatly from that of 30 years ago when the program was established: more parents are working full time, either by choice or necessity, and many more social service programs exist to address the needs of disadvantaged children and their families. These circumstances raise policy questions relevant to any consideration of the Head Start program’s future. Under welfare reform, states that do not engage a sufficient percentage of their welfare recipients in work activities face financial penalties. The required participation rate rises to 50 percent in fiscal year 2002. Head Start’s own data show that about 38 percent of Head Start families needed full-day, full-year child care services in 1997. About 44 percent of the families that needed full-day, full-year child care services left their children at a relative’s or unrelated adult’s home when the children were not in Head Start. 
Because Head Start is predominantly a part-day, part-year program, the full-day needs of families conflict with the way program services have traditionally been delivered. In program year 1996-97, most Head Start children (90 percent) attended programs at group centers, rather than in home settings; about half of them (51 percent) attended centers that operated 3 to 4 hours per day. Only 7 percent of the children attended centers that operated 8 or more hours a day (see fig. 3). Almost two-thirds of the children attended centers that operated 9 months of the year; only one-fourth (27 percent) of the children attended centers that operated 10 to 11 months. And even fewer—7 percent—attended centers that operated year round. One Head Start director told us that working parents increasingly need full-day hours of care for their children; the director stated that the need for part-day services is “evaporating.” Other aspects of the program may also conflict with the priorities of working parents. For example, Head Start’s emphasis on strong parental involvement, its requirement that staff visit children’s homes, and its home-based service delivery option may be more difficult to implement given the schedules of working parents. Head Start program officials told us that welfare reform was already seriously affecting their programs’ makeup. For example, a Head Start director in Montana reported that the program eliminated some of the home-based slots so that more children could attend centers. According to a Head Start director in Pennsylvania, the changed environment presents considerable obstacles to the home-based program. This program will try to accommodate families’ schedules and perhaps conduct home visits in the evening, but the director acknowledged that sometime in the future home visits may no longer be feasible. In 1997, the Congress appropriated additional funds to, among other things, increase local Head Start enrollment by about 50,000 children.
The Head Start Bureau’s priorities for allocating these funds differed from those of the past. In the past, priorities for allocating funds to expand Head Start emphasized part-day, part-year, or home-based services. In recognition of the increasing proportion of Head Start families needing full-day programs for their children, however, the Head Start Bureau announced that programs providing more full-day, full-year Head Start services will receive special priority for the new funds. Head Start has urged local agencies to consider combining these new Head Start expansion funds with other child care and early childhood funding sources and to deliver services through partnerships, such as community-based child care centers. According to HHS officials, this shift in emphasis was responsible for the fact that more than 30,000 of the 36,000 new enrollment opportunities for 3- to 5-year-olds will be for full-day, full-year Head Start. Head Start often facilitates its participants’ access to services, such as immunizations, rather than providing them directly. For example, when we asked Head Start programs the main methods used to provide medical services for enrolled children, 73 percent of survey respondents said that they referred participants to services, and some other entity or program, such as Medicaid’s Early and Periodic Screening, Diagnosis, and Treatment Program, primarily paid for the services. Dental services were also mainly provided by entities other than Head Start programs. Although the number of other programs that provide educational services has also grown in the past 30 years, education is the one service that local Head Start agencies typically provide by delivering it directly rather than facilitating access to it from another source. Some Head Start program officials who contracted with private preschools or child care centers to provide education services described the arrangement as offering benefits to both Head Start and the other program.
For example, the arrangement eliminated the need to find a facility for the Head Start program as well as to provide the facility startup costs. The private center benefited from the arrangement as well because the Head Start funds allowed the center to do some repair work and purchase computers and playground equipment. We do not know the numbers of community programs that may provide education services, their capacity, or the overall quality of these programs. Head Start programs reported, however, that an array of early childhood programs operate in their communities and serve Head Start-eligible children. For example, 70 percent of Head Start program respondents reported to us that their area had state-funded preschools; 90 percent had other preschools and child development and child care centers in their area; and 71 percent reported that family day care homes served Head Start-eligible children in their area. Just as Head Start is not the only community program providing specific services to disadvantaged children and their families, it is also not the only program that uses a community’s network of services to facilitate access to a comprehensive set of services. In a 1995 report (which used 1990 data from a nationally representative sample of early childhood centers), we concluded that most disadvantaged children did not receive a full range of services from early childhood centers in part because of the limited number that could be served and limited subsidies and in part because of such centers’ limited missions. More recent evidence, however, suggests growth in the availability of such services for children. HHS has no information about the number of community programs providing comprehensive services, nor did we obtain this information in our recent study; we plan to explore this further in another study. Our survey found that about 11 percent of the local Head Start agencies served some children who were eligible for Head Start through other early childhood programs.
(Respondents reported serving about 14,000 such children in program year 1996-97.) These children received some or most—but not all—of the services typically provided to children in Head Start programs. These programs were more likely to provide education services, meals, social services, and immunizations; dental and medical services were least often provided. In addition, some states offer preschool programs that emulate Head Start’s comprehensive model. In fact, some states provide services that are seemingly identical to those provided through Head Start. For example, in 1993, Georgia initiated its first statewide prekindergarten program. The program coordinates services for families, and children receive basic health and dental screenings and meals. In addition, Ohio has a state-funded Head Start initiative that coordinates closely with the federal Head Start program. The state-funded initiative offers children services that are identical to Head Start’s. In addition, Ohio has a state-funded preschool program for disadvantaged children that operates according to Head Start performance standards. While recognizing that these social changes may significantly affect Head Start now and in the future, the Congress and Head Start lack information needed to decide what specific actions to take in response to them. Information is lacking about families’ needs for services, how well Head Start’s current structure can respond to those needs, and the array of options available to disadvantaged children and their families. For example, although we expect the need for full-day services to grow, we do not know the extent to which families will choose Head Start—a predominantly part-day educational program—over full-day programs that offer child care, even if the Head Start program has an arrangement with another provider for child care for the rest of the day.
Moreover, evidence suggests that more states, for example, are investing in child care and prekindergarten initiatives. The number of such initiatives is not known, however, nor do we have information on their quality. In addition, only limited anecdotal information exists about Head Start agencies’ initiatives for responding to these trends and the success of those initiatives. Additional information on family service needs and the options available to them would be valuable to Head Start and the Congress in ensuring that the significant investment of federal dollars is used to the greatest advantage to improve the social competence of children in low-income families. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed its work on the Head Start program, focusing on: (1) how well the Department of Health and Human Services (HHS) ensures that the Head Start program is achieving its purpose; and (2) how well Head Start is structured to meet the needs of program participants in today's social context, which differs significantly from that of 30 years ago. GAO noted that: (1) Head Start has, through the years, provided a comprehensive array of services and, as envisioned by the Government Performance and Results Act, has in recent years substantially strengthened its emphasis on determining the results of those services; (2) its processes still provide too little information, however, about how well the program is achieving its intended purposes; (3) HHS has developed a performance assessment framework that effectively links program activities with the program's overall strategic mission and goal; (4) this framework also includes measurable objectives for how the program will be implemented and what outcomes will be achieved; (5) HHS has new initiatives that will, in the next few years, provide information not previously available on outcomes such as gains made by children and their families while in the program; (6) currently, however, these initiatives are limited to assessing outcomes at the national level, not at the local agency level; (7) in addition, GAO is not convinced that these initiatives will provide definite information on impact, that is, whether children and their families would have achieved these gains without participating in Head Start; (8) although obtaining this kind of impact information would be difficult, the significance of Head Start and the sizeable investment in it warrant conducting studies that will provide answers to questions about whether the program is making a difference; (9) in addition to questions about the program's impact, questions exist about whether Head Start is structured to meet the 
needs of today's participants who live in a society much changed since the mid-1960s when the program was created; (10) families' needs have changed as more parents are working full time either by choice or necessity; (11) in addition, children and their families can now receive services similar to Head Start's from a growing number of other programs; (12) these social trends raise questions about how well Head Start is structured to meet participants' needs and, if changes are needed, what those changes should be; and (13) a lack of information about the array of community programs available and about actions local Head Start agencies have already taken hinders decisionmakers' ability to respond to these trends.
VA operates one of the nation’s largest health care systems. In fiscal year 2004, VA provided health care to approximately 5.2 million veterans at 157 VAMCs and almost 900 outpatient clinics nationwide. In fiscal year 2004, DOD provided health care to approximately 8.3 million beneficiaries, including active duty personnel and retirees, and their dependents. DOD health care is provided at more than 530 Army, Navy, and Air Force MTFs worldwide and is supplemented by TRICARE’s network of civilian providers. Through its TRICARE contracts, DOD uses civilian managed health care support contractors to develop networks of primary and specialty care providers and to provide other customer service functions, such as claims processing. DOD’s policy encourages inclusion of all VA health care facilities in its networks. Health care expenditures for VA and DOD are increasing. VA’s expenditures have grown—from about $12 billion in fiscal year 1990 to about $26.8 billion in fiscal year 2004—as an increasing number of veterans look to VA to meet their health care needs. DOD’s health care spending has gone from about $12 billion in fiscal year 1990 to about $30.4 billion in fiscal year 2004—in part, to meet additional demand resulting from congressional actions to expand program eligibility for military retirees, reservists, members of the National Guard, and their dependents, along with the increased needs of active duty personnel involved in conflicts in Afghanistan (Operation Enduring Freedom) and in Iraq (Operation Iraqi Freedom). Today, VA and DOD officials are reporting that many of their facilities are at capacity or exceeding capacity. The nature of sharing has shifted from one of utilizing untapped resources to one of partnering and gaining efficiencies by leveraging resources or buying power jointly. For example, VA and DOD have achieved efficiencies and cost avoidance through a concerted effort to jointly procure pharmaceuticals. 
The Congress has had a long-standing interest in expanding VA and DOD health care resource sharing. In 1982, the Congress passed the Veterans’ Administration and Department of Defense Health Resources Sharing and Emergency Operations Act (Sharing Act). The act authorizes VA and DOD to enter into sharing agreements to buy, sell, and barter health care resources to better utilize excess capacity. The head of each VA and DOD medical facility can enter into local sharing agreements. However, VA and DOD headquarters officials review and approve agreements that involve national commitments, such as joint purchasing of pharmaceuticals. VA and DOD sharing activities have typically fallen into three categories. Local sharing agreements allow VA and DOD to take advantage of their facilities’ capacity to provide health care by being providers of health services, receivers of health services, or both. Health services shared under these agreements can include inpatient and outpatient care; ancillary services, such as diagnostic and therapeutic radiology; dental care; and specialty care services, such as treatment for spinal cord injuries. Other examples of services shared under these agreements include support services, such as administration and management; research; education and training; patient transportation; and laundry. The goals of local sharing agreements are to allow VAMCs and MTFs to capitalize on their combined purchasing power, exchange health services to maximize use of resources, and provide beneficiaries with greater access to care. Joint venture sharing agreements, as distinguished from local sharing agreements, aim to avoid costs by pooling resources to build a new facility or jointly use an existing facility. Joint ventures require an integrated approach, as two separate health care systems must develop multiple sharing agreements that allow them to operate as one system at one location. 
National sharing initiatives are designed to achieve greater efficiencies, that is, to lower cost and improve access to goods and services when they are acquired on a national level rather than by individual facilities—for example, VA and DOD’s efforts to jointly purchase pharmaceuticals and surgical instruments for nationwide distribution. Later, in January 2002, the Congress passed legislation requiring VA and DOD to conduct a comprehensive assessment that would identify and evaluate changes to their health care delivery policies, methods, practices, and procedures in order to provide improved health care services at reduced cost to the taxpayer. To facilitate this, VA and DOD hired a contractor (at a cost of $2.5 million) to conduct the Joint Assessment Study that was completed on December 31, 2003. Unlike previous studies conducted by VA and DOD, the Joint Assessment Study combined VA and DOD beneficiary populations into a single market by geographic site. The contractor examined collaboration and sharing opportunities in three VA and DOD market areas: Hawaii; the Gulf Coast (Mississippi to Florida); and Puget Sound, Washington. Specifically, the study included a detailed independent review of options to colocate or share facilities and care providers in areas where duplication and some excess capacity may exist; optimize economies of scale through joint procurement of supplies and services; and partially or fully integrate VA and DOD systems to provide tele-health services, provider credentialing, cardiac surgical programs, rehabilitation services, and administrative services. The NDAA, passed in December 2002, required that VA and DOD implement two programs—JIF and DSS—to increase the amount of health care resource sharing taking place between VA and DOD. Under JIF, the departments are to identify and provide incentives to implement, fund, and evaluate creative health care coordination and sharing initiatives. 
Under DSS, the departments are to select projects to serve as a test for evaluating the feasibility, advantages, and disadvantages of programs designed to improve the sharing and coordination of health care resources. The NDAA also required VA and DOD jointly to develop and implement guidelines for a standardized, uniform payment and reimbursement schedule for selected health care services. In response, the departments established a standardized reimbursement methodology between VA and DOD medical facilities, effective October 2003, through a memorandum of agreement implementing standardized outpatient billing rates based on the discounted Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) Maximum Allowable Charges (CMAC) schedule. The NDAA also required VA and DOD to develop and publish a joint strategic plan to shape, focus, and prioritize the coordination and sharing efforts within the departments and incorporate the goals and requirements of the joint strategic plan into the strategic plan of each department. We have reported that there is no more important element in results-oriented management than an agency’s strategic planning effort. This is the starting point and foundation for defining what the department seeks to accomplish, identifying the strategies it will use to achieve desired results, and then determining how well it succeeds in reaching goals and achieving objectives. We also previously reported that traditional management practices involve the creation of long-term strategic plans and regular assessments of progress toward achieving the plans’ stated goals. Moreover, the Government Performance and Results Act of 1993 (GPRA) requires agencies to set goals, measure performance, and report on their accomplishments. Performance measures are a key tool to help managers assess progress toward achieving the goals or objectives stated in their plans.
They are also an important accountability tool to communicate department progress to the Congress and the public. Program performance measurement is commonly defined as the regular collection and reporting of a range of data, including a program’s inputs, such as dollars, staff, and materials; workload or activity levels, such as the number of applications that are in process, usage rates, or inventory levels; outputs or final products, such as the number of children vaccinated, number of tax returns processed, or miles of road built; outcomes of products or services, such as the number of cases of childhood illnesses prevented or the percentage of taxes collected; and efficiency, such as productivity measures or measures of the unit costs for producing a service. Other data might include information on customer satisfaction, program timeliness, and service quality. Managers can use the data that performance measures provide to help them manage in three basic ways: to account for past activities, to manage current operations, or to assess progress toward achieving planned goals and objectives. When used to look at past activities, performance measures can show the accountability of processes and procedures used to complete a task, as well as program results. When used to manage current operations, performance measures can show how efficiently resources, such as dollars and staff, are being used. Finally, when tied to planned goals and objectives, performance measures can be used to assess how effectively a department is achieving the goals and objectives stated in its long-range strategic plan. OMB, through the PMA released in the summer of 2001, has emphasized improving government performance through governmentwide and agency- specific initiatives. OMB is responsible for overseeing the implementation of the PMA and tracking its progress. 
According to OMB’s mission statement, its role is to help improve administrative management, develop better performance measures and coordinating mechanisms, and reduce any unnecessary burdens on the public. For each initiative, OMB has established “standards for success” and rates agencies’ progress toward meeting these standards. Among the PMA initiatives, one specifically focuses on improving coordination of VA and DOD programs and systems by increasing the sharing of services that will lead to reduced cost and increased quality of care. While JIF projects experienced challenges caused by delays resulting from the initial absence of funding mechanisms and, in some cases, the need for additional acquisition and construction approvals, as of December 2005, 7 of 11 selected 2004 projects were operational. DSS also experienced challenges as some sites reported difficulty putting together project submission packages, noting confusion over the timelines and approval process as well as frustration with the amount of paperwork and rework required. Nonetheless, as of December 2005, 7 of the 8 DSS projects were operational. The JIF program is to identify, fund, and evaluate creative health care coordination and sharing initiatives. Under the program, VA and DOD solicit proposals from their program offices, VAMCs, or MTFs for project initiatives at least annually. Legislation requires that the Secretaries of VA and DOD each contribute a minimum of $15 million from each department’s appropriation into a no-year account established in the U.S. Treasury for each of fiscal years 2004 through 2007. From December 2002 through May 2005, VA and DOD developed JIF program guidelines, solicited and reviewed proposals, established an account within the U.S. Treasury for funding projects, and selected and funded projects. A memorandum of agreement entered into by VA and DOD assigned the Financial Management Workgroup—a group established by HEC—as the administrator of JIF. 
The Financial Management Workgroup has oversight responsibility for the implementation, monitoring, and evaluation of the JIF program. The members of the workgroup review concept proposals for selection and provide their recommendations to HEC for final approval. They developed the following criteria to be used for evaluating the concept proposals and selecting the final projects: support DOD and VA’s joint long-term approach to meeting the health care needs of their beneficiary populations; improve beneficiary access; ensure exportability to other facilities; maximize the number of beneficiaries who would benefit from the initiative; result in cost savings or cost avoidance; develop in-house capability at a lesser cost for services now obtained by contract; and demonstrate that the project would be self-sustaining within 2 years. If funding is needed beyond 2 years, the local facility, the Surgeon General’s office, or the Veterans Integrated Service Network must agree to provide it. VA and DOD officials completed their review of 58 concept proposals that were submitted for the fiscal year 2004 funding cycle and ultimately selected 12 projects (subsequently reduced to 11) for funding in November 2004. VA and DOD issued a request for project proposals for the fiscal year 2005 funding cycle in November 2004. Submissions were due by January 2005, and according to VA and DOD officials, 56 concept proposals were submitted. VA and DOD reviewed the concept proposals in September 2005 and selected 18 for funding (subsequently reduced to 17). See figure 1 for a timeline and associated events affecting the implementation of the JIF program. Beginning in fiscal year 2004, each department, as required by law, began contributing $15 million annually into the U.S. Treasury account established for funding JIF. VA and DOD report that as of January 2006, $54.3 million of the $90 million they contributed has been allocated to specific projects, and $5.3 million has been obligated. (See table 1.)
For the 2004 JIF projects, project selection took place in August 2004. Initial funding for some of the projects began in November 2004. However, it was not until May 2005—about 2½ years after the program was established—that initial funding was provided to the last of the approved projects. According to officials from both departments, funding delays occurred for a number of reasons. VA and DOD needed time to set up the U.S. Treasury account and to establish funding mechanisms to facilitate the transfer of funds from the account to individual VAMCs or MTFs. Further, funding could not be provided until project officials and the surgeons general for DOD’s Departments of the Army, Navy, and Air Force completed required administrative actions. These actions included obtaining assurance from the surgeons general that service-specific department protocols for disbursing funds were followed and obtaining certification from project officials that projects would be self-sustaining within 2 years. While all approved fiscal year 2004 projects have now received funding, those still in the development phase are in the process of acquiring needed equipment, staff, or space. In addition to the delays caused by VA and DOD administrative processes to fund projects, the individual projects experienced delays for other reasons. For example, officials from both departments reported that additional approvals for acquisition of equipment and minor construction were needed before some projects could be initiated. Specifically, VA and DOD officials in North Chicago, Illinois, stated that in addition to the approvals required from HEC’s Financial Management Workgroup and the Navy Surgeon General’s Office, they were also required to seek and obtain acquisition approval from the National Acquisition Center for the mammography unit requested in their project.
The officials stated that these three distinct approval processes for their JIF project should have been merged into a single approval process. Further, VA and DOD officials in Honolulu, Hawaii, reported that because of delays in obtaining acquisition approvals, pricing increases occurred, resulting in increased cost to the government. Initial project approval occurred in August 2004; however, final contract approval was not granted as of December 2005, over a year later. As of December 2005, 4 of the 11 JIF fiscal year 2004 projects were still in the development stage, with 7 of 11 operational. Some of the projects that were operational include a joint dialysis unit located at Travis Air Force Base, Fairfield, California, that, according to VA and DOD officials, improves access for VA and DOD beneficiaries and lessens the cost to the government by reducing purchased services from the private sector; a tele-radiology unit located at the VAMC in Spokane, Washington, that is providing tomography scans for DOD beneficiaries; and an imaging services unit at Elmendorf Air Force Base in Anchorage, Alaska, that allows VA and DOD to pool their imaging needs and provide services in-house instead of contracting for them at the high fees charged by providers in this remote area. See appendix II for details about JIF projects selected in fiscal years 2004 and 2005. DSS projects are piloting different approaches to sharing health care resources in three areas—budget and financial management, coordinated staffing and assignment, and medical information and information technology. Further, each DSS project contains individual goals that have the potential to promote VA and DOD health care resource sharing and collaboration. The objective of each project is aligned with VA’s and DOD’s strategic goal to jointly acquire, deliver, and improve health care services.
From July 2003 through August 2004, VA and DOD developed DSS program guidelines, solicited and reviewed proposals, and began funding projects. Eight projects were approved by HEC in October 2003; project funding began in August 2004; and as of December 2005, seven projects were operational. The DSS program is to serve as a test for evaluating the feasibility and the advantages and disadvantages of projects designed to improve sharing. The Joint Facility and Utilization Workgroup—a group established by HEC—is responsible for DSS project selection and oversight. Projects selected by the workgroup must be approved by HEC. As required by the statute, there must be a minimum of three VA and DOD demonstration sites (projects) selected. Also, at least one project was required to be tested in each area. The law required each department to make available at least $3 million in fiscal year 2003, at least $6 million in fiscal year 2004, and at least $9 million for each subsequent year in fiscal years 2005 through 2007 to fund DSS projects. During fiscal year 2003 no funds were allocated or obligated to projects because, according to VA and DOD officials, the business plans for the sites had not been finalized. During fiscal years 2004 and 2005, approximately $6.2 million and $12.7 million, respectively, of the $36 million made available by VA and DOD were allocated to specific DSS projects, and $14.4 million was obligated. See table 2 for the amount of funds made available, allocated, and obligated for the DSS program. From July 2003 through October 2003, VA and DOD developed program guidelines and solicited and reviewed project proposals. Each proposal was reviewed and scored by members of the Joint Facility and Utilization Workgroup for each category for which it had been submitted.
For example, according to VA and DOD officials, under budget and financial management, one of the criteria for selection included whether a project allowed managers to assess the advantages and disadvantages—in terms of relative costs, benefits, and opportunities—of using resources from either department to provide or enhance the delivery of health care services to beneficiaries of either department. For coordinated staffing and assignment projects, criteria included whether the project could demonstrate agreement on staffing responsibilities in providing joint services and the development of a plan to provide adequate staffing in the event of deployment or contingency operation. Criteria related to medical information and information technology included whether a project could communicate medical information and incorporate minimum standards of information quality and information assurance related to either credentialing, consolidated mail outpatient pharmacy, or laboratory data sharing. According to VA and DOD officials, upon selection DSS projects are to be monitored via periodic progress assessments to ensure that project activities align with the cost, schedule, and performance parameters outlined in the submitted business plan. The Joint Facility and Utilization Workgroup forwarded eight DSS project proposals to HEC, which approved them in October 2003. However, sites reported some difficulty putting together the project submission packages. For example, one site noted there was initial confusion over the timelines and approval process as each department had differing requirements. Another site expressed frustration with the amount of paperwork and rework required. Nevertheless, by June 2004 the sites developed and submitted for VA and DOD approval proposed implementation and business plans for their projects, in August 2004 VA and DOD began project funding, and in May 2005 VA and DOD reported that they had approved all the proposed project business plans. 
As of December 2005, VA and DOD reported that the following seven DSS projects were operational:

- A project at San Antonio, referred to as the Laboratory Data Sharing Initiative (LDSI), has been successful in enabling each department to conduct laboratory tests and share the results with each other. This project allows a VA provider to electronically order laboratory tests and receive results from a DOD facility, and conversely, a DOD provider can electronically order laboratory tests and receive results from a VA facility. An early version of what is now LDSI was originally tested and implemented at a joint VA and DOD medical facility in Hawaii in May 2003. The San Antonio LDSI demonstration project built on the Hawaii version and enhanced it. According to the departments, a plan to export LDSI to additional sites has been approved.

- An electronic data exchange project at El Paso successfully exchanged laboratory orders and results as well as limited patient information—demographic, outpatient pharmacy, radiology, laboratory, and allergy data.

- An electronic data exchange project at Puget Sound has also achieved similar results by exchanging limited patient information—demographic, outpatient pharmacy, radiology, allergy data, and discharge summaries. The results of the project are scheduled to be replicated at five additional VA and DOD sites during the first quarter of fiscal year 2006.

- A project at Augusta to coordinate the staffing and sharing of nurses at VA and DOD facilities has yielded savings in terms of cost, time, and training resources.

- A project in Alaska is producing itemized bills for each individual VA patient seen at the DOD facility. The cost for each patient visit is then credited in VA's accounting system to capture the workload.

- A project at San Antonio has successfully shared credentialing data for licensed VA and DOD providers through an interface between the two departments' individual credentialing systems.

- A project at Hampton is using an automated tool to evaluate staffing shortfalls and mitigate identified gaps in the resources needed to provide health care services to VA and DOD beneficiaries.

According to VA and DOD officials, they plan to evaluate whether the eight projects were successful and whether they can be replicated at other VA and DOD medical facilities. However, as of November 2005, VA and DOD had not developed an evaluation plan for making these assessments. See appendix III for additional details about the DSS projects. See figure 2 for a timeline and associated events affecting the implementation of the DSS program.

VA and DOD have taken steps to create interagency councils and workgroups to facilitate information sharing and collaboration, establish working relationships among their leaders, and develop communication channels to further health care resource sharing. However, JEC and HEC have not seized upon a number of opportunities to further collaboration and coordination. In addition to the development of the congressionally mandated JIF and DSS programs, VA and DOD have created mechanisms to enhance health care resource sharing by forming JEC and through a proposed federal health care facility in North Chicago. The two departments have also worked together to develop a Joint Strategic Plan outlining six goals. In February 2002, VA and DOD established JEC to enhance VA and DOD collaboration; ensure the efficient use of federal services and resources; remove barriers and address challenges that impede collaborative efforts; assert and support mutually beneficial opportunities to improve business practices; facilitate opportunities to enhance sharing arrangements that ensure high-quality, cost-effective services for both VA and DOD beneficiaries; and develop a joint strategic planning process to guide the direction of joint sharing activities.
JEC is co-chaired by the Deputy Secretary of Veterans Affairs and the Under Secretary of Defense for Personnel and Readiness. Membership consists of senior leaders from both VA and DOD, including VA’s Under Secretary for Benefits and Under Secretary for Health and DOD’s Principal Deputy Under Secretary of Defense for Personnel and Readiness and Assistant Secretary for Health Affairs. JEC established two interagency councils and two interagency committees to facilitate collaboration: (1) Benefits Executive Council, (2) HEC, (3) VA/DOD Construction Planning Committee (CPC), and (4) Joint Strategic Planning Committee. HEC was placed under the purview of JEC specifically to advance VA and DOD health care resource sharing and collaboration. Through HEC, VA and DOD have developed policies and procedures for facilitating health care resource-sharing activities. Together, the two departments are working to create, implement, and adhere to joint standards in the areas of clinical guidelines, information technology, deployment health policies, and purchasing of medical and surgical supplies. HEC has organized itself into 11 workgroups—on subjects such as financial management, pharmacy, and deployment health—in order to carry out its mission (see fig. 3). HEC’s mission includes formulating VA and DOD joint policies that relate to health care, facilitating the exchange of patient information, and ensuring patient safety. HEC membership includes senior leaders from VA and DOD. HEC is co-chaired by VA’s Under Secretary for Health and DOD’s Assistant Secretary of Defense for Health Affairs. DOD membership also includes the surgeons general for the military services. See appendix IV for a description of VA’s and DOD’s councils, committees, and workgroups. HEC workgroups, such as Joint Facility Utilization/Resource Sharing, Deployment Health, and Evidence-Based Practice Guidelines, develop and implement changes in policy and guidance approved by HEC. 
For example, the Deployment Health Workgroup has developed medical and public health policy allowing active duty service members who have been exposed to tuberculosis to be treated by VA without co-payment. This policy allows separating service members to continue to receive antituberculosis prophylactic treatment at a VA facility following their separation from active duty military service. Further, the Deployment Health Workgroup has developed a roster identifying Operation Enduring Freedom and Operation Iraqi Freedom veterans who are separating or who have separated from active duty military service. VA is using this roster to mail letters to individuals thanking them for their service and advising them of their VA benefits based on their service in a combat theater. VA is also using this roster to determine postdeployment VA health care utilization by this population of veterans. Other efforts include the Evidence-Based Practice Guidelines Workgroup's development of standardized guidelines to improve patient outcomes for both VA and DOD beneficiaries. In fiscal year 2005, the workgroup began revising four of its guidelines, including rehabilitation for servicemembers with amputations. Completed guidelines are presented at various national meetings. Tools such as CD-ROMs, pocket cards, and patient brochures are made available to VA and DOD providers in order to enhance communications with their patients. JEC and HEC are also promoting integration through the establishment of a combined VA and DOD federal health care facility in North Chicago. According to VA and DOD, it was through discussions during JEC and HEC meetings that the combined federal facility in North Chicago was envisioned. According to a DOD official, the combined facility will be a hospital. The current plan is to build an ambulatory care clinic that will be attached to the current VA medical center.
According to the DOD official, for the first time VA and DOD will operate a facility under a single chain of command that would integrate the budget and management for providing medical services from both departments to achieve one cohesive medical facility that serves VA and DOD beneficiaries. This management structure differs significantly from joint ventures in which separate VA and DOD management structures coexist. The North Chicago Federal Health Care Facility is scheduled to be operational in fiscal year 2010. VA and DOD also developed a strategic plan in December 2004 that includes six joint goals. Each of JEC's councils and committees and HEC's workgroups has been assigned responsibility for meeting some aspects of the goals outlined in the joint strategic plan. For example, according to VA and DOD officials, the Financial Management Workgroup developed a standardized business case analysis template for the JIF program to increase efficiency of operations. VA and DOD staff utilize this template when requesting funding for joint projects. Previously, the individual branches of the service had their own templates, all of which were slightly different. The departments' joint goals are as follows:

Goal 1: Leadership Commitment and Accountability. Promote accountability, commitment, performance measurement, and enhanced internal and external communication through a joint leadership framework.

Goal 2: High-Quality Health Care. Improve the access, quality, effectiveness, and efficiency of health care for beneficiaries through collaborative activities.

Goal 3: Seamless Coordination of Benefits. Promote coordination of benefits to improve understanding of and access to benefits and services earned by servicemembers and veterans through each stage of life, with a special focus on ensuring a smooth transition from active duty to veteran status.

Goal 4: Integrated Information Sharing. Ensure that appropriate beneficiary and medical data are visible, accessible, and understandable through secure and interoperable information management systems.

Goal 5: Efficiency of Operations. Improve management of capital assets, procurement, logistics, financial transactions, and human resources.

Goal 6: Joint Medical Contingency/Readiness Capabilities. Ensure the active participation of both departments in federal and local incident and consequence response through joint contingency planning, training, and exercising.

While progress has been made, JEC and HEC—which are responsible for advancing VA and DOD health care resource sharing and collaboration—have not seized upon a number of opportunities to promote sharing and collaboration. For example, during the course of our audit work, we found that JEC and HEC have not

- developed a system for jointly collecting, tracking, and monitoring information on the health care services that VA and DOD contract for from the private sector;

- directed that a joint nationwide market analysis be conducted that contains information on what the departments' combined future workloads will be in the areas of services, facilities, and patient needs;

- disseminated in a timely manner the information or the tools developed by a congressionally required study (the Joint Assessment Study) for assessing collaboration and sharing opportunities; or

- established standardized inpatient reimbursement rates—initiatives that would be useful for maximizing health care resource-sharing opportunities and promoting systemwide cost savings and efficiencies.

Though the Army, Air Force, and Navy each record the amount of care that is purchased from the private sector, they do not collectively merge that information or combine it with VA's total expenditures for services purchased from the community.
As a result, a systematic approach for collecting, tracking, and monitoring information on the services that each department contracts for from the private sector is lacking. Such an approach could help VA and DOD achieve systemwide cost savings and efficiencies, as has been demonstrated at the local level, where officials at certain sites compare their analyses and seek to exchange services with one another or possibly obtain better contract pricing through joint purchasing of services. For example, for fiscal year 2003, a VA official at one site estimated that through its agreements with the Army, VA reduced its cost by $1.7 million as compared to acquiring the same services in the private sector; he also estimated that the Army reduced its cost by about $1.25 million as compared to acquiring the same services in the private sector. For instance, the site jointly leased a magnetic resonance imaging (MRI) unit. The unit eliminated the need for beneficiaries to travel to more distant sources of care. According to a VA official, the joint lease reduced MRI costs by 20 percent as compared to acquiring the same services in the private sector. The availability of such information would be helpful to VA and DOD sites at the local level for sharing information on services they have independently contracted for from the private sector. For example, VA and the Air Force at a northern California site were able to create efficiencies after recognizing that they had been independently contracting for the same services. Both VA and the Air Force had been sending patients to private providers for dialysis services—information that is not stored in a database to be shared with all VA and DOD health care facilities. During discussions, local VA and Air Force officials recognized they were paying a high cost for dialysis services, got together to analyze their costs and determine the best approach for obtaining these services, and worked together to open a joint dialysis clinic.
In this case, had VA and the Air Force known about their individual contracting arrangements, they could have combined their contracting needs and negotiated services at a lower cost or opened a joint clinic earlier. In response to our concerns and those of the Congress, VA initiated a review of its capital assets under the Capital Asset Realignment for Enhanced Services (CARES) program. The review was to provide a comprehensive, long-range assessment of VA's health care system's capital asset requirements. In May 2004, the Secretary's CARES decision document was issued and, according to VA, serves as a road map for aligning its facilities with the health care needs of 21st century veterans. The CARES report addresses partnering with DOD. It outlines existing and potential areas of sharing at the local level and opportunities for joint ventures. DOD was authorized to assess its infrastructure and provide base realignment and closure (BRAC) recommendations in 2005 to an independent commission for its review. An objective of the 2005 BRAC Commission, in addition to realigning DOD's base structure to meet post-Cold War force structure, was to examine and implement opportunities for greater sharing with VA. Joint cross-service groups were tasked with analyzing common business-oriented functions, such as health care. The Medical Joint Cross-Service Group was chartered to review DOD's health care functions and to provide BRAC recommendations based on that review. As we reported in July 2005, our examination of the BRAC process found that while the medical group examined the capacity and proximity of VA facilities to existing MTFs in its analysis, it did not coordinate with VA to determine whether military beneficiaries who normally receive care at MTFs could also receive care at VA facilities in the vicinity.
Each department has individually analyzed its health care needs—in part through VA's efforts to realign its capital assets under the CARES process and through DOD's BRAC process. Each department issued reports, which contained references to sharing or partnering with one another in the future. However, JEC and HEC have not conducted a nationwide integrated review and market analysis that would provide information on what their combined future health care workloads and needs may be. Such information is necessary to fully evaluate, and maximize the potential for, health care resource-sharing opportunities. In its February 27, 2006, comments, DOD stated that HEC has established a BRAC Impact and Opportunity Ad Hoc Workgroup to explore and identify opportunities for local collaboration and health care partnerships between VA and DOD in areas potentially affected by BRAC action. The work of this group would be a step in obtaining information on VA's and DOD's combined future health care workloads and needs. Furthermore, JEC and HEC have not disseminated in a timely manner the information or the tools developed by the DOD/VA Joint Assessment Study that examined the collaboration and health care sharing opportunities for three VA and DOD sites. For example, officials at one site stated that they did not receive the study findings until almost a year after the study was completed. At that point, the officials stated that the market information was outdated and of little use to the site in forecasting and planning for future work. In addition, the study also produced a tool for combining VA and DOD beneficiary populations by geographic site. Utilizing this information, the contractor was able to forecast local market demand for health services—potentially allowing VA and DOD officials to plan and provide services to their "combined market." Further, the contractor formulated "crosswalk" tables to assist VA and DOD in matching similar health care services.
Historically, VA and DOD have captured health services information in varying formats and could not always account for their workloads in the same manner. The tool would provide VA and DOD health care managers within geographic areas with information on the health care needs of the combined beneficiary populations—information that could be useful to them for sharing and joint purchase decisions. However, 2 years after the development of the tool, it is being utilized at only one site. During the course of our audit work, we also found instances in which HEC could have asserted itself in local decision making to maximize resource-sharing opportunities as well as to help ensure continuity of care for beneficiaries. For example, see the following:

- In Honolulu, Hawaii, we were informed by DOD that Tripler Army Medical Center (Tripler) had resources available to meet the health care needs of certain VA beneficiaries, yet VA chose to send them to its medical center in Palo Alto, California, for their care. Hawaii VA officials told us VA does this because the cost of care is borne by Palo Alto and not by the Hawaii VA medical center, which would have to reimburse Tripler for the care. Under this scenario, the federal government is paying for underutilized resources and providers at Tripler. We believe HEC has an opportunity to step in and ensure that Tripler resources are fully maximized—an initiative that would ultimately result in overall savings to the government. More important, beneficiaries treated at Palo Alto return to Hawaii and require follow-up care, and in some cases emergency care, that is often provided by Tripler—a situation that could raise continuity of care issues. By fully maximizing resources at Tripler, HEC would be helping to ensure that initial treatments are provided closer to a beneficiary's home and that continuity of care is maintained.

- In San Antonio, Texas, we found that VA contracts out approximately $1.5 million for diagnostic services to various private sector laboratories even though local MTFs have the capacity to provide these services. According to VA, it contracts out to the private sector because the costs are less than what DOD facilities charge. While it is understandable that VA would seek to purchase services at the best prices possible, this practice may result in greater costs to the government as it is incurring VA's costs as well as the costs to maintain underutilized DOD facilities. In this case, JEC and HEC have not taken the initiative to determine the most cost-effective strategy for meeting VA's and DOD's laboratory service needs—information that would be useful for VA and DOD to ensure good stewardship of federal resources.

Finally, we found that HEC could be more proactive in establishing joint policies or guidance in a timely manner that facilitates health care resource sharing. For example, in December 2002 legislation required VA and DOD to establish a national standardized uniform payment and reimbursement schedule for selected health care services. In 2003, VA and DOD established a reimbursement rate for outpatient services. However, VA and DOD have not yet established an inpatient reimbursement rate. Though HEC reports it is in the process of soliciting input and developing guidance for an inpatient rate, we found that without an established inpatient rate local officials were forced to negotiate rates among themselves—an activity that consumed staff time and often created tension between partners. In addition to our observations on opportunities for VA and DOD to strengthen health care resource sharing, OMB, the agency responsible for improving administrative management in the executive branch, also sees room for improvement in achieving the President's goal of increasing VA and DOD health care resource-sharing activities.
OMB evaluates VA's and DOD's health care resource-sharing activities by providing an overall or composite score on their ability and progress to

- exchange patient medical record information between VA and DOD,

- adopt governmentwide information technology standards for health,

- develop a plan for VA to use DOD's enrollment and eligibility data,

- establish the DSS program,

- develop a graduate medical education pilot program,

- increase nongraduate medical education training and education opportunities,

- utilize one examination for separating servicemembers that meets the needs of VA and DOD, and

- purchase medical supplies and equipment jointly.

OMB uses a color code—green, yellow, and red—to score the current status and progress of health care resource-sharing activities. A green score indicates that the departments are achieving the degree of health care resource sharing agreed upon by the departments and the administration. A yellow score means that the coordination of VA and DOD health care resource-sharing activities is yielding mixed results and not meeting their timelines. A red score indicates that the departments are not achieving the degree of health care resource sharing agreed upon by the departments and the administration. Since OMB first began scoring the departments in 2001, the score for "current status" of health care resource sharing has remained yellow, and the score for "progress in implementation" has dropped from the best score of green to a score of yellow. VA and DOD health care resource-sharing activities are guided by a joint strategic plan—the VA/DOD Joint Strategic Plan, December 2004. However, the plan does not contain performance measures that are useful for evaluating how well the departments are achieving their health care resource-sharing goals. For example, the plan mentions 30 measures that could be used to assess the departments' progress in sharing health care resources.
We reviewed the plan and found that the measures could be placed into one of three categories: (1) a measurement that would be developed in the future, (2) a measurement that took place only once, and (3) a measurement that was taken periodically. We placed 5 of the 30 measures in the first category because the plan states that these measures will be developed in the future. For example, the plan states that a communication effectiveness measure will be developed as part of the communication strategy. The plan also states that VA and DOD will develop performance measures related to joint education and training opportunities by December 2006. Further, we placed 11 of the 30 measures in the second category because they call for a single-event measurement, such as "increase the number of collaborative research projects completed by VA and DOD by December 2007," or they state a goal, such as a system "will be fully operational and providing VA benefit eligibility information by December 2008." While measurements of this type may provide useful snapshot information of output from a point-in-time perspective, they are not periodic and thus do not provide long-term or longitudinal information for evaluating the usefulness of specific activities. Finally, in the third category we placed the plan's remaining 14 measures that call for periodic measurement. We found there was variation in the rigor or specificity in the types of data to be collected or the analysis to be performed. For example, CPC is tasked with reporting to JEC quarterly; however, the tasking does not specify the types of data to be collected or the analytical assessments to be performed.
Another performance measure from the plan states that the "Amount of electronic health data available to the other department is higher each quarter reported." The lack of specificity with this performance measure raises questions about the usefulness of the information for evaluating how well the departments are achieving their health care resource-sharing goals. Furthermore, VA and DOD have not established a performance measure that would track their progress in jointly obtaining health care services—such as difficult-to-fill occupations, laboratory tests, and diagnostic equipment. For example, while VA and DOD are in the process of jointly acquiring five MRI units to help with their diagnostic needs through the JIF program, other opportunities for sharing MRI units may exist. During our review, we did not find evidence that VA and DOD top management set an expectation for their medical facility managers to consider partnering prior to purchasing MRI equipment. Without such an expectation and a specific measurement tool or metric to track the joint acquisition and utilization of MRI services, VA and DOD are not in a position to determine on a nationwide basis the most cost-efficient way to obtain and deliver MRI services. When the idea of health care resource sharing was originally conceived and sanctioned by the Congress in the early 1980s, it was based on the premise of excess capacity. However, the circumstances that confront VA and DOD today are quite different, as both departments strive to serve an increasing number of beneficiaries. VA and DOD officials state that many of their facilities are at capacity or exceed capacity. The nature of sharing has shifted from one of utilizing untapped resources to one of partnering and gaining efficiencies by leveraging resources or buying power jointly.
Implementing such a process across all components involved with the delivery of VA and DOD health care should yield positive results as resource sharing becomes an integral part of a systemwide decision-making process. However, while VA and DOD, through JEC and HEC, have created mechanisms that support the potential to increase collaboration, sharing, and coordination of management and oversight of health care resources and services, more can be done to capitalize on this relationship throughout the departments. The Congress provided additional sharing opportunities for local entities through the establishment of JIF and DSS. These programs have laid the foundation for new sharing relationships and, in other cases, have deepened existing relationships. The goals of each of the projects are aligned with VA’s and DOD’s goals to jointly acquire, deliver, and improve health care services. Both the JIF and DSS programs provide a congressionally driven mechanism to help increase the number of new sharing agreements between VA and DOD partners. However, VA and DOD have not yet developed a standardized evaluation plan for documenting and recording the advantages and disadvantages of each project and whether they can be replicated at other VA and DOD medical facilities. Without an established evaluation plan to measure and determine the results of the projects, VA and DOD may lose an opportunity to obtain information that will be useful for determining whether projects can be replicated systemwide. The Joint Strategic Plan is a positive first step toward outlining VA and DOD sharing goals and measures. However, useful specific quantitative performance measures for VA and DOD to track the progress of their health care resource-sharing activities have not been established. Such measures would be a useful tool for VA and DOD to help ensure that health care sharing is optimized and that the departments are cost efficiently achieving their resource-sharing goals. 
To further advance health care resource sharing within VA and DOD, the Secretaries of Veterans Affairs and Defense should direct JEC and HEC to take the following two actions: develop an evaluation plan for documenting and recording the reasons for the advantages and disadvantages of each DSS project, an activity that will assist VA and DOD in replicating successful projects systemwide, and develop performance measures that would be useful for determining the progress of their health care resource-sharing goals. We received comments from VA and DOD on a draft of this report. The departments concurred with our recommendations and also provided technical comments that we have incorporated as appropriate. VA’s comments are included as appendix V and DOD’s comments are included as appendix VI. VA and DOD agreed with our recommendation to develop a DSS evaluation plan and described their plans and timelines for implementing it. The departments stated they have modified an in-progress review template to strengthen department information on the advantages and disadvantages of each project and whether they can be replicated systemwide. According to the departments, the template was distributed to the DSS sites in January 2006 and will be operational in the second quarter of fiscal year 2006. VA and DOD also agreed with our recommendation to develop performance measures that would be useful for determining the progress of achieving health care resource-sharing goals. In their comments, the departments stated that they have, since the work was completed for this report, issued the VA/DOD Joint Executive Council Strategic Plan, Fiscal Years 2006-2008 (signed by VA and DOD on January 26, 2006)—a plan that revises and updates the VA/DOD Joint Strategic Plan, December 2004 and contains performance measures that demonstrate measurable progress relative to specific strategic milestones. 
VA included a copy of the updated plan with its comments and noted that action on this recommendation has been completed as performance measures have been identified for each of the health care resource-sharing goals. We do not agree that the January 2006 plan fully addresses the concerns raised in the report, and maintain our recommendation that useful measures—those that provide specifics regarding time frames, implementation strategies, and the type of information that will be reported to program managers—need to be developed. For example, our review of the Joint Strategic Plan, Fiscal Years 2006-2008, showed that while goal 6—Joint Medical Contingency/Readiness Capabilities—has strategies and key milestones, it contained no performance measures for monitoring progress toward achieving the stated goal. Furthermore, 6 of the plan's 22 performance measures call for one point-in-time measurement and thus do not provide longitudinal information for evaluating the usefulness of specific activities. We are sending copies of this report to the Secretaries of Veterans Affairs and Defense, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7101 or ekstrandl@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Michael T. Blair, Jr., Assistant Director; Aditi Archer; Jessica Cobert; Kevin Milne; and Julianna Williams made key contributions to this report.
To assess the Department of Veterans Affairs’ (VA) and Department of Defense’s (DOD) progress in implementing the Joint Incentive Fund (JIF) and Demonstration Site Selection (DSS) programs, including whether they are operational, we visited VA and DOD medical facilities at six sites—Augusta, Georgia; Honolulu, Hawaii; North Chicago, Illinois; El Paso, Texas; San Antonio, Texas; and Puget Sound, Washington—and interviewed department officials responsible for the development and implementation of each of the projects. In addition, we contacted VA and DOD officials from seven additional sharing sites. For all of the sites, we reviewed approved business case analyses for JIF projects selected in fiscal year 2004 and DSS projects that included detailed descriptions of the projects, timelines for development and implementation, associated risks, costs, potential cost savings (if applicable), staffing requirements, and quarterly progress reports. We also obtained and reviewed VA and DOD policies governing sharing and reviewed relevant department reports, including those from the DOD Inspector General and DOD contractors, along with our prior work. To obtain information on the actions taken by VA and DOD to strengthen the sharing of health care resources, we interviewed officials from VA’s Office of Policy, Planning, and Preparedness and the Veterans Health Administration—including the VA/DOD Liaison Office and VA medical center (VAMC) staff at several locations engaged in the sharing of health care resources. We interviewed officials from DOD’s TRICARE Management Activity; DOD/VA Program Coordination Office; the military services’ surgeons general offices, which coordinate sharing activities; and several military treatment facilities (MTF) engaged in the sharing of health care resources. 
We also interviewed officials from Joint Executive Council (JEC) committees and Health Executive Council (HEC) workgroups to determine what policies, procedures, and guidance have been promulgated to promote health care resource sharing and coordination between VA and DOD. Further, we spoke with officials from the Office of Management and Budget (OMB). We reviewed the charters, when available, and briefing updates for each JEC committee and HEC workgroup and OMB’s scorecards for the President’s Management Agenda initiative directed at VA and DOD sharing. We analyzed sharing data between VA and each branch of service that included workload, sharing agreements, and cost data. We also reviewed the actions taken by both VA and DOD to strengthen the sharing of health care resources. In addition, we evaluated whether health care resource-sharing activities were considered as part of Capital Asset Realignment for Enhanced Services and base realignment and closure decisions. To assess whether VA and DOD performance measures are useful, we interviewed officials from VA’s Office of Policy, Planning, and Preparedness and the Veterans Health Administration—including the VA/DOD Liaison Office and VAMC staff at several locations engaged in the sharing of health care resources. We also interviewed officials from DOD’s TRICARE Management Activity; the DOD/VA Program Coordination Office; the military services’ surgeons general offices, which coordinate sharing activities; and several MTF locations engaged in the sharing of health care resources. We analyzed the VA/DOD joint strategic plan, VA’s strategic plan, DOD’s Military Health System Strategic Plan, VA’s performance and accountability report, DOD’s performance and accountability report, and VA/DOD’s annual report to the Congress on sharing. We conducted our work from January 2005 through March 2006 in accordance with generally accepted government auditing standards. 
Delta Systems II-Cad/Cam System: This is a fabrication technology system that produces molds for prosthetics and orthotics from lightweight foam through use of a laser scanner and mill. Installing this device at Tripler should allow for greater patient access; reduce clinic visits for casting, adjustments, and fittings; and allow for an increase in VA beneficiary access.

Joint TeleMental System: Acquiring videoconferencing technology should allow VA to provide mental health services to DOD beneficiaries approximately 80 miles away.

Joint Dialysis Unit: Through upgrading equipment and increased staffing, Travis Air Force Base’s dialysis unit is expected to be able to accommodate VA beneficiaries.

Mammography Unit Expansion: The purchase of new digital mammography equipment, a stereotactic unit, and the hiring of support staff should reduce wait times for DOD beneficiaries and allow for VA beneficiary access.

Teleradiology Initiative: This project will upgrade DOD’s system so it can download images from VA for radiological interpretation and is intended to allow VA to provide computed tomography scans for DOD patients.

Women’s Health Center: This project proposes to create a comprehensive women’s health center for VA and DOD beneficiaries by coordinating women’s services and includes hiring gynecology, wellness, and case management staff.

Enhanced Outpatient Diagnostic Services: The acquisition of diagnostic equipment is intended to provide in-house imaging services to VA and DOD beneficiaries.

Telepsychiatry: The hiring of a full-time VA psychiatrist is intended to allow VA to provide mental health services to DOD patients via videoconferencing.

Cardiac Catheterization Laboratory: Remodeling existing VA space is intended to accommodate new equipment and provide in-house cardiac services to VA and DOD beneficiaries. 
Expansion of Existing Magnetic Resonance Imaging Joint Venture: The acquisition of an open magnetic resonance imaging unit located at Moncrief Army Community Hospital is intended to provide in-house services to VA and DOD beneficiaries.

North Central San Antonio Clinic: The establishment of a joint VA/DOD clinic is intended to provide greater access for VA and DOD beneficiaries.

Medical Enterprise Web Portals: The project is designed to standardize VA and DOD’s Web portals—they both will have the same “look and feel” to them from a beneficiary perspective, including a requirement that each portal meets national standards regarding accessibility for people with disabilities.

Medical/Surgical Supply Data Sync: This project is intended to create a joint VA and DOD medical/surgical supply catalog. According to the project plan, the catalog will ultimately allow VA and DOD to jointly identify common medical/surgical products procured and maximize joint buying power for these products through negotiated volume purchase contracts.

Radiology: The hiring of additional radiologists is intended to fully utilize existing equipment and provide greater access for VA and DOD beneficiaries.

Sleep Lab Expansion: The renovation and expansion, from two beds to four beds, of the VA Sleep Diagnostic and Treatment Lab is intended to decrease wait times for VA beneficiaries and allow for DOD beneficiary access.

Cardiac Surgery: The consolidation of VA and DOD cardiac surgery programs into a coordinated single large cardiac program is intended to improve quality of care for VA and DOD beneficiaries while achieving efficiencies and economies of scale.

Neurosurgery Program: This project is intended to improve the provision of neurosurgical care to VA and DOD beneficiaries by jointly recruiting neurosurgeons.

Dialysis: By providing the staff necessary to optimally utilize an existing DOD dialysis center, this project is intended to increase access for VA beneficiaries. 
Pain Management Improvement: Converting an anesthesiologist who specializes in pain rehabilitation from part-time to full-time is intended to recapture pain management workload that is currently being outsourced and decrease beneficiary wait times.

Joint Magnetic Resonance Imaging: The acquisition of an open field magnetic resonance imaging unit and the hiring of a radiologist are intended to reduce patient wait time, referrals for contract care, delays in treatment, and length of stay for acutely ill patients.

Clinical Fiber-Optics: By providing the necessary high-speed clinical connectivity between VA and DOD facilities, this project is intended to provide the bandwidth needed to transmit clinical images to VA.

Oncology: This project is intended to create a hematology-oncology program for VA and DOD beneficiaries, who are currently referred to the local community.

Digital Imaging: This pilot data exchange program is intended to enable the seamless sharing of digital images, text, and patient demographic information between clinical VA and DOD systems.

Hyperbaric Medicine: Modifications to the DOD facility to allow for the installation of a hyperbaric chamber are intended to provide greater access and decrease surgical wait times for VA and DOD beneficiaries.

Mobile Magnetic Resonance Imaging: This project is intended to provide access to VA and DOD beneficiaries through the acquisition of a mobile magnetic resonance imaging unit.

Mobile Magnetic Resonance Imaging: Site preparation and the acquisition of a mobile magnetic resonance imaging unit along with a digital printer are intended to recapture magnetic resonance imaging exams that are currently purchased in the local community, thereby improving access for VA and DOD beneficiaries.

Healthcare Planning Data Mart: This project plans to develop a joint VA and Air Force database to capture the amount of care each contracts for outside of its respective health care system. 
Through the creation of the database, VA and Air Force managers hope to identify areas in which they can jointly purchase services and achieve savings through leveraged buying power.

Mobile Magnetic Resonance Imaging: The acquisition of a mobile magnetic resonance imaging unit is intended to recapture magnetic resonance imaging exams that are currently purchased in the local community, thereby improving access for VA and DOD beneficiaries.

Joint Venture Operations Revenue Cycle—The goal of this project is to conduct studies in four key areas and execute their findings. (1) Health Care Forecasting, Demand Management, and Resource Tracking: Define, test, and implement a system that will combine VA and DOD data for beneficiaries receiving care in the Pacific Islands joint venture market. This will include all eligibility, insurance, administrative, clinical, staffing, and costing data that will allow VA and DOD to query and output information on utilization and demand, supply and capacity, combined costs, facility and staff, services, and beneficiary population. (2) Referral Management and Fee Authorization: Define, test, and implement a system that will provide the capability of timely tracking of authorizations, obligations, and provisions of clinical care to beneficiaries referred from one department to the other. (3) Joint Charge Master Based Billing: Define, test, and implement a system that will provide DOD with the capability for itemized billing and patient-level costing. (4) Document Management: Define, test, and implement a system that gives VA and DOD the capability to support all the business and clinical processes of sharing care.

Joint Venture Business Directorate—This project intends to achieve the following goals: (1) Through the use of a joint business office, evaluate areas of business collaboration as VA moves its main operation next door to the existing joint venture hospital. 
Areas for possible sharing include library, warehouse, radiology, ambulatory surgery, central sterile supply, GI procedure space, education facilities, physical plant utilities, security services, and patient transportation. (2) Generate itemized bills and utilize the existing VA fee program to capture workload and patient-specific health information. (3) Create a coordinated calculation of cost-based expenses to assist in market area procurement decisions.

Joint Staffing—VA and DOD plan to jointly recruit, hire, and train staff for difficult-to-fill direct patient care occupations, which provide clinical and ancillary support services. Specifically, the project is designed to (1) utilize the Augusta VAMC’s successful recruitment initiatives to aid DOD in hiring staff for direct patient care positions it has been unable to fill, (2) unite training initiatives so direct patient care staff may take advantage of training opportunities at either facility, and (3) hire and train a select group of staff that would service either facility when a critical staffing shortage occurred.

Coordinated Staffing Initiative—This project is intended to achieve the following goals: (1) Develop a process to identify department-specific needs to address staffing shortfalls for integrated services. (2) Create a method to compare, reconcile, and integrate requirements between facilities. (3) Determine a payment methodology to support the procurement process for staffing shortfalls. (4) Establish a joint referral and appointment process, to include allocation of capacity and prioritization of workload. (5) Maintain an ongoing assessment of issues and problem resolution.

Health Care Data Exchange—The goal of this project is to transmit a limited subset of currently available clinical data between VA and DOD. 
The intent of this project is to work with the developers of Composite Health Care System II (CHCS II), Bidirectional Health Information Exchange (BHIE), and the Computerized Patient Record System to exchange and view data such as discharge summaries.

Laboratory Data Sharing—with CHCS II modifications: Phase I is the implementation of the Laboratory Data Sharing Initiative (LDSI) with the CHCS II modification. LDSI implementation is intended to eliminate rekeying of orders entered by VA providers in VA’s Veterans Health Information Systems and Technology Architecture (VISTA) into DOD’s CHCS II, decrease errors caused by transcription, and increase the speed of lab results availability to VA providers for treatment purposes. Phase II will be the implementation of the BHIE project, which is currently being deployed, with the CHCS II modification. Initial focus will be on data sharing related to patient demographic information, outpatient pharmaceuticals prescribed to patient populations, and allergy information. Phase III expands on the initial development of the BHIE project by including the data sharing of radiology reports (text) and laboratory results, including anatomic pathology.

Laboratory Data Sharing—VA’s VISTA to DOD’s Composite Health Care System I (CHCS I): LDSI is intended to meet the need of receiving electronic patient test results from reference labs, thereby eliminating manual data entry of such results. The goal is to create bidirectional communication between VISTA and CHCS I to facilitate ordering, sending, and receiving of all lab test subscripts (including chemistry, anatomic pathology, and microbiology). Tangible benefits include more efficient use of man-hours from not having to manually enter test results and improved turnaround time for providers to receive results. Intangible benefits include increased patient safety through the elimination of manual entry of test results. 
Joint Credentialing System—VA and DOD plan to jointly credential licensed providers based on an interface between DOD’s Centralized Credentials Quality Assurance System (CCQAS) and VetPro, VA’s credentialing system. The project is divided into four phases: Phase I–Implement the current version of CCQAS that is available at the time of implementation with the interface. Phase II–Create a means to provide the capability to view credentialing files and scanned primary source verification documentation in either system by VA or DOD staff. Phase III–Expand the use of credentialing in VetPro at VA and CCQAS at DOD to include nurses and other licensed professionals. Phase IV–Explore the feasibility of a local centralized site for primary source verification.

Joint Executive Council (JEC): Established in February 2002, VA and DOD’s JEC was created to enhance VA and DOD collaboration, ensure the efficient use of federal resources, remove barriers and address challenges that impede collaborative efforts, assert and support mutually beneficial opportunities to improve business practices, and develop a joint strategic planning process to guide the direction of sharing activities. JEC is co-chaired by the Deputy Secretary of Veterans Affairs and the Under Secretary of Defense for Personnel and Readiness. Membership consists of senior leaders from both VA and DOD, including VA’s Under Secretary for Benefits and Under Secretary for Health and DOD’s Principal Deputy Under Secretary of Defense for Personnel and Readiness and Assistant Secretary for Health Affairs. JEC has two interagency councils and two interagency committees to further facilitate collaboration and sharing opportunities: (1) the Benefits Executive Council, (2) the Joint Strategic Planning Committee, (3) the Construction Planning Committee, and (4) the Health Executive Council. 
JEC’s primary responsibility is to set strategic priorities for the four interagency councils and committees, monitor the development and implementation of the Joint Strategic Plan, and ensure accountability is incorporated into all joint initiatives.

Benefits Executive Council (BEC): Established by JEC in August 2003, BEC was charged with examining ways to expand and improve information sharing, refine the process of records retrieval, identify procedures to improve the benefits claims process, improve outreach, and increase servicemembers’ awareness of potential benefits. In addition, BEC provides advice and recommendations to JEC on issues related to seamless transition from active duty to veteran status through a streamlined benefits delivery process, including the development of a cooperative physical examination process and the pursuit of interoperability and data sharing.

Joint Strategic Planning Committee: Established by JEC in October 2002, the committee was charged with developing a joint strategic plan that, through specific initiatives, would improve the quality, efficiency, and effectiveness of the delivery of benefits and services to both VA and DOD beneficiaries through enhanced collaboration and sharing.

VA/DOD Construction Planning Committee (CPC): Established by JEC in August 2003, CPC provides a formalized structure to facilitate cooperation and collaboration in achieving an integrated approach to capital coordination that considers both short-term and long-term strategic capital issues. CPC was charged with providing oversight to ensure that collaborative opportunities for joint capital asset planning are maximized, and provides the final review and approval of all joint capital asset initiatives recommended by any element of the JEC structure.

Health Executive Council (HEC): In 1997, VA and DOD established HEC—a precursor to JEC. HEC was co-chaired by the VA Under Secretary for Health and the Assistant Secretary of Defense (Health Affairs). 
JEC rechartered HEC in August 2003 to oversee the cooperative efforts of each department’s health care organizations. HEC has charged workgroups to focus on specific high-priority areas of national interest. HEC has organized itself into 11 workgroups to carry out its mission—to institutionalize VA and DOD sharing and collaboration through the efficient use of health services and resources.

1. Contingency Planning: The workgroup is responsible for developing collaborative efforts in support of the VA and DOD Contingency Plan and the National Disaster Medical System. Through the workgroup, VA and DOD are in the process of jointly updating the memorandum of understanding regarding VA furnishing health care services to members of the armed forces during a war or national emergency.

2. Continuing Education and Training: The workgroup is responsible for developing a shared training infrastructure and for designing, developing, and managing the operational procedures to facilitate increased sharing of education and training opportunities between VA and DOD.

3. Deployment Health: The workgroup is responsible for enhancing health care available to servicemembers returning from overseas deployment. Focusing on health risks associated with specific deployments, the group developed proactive approaches toward deployment health surveillance, health risk communication, and early identification and treatment of deployment-related health problems.

4. Evidence-Based Practice Guidelines: The workgroup is responsible for the creation and publication of jointly used guidelines for disease management.

5. Financial Management: The workgroup is responsible for developing and disseminating principles and procedures, interpreting current policies and guidance, establishing policies to be used in creating reimbursable arrangements, and resolving disputed issues related to such arrangements that cannot be resolved at local or intermediate organizational levels. 
The workgroup is also responsible for the implementation of JIF.

6. Graduate Medical Education (GME): The workgroup is responsible for reviewing the current state of the GME program between both departments and implementing the joint pilot program for GME under which graduate medical education and training is provided to military physicians and physician employees of DOD and VA through one or more programs carried out in DOD MTFs and VAMCs, as mandated by legislation in December 2002.

7. Joint Facility Utilization and Resource Sharing: The workgroup is responsible for examining issues such as removing barriers to resource sharing and streamlining the process for approving sharing agreements. The workgroup was originally tasked with identifying areas for improved resource utilization through local and regional partnerships, assessing the viability and usefulness of interagency clinical agreements, identifying impediments to sharing, and identifying best practices for sharing resources. The workgroup was responsible for providing oversight of the DOD/VA Joint Assessment Study mandated by the Department of Defense and Emergency Supplemental Appropriations for Recovery from and Response to Terrorist Attacks on the United States Act, 2002. The workgroup is also responsible for the implementation of DSS.

8. Information Management/Information Technology: The workgroup is responsible for developing interfaces and implementing standards to facilitate interoperability for improving exchange of health data between VA and DOD.

9. Medical Materiel Management: In lieu of a charter, VA and DOD officials signed a memorandum of agreement. Under the terms of the memorandum, the workgroup is to “combine identical medical supply requirements from both agencies and leverage that volume to negotiate better pricing.”

10. Patient Safety: The workgroup is responsible for reviewing and developing internal and external reporting systems for patient safety. 
DOD has established a Patient Safety Center at the Armed Forces Institute of Pathology using the VA National Center for Patient Safety as a model.

11. Pharmacy: The workgroup is responsible for expanding participation by the VA Pharmacy Benefits Management Strategic Health Care Group and the DOD Pharmacoeconomic Center to evaluate high-dollar and high-volume pharmaceuticals jointly. According to the workgroup, it is overseeing joint actions, such as joint contracts involving high-dollar and high-volume pharmaceuticals, which are designed to increase uniformity and improve the clinical and economic outcomes of drug therapy in the VA and DOD health systems. The workgroup’s goals include eliminating unnecessary redundancies that exist in areas of class reviews, contracting, prescribing guidelines, and utilization management.

Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.

VA and DOD Health Care: VA Has Policies and Outreach Efforts to Smooth Transition from DOD Health Care, but Sharing of Health Information Remains Limited. GAO-05-1052T. Washington, D.C.: September 28, 2005.

Computer-Based Patient Records: VA and DOD Made Progress, but Much Work Remains to Fully Share Medical Information. GAO-05-1051T. Washington, D.C.: September 28, 2005.

Mail Order Pharmacies: DOD’s Use of VA’s Mail Pharmacy Could Produce Savings and Other Benefits. GAO-05-555. Washington, D.C.: June 22, 2005.

DOD and VA Health Care: Incentives Program for Sharing Health Resources. GAO-05-310R. Washington, D.C.: February 28, 2005.

VA and DOD Health Care: Resource Sharing at Selected Sites. GAO-04-792. Washington, D.C.: July 21, 2004.

DOD and VA Health Care: Incentives Program for Sharing Resources. GAO-04-495R. Washington, D.C.: February 27, 2004.

DOD and VA Health Care: Access for Dual Eligible Beneficiaries. GAO-03-904R. Washington, D.C.: June 13, 2003. 
VA and Defense Health Care: Increased Risk of Medication Errors for Shared Patients. GAO-02-1017. Washington, D.C.: September 27, 2002.

VA and Defense Health Care: Potential Exists for Savings through Joint Purchasing of Medical and Surgical Supplies. GAO-02-872T. Washington, D.C.: June 26, 2002.

DOD and VA Pharmacy: Progress and Remaining Challenges in Jointly Buying and Mailing Out Drugs. GAO-01-588. Washington, D.C.: May 25, 2001.

VA and Defense Health Care: Evolving Health Care Systems Require Rethinking of Resource Sharing Strategies. GAO/HEHS-00-52. Washington, D.C.: May 17, 2000.
|
The National Defense Authorization Act for Fiscal Year 2003 required that the Departments of Veterans Affairs (VA) and Defense (DOD) implement programs referred to as the Joint Incentive Fund (JIF) and the Demonstration Site Selection (DSS) to increase health care resource sharing between the departments. The act requires GAO to report on (1) VA's and DOD's progress in implementing the programs. GAO also agreed with the committees of jurisdiction to report on (2) the actions taken by VA and DOD to strengthen resource sharing and opportunities to improve upon those actions and (3) whether VA and DOD performance measures are useful for evaluating progress toward achieving health care resource-sharing goals. VA and DOD are making progress in implementing two programs required by legislation in December 2002 to encourage health care resource sharing and collaboration--JIF and DSS. While JIF projects experienced challenges because of delays resulting from the initial absence of funding mechanisms and, in some cases, the need for additional acquisition and construction approvals, as of December 2005, 7 of 11 selected 2004 projects were operational. The DSS program also experienced challenges as some sites reported difficulty putting together project submission packages, noting confusion over the timelines and approval process as well as frustration with the amount of paperwork and rework required. Nonetheless, as of December 2005, 7 of the 8 DSS projects were operational. However, the Joint Executive Council (JEC) and Health Executive Council (HEC), VA and DOD entities established to facilitate collaboration and health care resource sharing between the departments, have not established a plan to measure and evaluate the advantages and disadvantages of DSS projects--information that will be useful for determining if projects that produce cost savings or enhance health care delivery efficiencies can be replicated systemwide. 
VA and DOD are creating mechanisms that support the potential to increase collaboration, sharing, and coordination of management and oversight of health care resources and services. The departments have taken steps to create interagency councils and workgroups to facilitate collaboration and sharing of information, establish working relationships among their leaders, and develop communication channels to further health care resource sharing. In addition, the departments developed a Joint Strategic Plan outlining six goals. However, JEC and HEC have not seized upon a number of opportunities to further collaboration and coordination. For example, JEC and HEC have not developed a system for collecting and monitoring information on the health care services that each department contracts for from the private sector--such as individual VA medical center or military treatment facility contracts for dialysis, laboratory services, or magnetic resonance imaging. If such a system were in place, the departments could use it to identify services that they could exchange with one another or possibly obtain better contract pricing through joint purchasing of services, thus promoting systemwide cost savings and efficiencies. Furthermore, JEC and HEC have not directed that a joint nationwide market analysis be conducted to obtain information on what their combined future workloads will be in the areas of services, facilities, and patient needs. VA and DOD lack performance measures that would be useful for evaluating how well they are achieving their health care resource-sharing goals. For example, of the 30 measures contained in the departments' joint strategic plan, 5 were not developed at the time the plan was issued and 11 lacked longitudinal information. For the remaining 14 that require periodic measurement, there was variation in the rigor or specificity in the types of data to be collected or the analysis to be performed.
|
Each state administers its Medicaid program in accordance with its own Medicaid plan, which determines the groups of individuals to be covered, the services to be provided, the methodologies for reimbursing providers, and the administrative requirements that states must meet. To receive federal matching dollars for services provided to Medicaid beneficiaries, each state must submit a Medicaid plan for review and approval by HHS. States must meet certain federal requirements, but have flexibility beyond these federal parameters. For example, states must cover certain “mandatory” populations and benefits, but they have the option of covering “optional” categories of individuals and benefits. Coverage of optional populations and benefits varies across the states. States may also choose from different delivery systems, such as fee-for-service or managed care. States pay health care providers for covered services provided to Medicaid beneficiaries based on provider claims for services rendered. States generally make two types of supplemental payments to certain providers—payments separate from and in addition to those made to providers using regular Medicaid payment rates. Under federal law, states are required to make Disproportionate Share Hospital (DSH) payments to hospitals that serve a disproportionate share of low-income and Medicaid patients, in addition to regular Medicaid payments. Hospitals are subject to an annual limit on DSH payments, defined as a hospital’s uncompensated care costs for Medicaid and uninsured patients minus Medicaid payments and payments made on behalf of uninsured patients. States also make other supplemental payments, which are often referred to as non-DSH supplemental payments or “Upper Payment Limit (UPL) payments,” to providers such as hospitals and nursing homes. 
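The hospital-specific DSH limit described above is a straightforward subtraction. The sketch below illustrates the mechanics; the function name and dollar figures are our own illustrative assumptions, not values from federal rules or the report.

```python
def dsh_limit(uncompensated_care_costs, medicaid_payments, uninsured_payments):
    """Illustrative annual hospital-specific DSH ceiling: uncompensated care
    costs for Medicaid and uninsured patients, net of Medicaid payments
    received and payments made on behalf of uninsured patients."""
    return uncompensated_care_costs - medicaid_payments - uninsured_payments

# Hypothetical hospital: $10M in costs of serving Medicaid and uninsured
# patients, $6M in Medicaid payments received, $1M paid on behalf of
# uninsured patients -> $3M DSH ceiling for the year.
print(dsh_limit(10_000_000, 6_000_000, 1_000_000))  # 3000000
```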
These payments are based on the difference between Medicaid payments for services using regular Medicaid payment rates and the UPL, which is the ceiling on federal reimbursement. In general, the use of managed care to deliver Medicaid services precludes states from making UPL payments to providers because states are prohibited from making such payments for services provided under a managed care contract. Generally, the authority provided to the Secretary of Health and Human Services by section 1115 of the Social Security Act allows states to expand Medicaid coverage through demonstration projects to “expansion” populations that would not otherwise be eligible under traditional Medicaid programs. These demonstrations provide a way for states to innovate outside of many of Medicaid’s otherwise applicable requirements. For example, states may test ways to obtain savings or efficiencies in how they deliver services in order to cover expansion populations. Under a demonstration, states may also alter their Medicaid benefit package for categories of covered populations. Without this authority, states generally would be required to provide covered benefits in the same amount, duration, and scope to all beneficiaries covered under the state plan. States may have more than one comprehensive demonstration. For example, New Jersey had one demonstration targeted at expanding coverage to uninsured childless adults, and a separate demonstration targeted at expanding coverage to uninsured parents of Medicaid-eligible children. Both these demonstrations are comprehensive because they provide a broad range of services to these populations. States may also administer a large portion of their Medicaid program under a demonstration. For example, in Vermont, nearly all of the state’s Medicaid expenditures in fiscal year 2011 were for costs associated with a demonstration. 
Generally, to extend Medicaid to any previously uncovered populations or receive federal Medicaid matching funds for otherwise unallowable costs under the terms of a section 1115 demonstration, states must establish that the demonstration is budget neutral. To do so, states must show that their plans for changing their Medicaid program will generate savings to Medicaid, or they must get approval for redirecting existing Medicaid funding to cover the expected costs of the demonstration. For example: States have expanded the population eligible for Medicaid coverage by implementing managed care. In these demonstrations, states established budget neutrality by showing they would achieve savings from enrollment in managed care that could be used to cover new populations under the demonstration. States also have been approved by HHS to redirect certain categories of federal Medicaid funding for new purposes under the demonstration. Specifically, states have received approval to use all or a portion of their DSH allotments to cover previously ineligible individuals and costs under their demonstrations. States also have expanded coverage to previously ineligible populations, but in order to maintain budget neutrality, have provided the expansion population with a reduced benefit package—such as not covering inpatient hospital care—as compared to the typical benefits provided to Medicaid beneficiaries. Other strategies have included imposing higher cost-sharing on services or capping enrollment for expansion populations. States submit applications for section 1115 demonstrations to HHS. If HHS approves the demonstration, it is typically approved for a 5-year period. States that want to renew an existing demonstration have the option of requesting an extension or submitting an application for a new demonstration. 
States that submit an application for a new demonstration instead of an extension would need to terminate the existing demonstration, and would be required to notify beneficiaries of potential changes in coverage. A federal review team examines applications for both new demonstrations and extensions. The review team is led by CMS and includes representatives from OMB; from other agencies within HHS as applicable, such as the Substance Abuse and Mental Health Services Administration providing a review of waivers that affect mental health; and HHS Secretarial offices including the Assistant Secretary for Planning and Evaluation and the Assistant Secretary for Financial Resources. CMS’s Office of the Actuary provides nationwide data on projected Medicaid cost growth, but is not part of the federal review team. The federal review may consist of negotiations, including the exchange of questions and answers between the review team and the state. In approving applications, HHS might not approve all components of the states’ request contained in their applications. (See app. I for a discussion of applications that were submitted and reviewed between January 2007 and May 2012.) According to HHS’s policy, spending limits are based on the projected cost of continuing states’ existing Medicaid programs without a demonstration. The higher the projected costs, the more federal funding states are eligible to receive. The spending limits can be either an annual per person limit or an aggregate spending limit that remains fixed for the entire length of the demonstration, or a combination of both. HHS policy states that demonstration spending limits will be calculated from two components: Spending base. States select a recently completed year that establishes base levels of expenditures for populations included in the proposed demonstration—a state’s “spending base.” States also identify beneficiary groups for inclusion in the proposed demonstration. 
For example, demonstrations may include beneficiary groups, such as aged, blind and disabled, or families with children. However, the spending base must exclude certain base year expenditures, such as impermissible provider payments. Growth rates. States should submit to HHS 5 years of historical data for per person costs and beneficiary enrollment in their existing Medicaid program. HHS’s policy states that spending limits should be based on a benchmark growth rate, which is the lower of state-specific historical growth or the estimates of nationwide growth for the beneficiary groups included in the demonstration. The policy indicates that states, in providing HHS with state-specific historical growth rates, must also provide quantified explanations of any unusual changes in the trends. Nationwide projections of cost growth are developed by CMS actuaries to assist OMB in preparing the President’s budget. Growth rates for determining budget neutrality can vary for different eligibility groups. For example, the nationwide estimates of per capita cost growth in Medicaid for fiscal year 2012 were 6.0 percent for children, 3.4 percent for aged individuals, 2.6 percent for blind and disabled individuals, and 2.5 percent for adults. Figure 1 illustrates steps used to set spending limits for proposed section 1115 demonstrations. Some types of section 1115 demonstrations are not required to follow this process for determining spending limits. Specifically, for demonstrations that redirect a state’s federal DSH funding, HHS policy is to base the spending limit on the lower of the state’s DSH allotment or actual DSH expenditures prior to the demonstration. In addition, there is another group of recent section 1115 demonstrations pursuant to which federal law has defined how to calculate budget neutrality.
Specifically, under the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA), states with existing section 1115 demonstrations covering childless adults using State Children’s Health Insurance Program (CHIP) funding were required to end these projects. However, these states could apply for and receive approval of new section 1115 demonstrations through which they could continue to cover childless adults using Medicaid funds. CHIPRA required that these new demonstrations be budget neutral, and required HHS to use a defined process of identifying the spending base and growth rates for demonstration spending limits. The 10 new comprehensive section 1115 demonstrations we examined focused on implementing ways of using federal funds to pay for services not typically covered under Medicaid. All 10 demonstrations were approved to implement different coverage strategies or cost sharing for certain beneficiary populations. Appendix II provides a brief summary of the key features of the 10 demonstrations. Two states we reviewed—Arizona and Texas—obtained the authority under their section 1115 demonstrations to establish funding pools for purposes of making supplemental payments and to receive federal matching funds for these payments. As approved, Arizona’s section 1115 demonstration allowed the state to make new types of supplemental payments to providers and to establish a funding pool from which these payments could be made. Arizona has operated a comprehensive section 1115 demonstration for many years, under which the majority of the Medicaid population is enrolled in managed care. Under its previous demonstration, the state made DSH payments to hospitals, but did not make UPL payments to its providers because the majority of services were provided under managed care contracts. UPL payments for services provided under managed care are generally prohibited under federal regulations.
In March 2011, the state requested to terminate its existing section 1115 demonstration in order to limit coverage of certain adult populations. Subsequently, the state received approval for a new demonstration that continued the existing managed care delivery system, granted the state authority to make new types of supplemental payments to providers through a Safety Net Care Pool (SNCP), and expanded coverage to certain populations. Under the demonstration, the state obtained the authority to claim federal matching funds for new types of supplemental payments made to providers from the SNCP. The state did not commit any state funds for these supplemental payments and instead relied on the contributions of eligible government entities for the nonfederal share of payments. According to HHS officials, because these supplemental payments were created under the authority of the demonstration, they were not subject to certain federal requirements that would otherwise apply. For example, officials reported that they did not consider these to be DSH payments, and therefore the federal reimbursement for SNCP payments could exceed the maximum amount Arizona was allowed to receive under its DSH allotment. Similarly, the payments were not considered to be UPL payments and therefore could be made even when services were provided under a managed care contract. According to HHS officials, the terms and conditions of the demonstration defined requirements for these payments. For example, under the terms and conditions, because the SNCP payments were Medicaid payments, HHS required that they be subject to certain requirements when made to DSH hospitals. Specifically, SNCP payments received for inpatient or outpatient hospital costs were required to be counted against each hospital’s annual DSH payment limit. The terms and conditions of Arizona’s demonstration also included other federal requirements and limits for the SNCP payments.
Specifically: For each year of the demonstration, the state was allowed to make up to $332 million—total federal and nonfederal funds—in payments to hospitals, clinics, and other nonhospital providers that have high levels of uncompensated care for medical services provided to Medicaid-eligible and uninsured individuals. Demonstration requirements also limited these SNCP payments to individual providers’ costs of delivering services to Medicaid and uninsured individuals, and prohibited SNCP payments for nonemergency services provided to noncitizens who were not eligible for Medicaid. In addition to the $332 million, the state was allowed, for the first 2 years of the demonstration, to make up to $20 million—total federal and nonfederal funds—in payments that were previously made under a state-funded health program. Specifically, the state was approved to make payments to hospital trauma centers, hospital emergency departments, and rural hospitals across the state for clinical, professional, and operational costs. These payments were intended to help hospitals manage their uncompensated care costs. Prior to the demonstration, these were entirely state-funded payments that Arizona voters approved in 2002. Other features of Arizona’s demonstration included expanding coverage to two groups of children: children with family income at or below 175 percent of the federal poverty level (FPL) who were not otherwise eligible for Medicaid, and children up to age 19 with incomes between 100 and 200 percent of the FPL who had access to employer-sponsored health care coverage and were not otherwise eligible for Medicaid. Total funding—federal and nonfederal funds—available for the expansion was capped at about $77 million each year by the demonstration. The state was also allowed to extend the length of Medicaid coverage for postpartum women from the typical 60 days to 24 months. The total cost allowed for this program—federal and nonfederal—was $20 million.
Two of the purposes of the Texas section 1115 demonstration were to allow the state to expand its use of managed care statewide, and to authorize new supplemental payments through two new funding pools. Prior to the demonstration, the state provided services to most of the state’s Medicaid population on a fee-for-service basis. Under this system, hospitals provided services to Medicaid-covered individuals and then submitted bills to the state for reimbursement based on the state’s regular Medicaid payment rates. In addition, the state made DSH payments and UPL payments for hospital services provided on a fee-for-service basis. Under the demonstration, HHS approved two new types of supplemental payments to be distributed through two funding pools. In creating these pools, the state did not commit any state funds and instead relied on the contributions of eligible government entities for the nonfederal share of payments. Under one pool, called the Uncompensated Care (UC) pool, the state could obtain federal matching funds on provider payments totaling up to $17.6 billion over the 5-year term of the demonstration. Under the second pool, called the Delivery System Reform Incentive Payment (DSRIP) pool, the state could obtain federal matching funds on provider payments totaling up to $11.4 billion over the 5-year term of the demonstration. The demonstration significantly increased the amount of federal funding Texas could claim for the two new types of supplemental payments. For example, in fiscal year 2011—the year prior to the demonstration—the state claimed federal matching funds on about $2.6 billion in UPL payments. Under the demonstration, the state was authorized to receive federal matching funds on $4.2 billion in payments made in the first year of the demonstration and on $6.2 billion in payments made in each of the remaining 4 years of the demonstration. 
HHS officials told us that because the UC pool payments were created under the demonstration, they were not DSH or UPL payments and therefore were not subject to the federal requirements that govern those payments. Thus, as with Arizona, these payments were in addition to, and not limited by, the maximum cap on federal matching funds that was provided to the state under its DSH allotment. Officials told us the UC pool payments also were not considered to be UPL payments and could be made for services provided to individuals enrolled in managed care. According to HHS, the terms and conditions of the demonstration established requirements and limits for the UC pool payments. Among other things, the terms and conditions required that these payments be limited to individual providers’ uncompensated costs of delivering services to Medicaid beneficiaries and uninsured individuals. As with Arizona, because the UC pool payments were Medicaid payments, the payments for inpatient or outpatient hospital costs were required to be counted against the amount of DSH payments that an individual hospital could receive. The terms and conditions also allowed the state to make these Medicaid supplemental payments to a variety of providers for serving Medicaid and uninsured individuals. These providers included physician-practice groups, government ambulance providers, government dental providers, and rural health providers with no public hospitals. According to the terms and conditions, the state could not receive federal funding for expenditures from the DSRIP pool until key milestones were met, including: HHS’s approval of the state’s plan for and status of forming regional health care partnerships; identification of the public hospitals directing each of these partnerships; and development of a list of projects related to the four main areas noted above.
In addition, incentive payments from the pool would be based on successful completion of HHS-approved metrics submitted by the regional health care partnerships related to the four areas. As with the UC pool payments, DSRIP payments received for inpatient or outpatient hospital costs were required to be counted against the amount of DSH payments a hospital could receive under its hospital-specific DSH limit. The DSRIP payments were approved by HHS under the expectation that the state would be expanding its Medicaid coverage under PPACA; however, since the demonstration’s approval, the state has not confirmed that it intends to expand Medicaid to new populations allowed under PPACA. At the time Texas’s demonstration was approved, PPACA required all states to expand Medicaid coverage to a new mandatory category of low-income individuals, and states were eligible to receive enhanced federal funding for this population beginning in January of 2014. However, subsequent to this approval, the U.S. Supreme Court ruled that any state that chooses not to expand Medicaid coverage will not be subject to a penalty of losing Medicaid funding for the entire program and instead will only forgo the enhanced funding for that population, thereby making the expansion a choice for the states. In June 2013, a state law was enacted that would prohibit the state Medicaid agency from expanding Medicaid coverage. In general, HHS can withdraw authorities to claim federal funding for expenditures under demonstrations in certain circumstances, including if it determines that the approval no longer promotes the objectives of the Medicaid program. However, HHS officials stated that the delivery system improvements that will result from the DSRIP pool payments will benefit low-income and Medicaid populations, whether the state expands Medicaid or not. HHS does not plan to revisit the terms and conditions of the Texas demonstration as it relates to the DSRIP pool, even if the state does not expand Medicaid as provided under PPACA.
Indiana, the District of Columbia, Wisconsin, and Missouri were allowed to redirect all or a portion of their federal DSH allotment, primarily to cover populations made eligible for Medicaid under the terms of the demonstration. Three of the states—Indiana, the District of Columbia, and Wisconsin—were approved to use at least some of their DSH allotment solely for coverage expansion. Indiana was allowed to use a portion of its DSH allotment to pay for services for a new population of about 36,000 higher-income parents and childless adults. The District of Columbia demonstration expanded full Medicaid coverage to childless adults with incomes higher than the income level that would qualify them for Medicaid coverage under PPACA beginning in 2014—over 133 percent of the FPL. Individuals in this expansion population were previously covered under a local program for which the District of Columbia did not receive federal matching funds. Wisconsin was approved to expand coverage to an estimated 35,000 childless adults, providing them with benefits such as physician and hospital services. The fourth state, Missouri, was also approved to redirect a portion of its DSH allotment for new purposes established under the demonstration, including, among other things, providing coverage to previously ineligible populations. Missouri was also allowed to redirect a portion of its DSH funds for payments for other purposes, as authorized under the demonstration, including payments to health clinics that provided ambulatory services to uninsured and indigent populations in and near St. Louis; payments for administrative costs—nonhospital services—to a health commission, which would coordinate, monitor, and submit reports on the demonstration’s activities and make recommendations for payment allocations; a program that educated patients on primary care and proper use of the emergency room; and the initiation of a coverage expansion pilot that would provide a limited primary care benefit package and test the use of a voucher system to provide acute hospital services when needed by individuals in the pilot expansion population. The education program worked with uninsured individuals who came to emergency rooms, educated them on available resources for primary and non-emergent care, scheduled follow-up appointments with primary care providers, and arranged transportation to appointments. These services were coordinated while the individuals were in the emergency room. The costs—both state and federal—for this program could not exceed $175,000 per year for 2 years of the demonstration or $700,000 per year for 3 years. Rhode Island was approved to operate nearly its entire Medicaid program under a single global demonstration, consolidating its Medicaid state plan, a prior section 1115 demonstration, and multiple other Medicaid waivers. According to HHS, this consolidation would allow Rhode Island to seamlessly provide services to individuals who previously had to qualify to receive services, such as home and community-based services, through different programs that were governed by different rules and authorities. The state’s proposal indicated that this flexibility would allow the state to better manage the use of long-term care and increase home and community-based services. Through the demonstration, the state also was given the flexibility to make certain programmatic changes to its Medicaid program without having to follow more formal procedures.
For example, for certain changes, such as those that would otherwise need to be processed as an amendment to the Medicaid state plan or to the demonstration terms and conditions or that did not affect eligibility or benefits, the state was allowed to only notify HHS of the change. According to HHS officials, this was the first time the agency approved a test of this type of administrative flexibility. Idaho, Michigan, and New Mexico were required by federal law to discontinue using CHIP funds to cover childless adults, but were allowed to continue services for this population with Medicaid funds under a new section 1115 demonstration. These three states each applied for and received approval of new demonstrations to continue to provide coverage to childless adults, without expanding to any new populations. Under these demonstrations, various types of benefits were provided to the childless adult populations. For example, Idaho was approved to provide premium assistance for qualifying employer-sponsored insurance, while Michigan and New Mexico were approved to provide coverage for services. Michigan was approved to limit benefits to outpatient services, and New Mexico was approved to provide coverage for both inpatient and outpatient services. All 10 states we reviewed were approved to implement new ways of expanding coverage or imposing cost-sharing requirements on different Medicaid populations. Examples of these strategies are presented below. Arizona’s demonstration allowed the state, for a limited time, to charge expansion enrollees a fee when they missed a physician appointment in order to encourage proper use of medical services. The demonstration also allowed the state to impose cost sharing for nonemergency use of the emergency room, as well as higher cost sharing for brand-name drugs when a generic is available. Indiana’s demonstration allowed the state to establish a high-deductible health plan and health care spending account for uninsured adults enrolled for coverage under the demonstration.
Expansion enrollees must make specified contributions to their accounts, based on income levels, as a condition of continued enrollment. The accounts must be used by enrollees to pay for the cost of health care services until a deductible is reached; however, preventive services up to a maximum amount would be exempt from this requirement. The spending account was intended to provide incentives for participants to utilize services in a cost-efficient manner. This demonstration also allowed the state to impose an enrollment cap on the number of childless adult expansion enrollees for its health savings account program. Three states—Idaho, Michigan, and New Mexico—were approved to continue some coverage strategies from their previous demonstrations. For example, Idaho’s premium assistance demonstration required a 50 percent employer contribution toward the cost of the health benefit plan. Michigan’s demonstration allowed the state to continue to provide a limited benefit package that focused on outpatient services and required prior authorization for some of these services. Finally, New Mexico’s demonstration allowed the state to cap medical expenditures for each enrollee. Rhode Island was allowed to form and pay for entities dedicated to reviewing the needs of enrollees eligible for long-term care. According to the state’s proposal, this process facilitated the appropriate care setting by shifting care away from high-cost institutional settings when less costly home and community-based care was appropriate. The organization did this by helping enrollees decide how to manage their health care needs based on a distinction given to them as “highest need,” “high need,” or “preventive.” This designation allowed the state to determine which cost-effective, long-term services an enrollee could receive.
For example, those designated as highest need were approved to receive nursing home care, while those designated as preventive were approved to receive certain home health services. For 4 of 10 demonstrations we reviewed, HHS approved spending limits that were based on assumptions of cost growth that were higher than those reflected by the state’s historical spending and the President’s budget. In addition, in some cases the approved spending limits included costs in the base year that were hypothetical. If HHS had held spending limits in the four demonstrations to levels suggested by its policy, we estimate that the spending limits would have been $32 billion lower over the 5-year term of the demonstrations. We also found that HHS’s budget neutrality policy is outdated, because it does not reflect HHS’s current processes or provide assurances that data used for spending limits are reliable. HHS approved spending limits for the Arizona, Indiana, and Rhode Island demonstrations that used growth rates that exceeded benchmark rates and, in the case of Texas, included hypothetical costs in the base year spending. HHS officials reported that their policy and process allow for negotiations in determining spending limits, including adjustments to the benchmark policy. However, HHS’s policy does not specify criteria and methods for such adjustments or the documentation and support needed for adjustments. We found that the criteria and methods for making the adjustments in these states were not always clear or well supported. Our estimates show that, had HHS used benchmark growth rates and actual base year costs, the 5-year spending limits would have been almost $32 billion lower than what was actually approved. The federal share of the $32 billion reduction would constitute an estimated $21 billion. (See table 1.)
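The arithmetic behind these estimates can be illustrated with a short sketch. Under HHS policy, the benchmark rate is the lower of state historical growth and the nationwide projection, and a spending limit compounds base-year spending at the chosen rate over the demonstration term. The figures below are hypothetical, and the calculation is deliberately simplified (actual limits may be per person rather than aggregate, and enrollment projections are ignored here).

```python
# Simplified sketch of how a demonstration spending limit is built from a
# base-year amount and an annual growth rate. All figures are hypothetical.

def benchmark_rate(state_historical, national_projection):
    """HHS policy: use the lower of state-specific historical growth or
    the nationwide projection for the beneficiary group."""
    return min(state_historical, national_projection)

def five_year_limit(base_year_spending, growth_rate, years=5):
    """Aggregate limit: base-year spending grown at a fixed annual rate
    and summed over each year of the demonstration term."""
    return sum(base_year_spending * (1 + growth_rate) ** y
               for y in range(1, years + 1))

# Hypothetical state: $1 billion base-year spending for one group.
rate = benchmark_rate(state_historical=0.034, national_projection=0.060)
benchmark_limit = five_year_limit(1_000_000_000, rate)
approved_limit = five_year_limit(1_000_000_000, 0.060)  # a higher negotiated rate

# The gap between the two limits is the cost of the rate choice.
print(f"${(approved_limit - benchmark_limit) / 1e9:.2f} billion higher")
```

The sketch shows why even a modest difference in the annual rate, compounded and summed over 5 years, can add hundreds of millions of dollars to a limit built on a base of this size.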
HHS departed from its policy in selecting base year expenditures and benchmark growth rates for the approved Arizona spending limit without a clear rationale. Had actual base year expenditures and benchmark growth rates been used, the 5-year spending limit would have totaled about $26 billion less. HHS established the largest portion of the Arizona spending limit, per person spending, using an outdated baseline of projections of the costs of operating the program without the demonstration. The projections were based on the estimated costs of operating the program developed for the state’s previous demonstration—initially approved in 1982—and adjusted forward to 2011. HHS’s policy indicates that the spending limits for new demonstrations should be based on actual expenditures in the base year. Arizona’s actual expenditures in 2011—which would have been the base year had HHS approved a spending limit based on its policy—were much lower than the projected costs used by HHS as the basis of the spending limit. HHS officials said that the agency was not able to estimate the cost of the Arizona Medicaid program without the demonstration using recent actual expenditure data, because the state’s Medicaid program had operated under a demonstration since 1982. We found that HHS’s rationale for relying on 30-year-old projections of what the Medicaid program would have cost without the demonstration was unsupported. Actual expenditure data were available and would more accurately reflect state spending under Medicaid than the old projections, which assumed the state was operating without a Medicaid program. We estimate that HHS’s use of projected costs rather than actual expenditures for the base year increased the spending limit by about $22 billion. HHS’s approved spending limit for Arizona also used growth rates for certain populations that were higher than the benchmark growth rates suggested by agency policy.
The rates HHS used reflected national growth rates, which were higher than the state historical growth rates based on actual state historical expenditures. (See table 2.) Instead, HHS compared national growth rates to the projected state growth approved as part of the state’s previous demonstration, developed 30 years earlier. Officials also told us that they did not consider comparing the national growth rates to the state’s historical growth rates derived from actual state expenditures because it was unclear if these expenditures would have occurred absent the demonstration. However, without the demonstration, Arizona would not have a Medicaid program. In addition, the previously approved rate does not appear to be a valid substitute given the large difference between that rate and the growth rates indicated by actual historical expenditure data. We estimate that HHS’s use of the previously projected growth rates rather than actual state expenditures to derive the benchmark growth rate increased the approved 5-year spending limit by about $4.2 billion. For the Indiana demonstration, HHS approved a spending limit that was based on a projected growth rate that exceeded the benchmark growth rate without clear support for doing so. Had HHS used the benchmark growth rate, the demonstration’s 5-year spending limit would have been an estimated $416 million lower. While HHS developed growth rates based on only 3 of the 5 years of historical data, HHS documented that the most recent 2 years of data reflected large decreases in spending from the state’s increased use of managed care and that these changes in spending were a onetime effect that likely would not continue. We determined that HHS had adequately explained and documented its reason for making this adjustment. 
However, we found that HHS did not have adequate support for approving a 4.4 percent growth rate for all four populations included in the demonstration, when the historical data provided by the state showed benchmark growth rates that were lower than 4.4 percent for three of those populations. (See table 3.) HHS officials stated that a policy decision was made to use the average state historical growth rate because it was believed to be more likely to reflect future cost trends. Officials added that one of the individual populations had a zero growth rate historically, and HHS decided that regular health care inflation would nonetheless cause its costs to grow. Approving an average growth rate does not appear to be a valid substitute for the state historical growth rates for each population, given the significant difference for the adult caretaker populations and the fact that health care inflation was present during prior years and would have also affected population-specific growth rates. Had HHS used benchmark growth rates for each population, the spending limit would have been about $416 million lower than the approved spending limit. HHS approved an aggregate spending limit of about $12.1 billion for the Rhode Island demonstration based on a growth rate that exceeded the benchmark growth rate without clearly supporting the use of a higher growth rate. Had HHS used the benchmark growth rate, we estimate that the spending limit would have been about $772 million lower than the approved limit. According to HHS officials, the spending limit was developed using the 2006 base year average national growth rate of 7.8 percent; however, the state’s historical growth rate in the 5 years prior to applying for the demonstration was 7.0 percent.
HHS officials told us that though the state provided data for 2007 to be used as the base year, HHS instead chose 2006 as the base year, because negative trends in the 2007 data were not representative, did not appear reliable, and contained what they called outliers. So while HHS based the Rhode Island spending limit on the lower of the two growth rates for 2006, the agency could not provide clear support for using that base year. (See table 4.) In Texas, the HHS-approved spending limit included two types of hypothetical costs in the state’s base year expenditures. These costs represented higher payment amounts the state could have paid to providers, but did not actually pay. We estimated that, had the state only included actual expenditures as indicated by HHS’s policy, the spending limit would have totaled about $4.6 billion less. HHS’s decision sets a precedent that a state can increase a demonstration spending limit on the basis that it could have hypothetically paid Medicaid providers more than it actually chose to pay them, without a clear basis for doing so. First, Texas’s spending limit was based, in part, on hypothetical costs as opposed to actual incurred expenditures with respect to UPL payments that could be made for inpatient hospital services but were not actually made. Prior to applying for this demonstration, about 1.3 million of the state’s Medicaid population received inpatient hospital services under managed care, and the state did not make UPL payments for these services. In its spending limit estimate, however, Texas included costs for UPL payments and fee-for-service payments for beneficiaries previously receiving inpatient hospital services under a managed care delivery model. In its proposal, the state said it would take certain actions in response to directives from its state legislature. 
Specifically, the state said that if the demonstration was not approved by HHS it would carve out inpatient hospital services previously provided under managed care and pay for these services on a fee-for-service basis and also make UPL payments for these services. These actions would increase costs because fee-for-service payments and UPL payments to hospitals would greatly exceed capitation payments made to managed care plans. HHS officials stated that, given a directive of the Texas legislature, they believed the state would do so, and they allowed the estimated increased costs of such an arrangement to be factored into the spending limit even though Texas had not changed its payment model. As a result, $3.8 billion of the demonstration spending limit was based on what Texas estimated it could pay providers in the future but had not been paying prior to the demonstration. Second, Texas proposed including additional hypothetical costs in the base year expenditures by using the maximum amount of UPL payments the state could have paid rather than the actual amount of payments the state did make. In its proposal, the state documented that during each of the 4 years leading up to and including the base year, the state’s actual hospital inpatient UPL payments were less than the maximum amount the state could have paid. HHS officials noted that because the actual payments were accounting for an increasing percentage of the maximum UPL payments the state could have made, they allowed the state to use this larger amount. As a result of HHS’s decision, about $796 million of the demonstration spending limit was based on a hypothetical expenditure that did not represent actual expenditures of the state under its program. HHS approved spending limits for three demonstrations that redirected federal DSH funds, which were consistent with its policy. 
For the District of Columbia, Missouri, and Wisconsin demonstrations, HHS limited federal spending to the lower of the states’ DSH allotment or actual DSH expenditures in the year prior to the demonstrations. This approach helps provide assurances that the federal government will spend no more under the demonstrations than what it would have spent without them. For the District of Columbia and Missouri, HHS limited federal spending to a specific dollar amount, which represented a portion of the states’ DSH expenditures in the year prior to the demonstrations’ approvals. The Wisconsin spending limit was set at the total DSH allotment, which also represented the amount of expenditures in the year prior to the demonstration’s approval. The approved spending limit for the entire length of the demonstration was about $145 million for the District of Columbia, $105 million for Missouri, and $797 million for Wisconsin. The Idaho, Michigan, and New Mexico demonstrations were a unique type of section 1115 demonstration governed by requirements not applicable to other types of Medicaid section 1115 demonstrations. For these three states, HHS set spending limits using a process provided for under CHIPRA. These states applied and received approval for new section 1115 demonstrations, through which they continued to cover childless adults using Medicaid funds instead of CHIP funds. CHIPRA also defined the budget neutrality process for such demonstrations by identifying the base year and growth rates for demonstration spending limits. For each of the three demonstrations, HHS followed the budget neutrality procedures outlined in CHIPRA in setting the spending limit on an annual basis. The initial annual spending limits were based on expenditure projections of about $80,000 for Idaho, about $137 million for Michigan, and about $177 million for New Mexico. 
For the first demonstration year, spending limits were slightly less because the demonstrations operated less than a full year. HHS’s policy for setting spending limits for proposed demonstrations is inconsistent with its actual practices. To this extent, HHS’s internal controls are insufficient. According to Standards for Internal Control in the Federal Government, government processes, including management directives and administrative policies, should be clearly documented. In discussing documentation for HHS’s policy, published in 2001, officials indicated that it reflected HHS’s most current processes and policy on budget neutrality, but acknowledged that some aspects of the policy, as written, were no longer applicable to current processes. For example, HHS officials told us that the methods described for determining spending limits of demonstration extensions were no longer applied. In addition, while the policy requires that states submit 5 years of historical data in developing spending limits—and HHS officials told us that this is their preference—the agency’s current processes allow states to use data based on the state’s estimate of spending or enrollment. For example, if the 2 most recent years of expenditure data are not available because of delays in Medicaid claims processing, estimates for these years can be used. Officials indicated that if estimates are used instead of actual data, the state must explain any adjustments. But HHS officials did not have documentation for the current process or policy on when estimates are allowed, or the type of documentation of adjustments that is required. In addition, HHS’s policy does not require documentation or describe how the data used to set spending limits are reviewed to ensure reliability and accuracy. According to officials, the data used for projecting spending comes from each state’s Medicaid data system, and HHS generally does not test the accuracy of the data.
However, officials noted that the state systems may have their own quality and reliability checks. In October 2012, HHS introduced an optional waiver application template that included a standard budget neutrality form that states could use to submit 1115 demonstration applications. The template provides a standard format for states to submit commonly used data elements—such as historical expenditure and enrollment data, and the projected growth rates and per capita costs based on the state historical enrollment and costs—and a description of the sources and methods for obtaining state historical data. The budget neutrality form allows states to submit actual or estimated data. HHS officials told us that the new template does not establish any new budget neutrality policy, but instead was intended to make the application template more user-friendly than the prior template that was developed in conjunction with the agency’s policy published in 2001. The new budget neutrality form reiterates HHS’s 2001 policy that spending limits should be based on the lower of the state-specific historical growth rate or the estimated nationwide growth rate. The form does not provide additional guidance, for example, on the process and criteria for when estimated state historical data rather than actual state historical expenditure data are used in setting spending limits, or when deviations from the benchmark policy are allowed and how they should be documented and supported. The fiscal challenges facing the federal government require prudent stewardship of federal Medicaid resources. While section 1115 Medicaid demonstrations serve as an important mechanism for states to implement projects that allow for innovation while promoting Medicaid objectives, HHS policy requires that they not expose the federal government to additional financial liability.
The Secretary of Health and Human Services has an important responsibility for ensuring that comprehensive demonstrations will not increase federal costs above what would be incurred without these demonstrations. HHS’s long-standing budget neutrality policy for these demonstrations, on its face, recognizes that states should not be given access to additional federal funding at the same time they are provided with greater program flexibility. However, neither the policy, nor HHS’s implementation of it, ensures the prudent stewardship of federal Medicaid spending. After examining HHS’s approach for approving spending limits of recently approved demonstrations, we have three main concerns regarding the budget neutrality policy and process. First, HHS’s policy is not reflected in its actual practices and, contrary to sound management practices, is not adequately documented. Second, the policy and processes lack transparency regarding criteria and the supporting evidence required to justify deviations from historical spending and established benchmark growth rates. We recognize that forecasting spending during changing economic times is challenging and a state’s circumstances may warrant such deviations. Nonetheless, we believe that approved spending limits that are based on baselines and growth rate expectations that greatly deviate from HHS’s current benchmarks should be well-supported and documented. HHS’s policy is currently silent as to when deviations are allowed and does not require that reliable evidence be provided to justify deviations. Transparency around the basis for spending limit decisions is important not only for assurances of the ongoing fiscal integrity and sustainability of the program, but also for assurances of consistency of approvals among states. 
Third, the policy as implemented allows methods for establishing spending limits that we believe are inappropriate for such purposes, such as allowing states to include hypothetical costs in the baseline for spending limits. The second and third concerns parallel those we have raised in earlier reports. In 2008, because HHS disagreed that changes to the budget neutrality policy and review process were needed, we suggested that Congress consider requiring increased attention to fiscal responsibility in the approval of section 1115 Medicaid demonstrations and require the Secretary of Health and Human Services to improve the demonstration review process by, for example, clarifying the criteria for approving spending limits and documenting and making public the basis for such approvals. Thus far Congress has not acted on this suggestion. On the basis of the findings in this report, we believe the Secretary needs to take additional actions to ensure that HHS’s budget neutrality policy reflects current practices and that the spending limits for the Texas and Arizona demonstrations are appropriate, well supported, and based on clear criteria. To improve the transparency of the process for reviewing and approving spending limits for comprehensive section 1115 demonstrations, we recommend that the Secretary of Health and Human Services take the following two actions:

1. update the agency’s written budget neutrality policy to reflect actual criteria and processes used to develop and approve demonstration spending limits, and ensure the policy is readily available to state Medicaid directors and others; and

2. reconsider adjustments and costs used in setting the spending limits for the Arizona and Texas demonstrations, and make appropriate adjustments to spending limits for the remaining years of each demonstration.

We provided a draft of this report to HHS for comment.
In its written comments, HHS acknowledged that it has not always communicated its budget neutrality policy broadly or clearly, but stated it has applied its policy consistently. The Department suggested that recent steps to increase transparency—such as publishing a new section 1115 application template and implementing a federal public input process—reflect updated policy on how HHS sets spending limits and ensures demonstrations are budget neutral. While the application template may contain guidance on some of the data elements commonly used to demonstrate budget neutrality, we do not believe that it addresses how HHS reviews the applications or the criteria used for setting spending limits. We have revised our report to clarify how this template falls short of clarifying HHS’s budget neutrality policy. HHS did not otherwise identify any written policy it has issued since 2001 either during the course of our review or in its comments. HHS did not concur with our recommendation that its budget neutrality policy should be updated to reflect the actual criteria and processes used to develop and approve demonstration spending limits, and ensure that the policy is readily available. HHS stated that our findings that four states’ spending limits would have been lower had the agency followed its policy were flawed. HHS said that we used only a subset of the best available data that the Department used to assess budget neutrality and that we relied on an outdated policy issued in 2001. We disagree with these assertions. It is important to note that, to do our analysis, we relied on extensive documentation and information that HHS officials specifically provided us as the basis for the selected states’ budget neutrality determinations. For example, we obtained the spreadsheets with the data and calculations that HHS used to determine each state’s demonstration spending limits.
We reconciled these spreadsheets with the spending limits and documentation in each state’s demonstration approval and had numerous discussions with HHS officials to confirm our understanding of the data and the basis for the final spending limits. At no time did officials tell us that they had provided us only with a subset of the data used to assess budget neutrality or cite additional information or data that we had not considered. HHS’s assertion that we relied on an outdated budget neutrality policy that did not reflect the Department’s current policy also conflicts with information provided to us during the course of this review. On multiple occasions, we discussed with officials the policy used to establish demonstration spending limits, including the applicability of the 2001 written policy. HHS officials told us—both verbally and in writing—that the 2001 written policy generally reflected the Department’s current policy toward budget neutrality. They told us that this document was the most recent document capturing the budget neutrality policy. However, as we described in the draft report, HHS officials told us that some parts of the 2001 written policy were outdated. The Department did not have any plans to update the 2001 policy. HHS did not concur with our recommendation that it should make adjustments to the spending limits for the remaining years of the Arizona and Texas demonstrations. In its comments, HHS said that the adjustments and costs it used were justified. However, HHS did not provide any new information or support beyond what was considered and discussed in the draft report. For example, HHS did not respond to our concerns that Texas’s spending limit included $3.8 billion in costs that the state could hypothetically pay providers in the future but did not actually pay them prior to the demonstration. We continue to believe HHS’s decisions were not clear or well supported.
HHS also stated that it had significantly strengthened the accountability in Texas by requiring HHS approval before federal matching funds can be drawn down for state expenditures made under the demonstration, and by instituting robust reporting requirements. We believe that improved oversight of actual spending occurring under the demonstrations does not lessen the need for establishing sound spending limits. HHS’s comments are reproduced in appendix III. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. From January 2007 through May 2012, the Department of Health and Human Services (HHS) received 62 comprehensive section 1115 Medicaid demonstration applications from 38 states, 3 of which were subsequently withdrawn by the states. HHS approved 45 of the remaining applications, disapproved 1, and another 13 were still pending completion of review as of May 31, 2012. About two-thirds, or 31, of the 45 approved applications were for extensions of existing demonstrations, while 8 of the 13 still under review were for new demonstrations. For the 46 reviews that HHS completed, reviews took from 47 days to almost 4 years, and averaged 323 days from the date of application to the date of the review decision. About 72 percent of the reviews took a year or less to complete. (See table 5.) 
Officials with HHS stated that the nature of demonstration reviews is unpredictable because of various factors outside HHS’s control that can influence the review. Further, HHS generally does not have a set time frame within which applications must be reviewed. There are a number of factors that may have affected the review times for the demonstrations we reviewed. For example, prior to applying for a new demonstration, states may submit a concept paper to HHS to receive technical assistance, advice, and other guidance. There may then be extended dialogue between a state and HHS about the plans included in a concept paper. The process can provide states with an initial indication of the acceptability of their proposal and thereby facilitate the application process. According to the officials, in cases where states have submitted such papers, HHS reviews may be shorter. In addition, the purpose, scale, and complexity of demonstration applications vary, and will result in the need for more or less discussion between the state and HHS. Similarly, the completeness of the application can affect review times. Applications may have lacked important details, such as data on how the program will be implemented, its effect on relevant beneficiary populations, or how budget neutrality is achieved. In these cases HHS may request extensive clarification, which adds to the review time because of the time states need to respond. Also, HHS officials said that state legislative activity can alter the proposal during development, or midcourse, which would extend HHS review times. This appendix summarizes key information on 10 new comprehensive section 1115 demonstrations approved from January 2007 through May 2012.
Key information presented includes: a summary of specific details about the purpose of the demonstration; the population covered; the term of each demonstration; the estimated number of people covered in the first and last year of the demonstration; and the approved spending limit over the term of each demonstration. Because the scope and purpose of demonstrations vary by state, the amount and detail of the information provided for each demonstration also varies.

Demonstration Term: October 2011–September 2016
Estimated Number of People Covered: first year: 1,129,869; last year: 1,806,984
Approved Spending Limit: Over $74.4 billion

Arizona requested to terminate its previous section 1115 demonstration, operating since 1982, in order to eliminate coverage for one of its adult populations covered previously, and to implement an enrollment freeze on its childless adults, effective in July 2011. The population that was to be eliminated was covered through the Medical Expense Deduction program, which was for adults with income in excess of 100 percent of the federal poverty level (FPL) who have qualifying health care costs that reduce their income to or below 40 percent of the FPL. The enrollment freeze applied to adults without dependent children with family income up to and including 100 percent of the FPL. This new demonstration continued to provide coverage for the Medicaid population through managed care. In addition, the state was approved to establish a state funding pool for making supplemental payments, totaling over $300 million per year, to providers that cover Medicaid and uncompensated care costs, and to make hospital payments for trauma and emergency services through a program that was originally a state-funded initiative. The demonstration also expanded coverage to certain children up to age 19 and women who lose Medicaid pregnancy coverage.
The state increased personal financial responsibility through cost sharing by implementing the use of penalties for certain enrollees that miss scheduled physician appointments, and encouraging appropriate utilization of emergency room care by imposing cost sharing for improper use of the emergency room. The state also imposed higher cost sharing for brand name drugs when a generic is available.

Demonstration Term: November 2010–December 2013
Estimated Number of People Covered: first year: 4,815; last year: 11,121
Approved Spending Limit: $145 million

The District of Columbia was approved to redirect Disproportionate Share Hospital (DSH) funds in order to provide full Medicaid benefits to adults ages 21 through 64 with incomes between 133 percent and 200 percent of the FPL. Benefits under the demonstration were provided through a mandatory managed care delivery system. Most anticipated enrollees were covered previously through a local program that provided more limited benefits.

Demonstration Term: January 2010–September 2014
Estimated Number of People Covered: first year: 350; last year: 350

Idaho was approved under a new demonstration to continue to provide premium subsidies to nonpregnant childless adults age 18 and above with incomes at or below 185 percent of the FPL. The demonstration allows a premium subsidy up to $100 per month per enrolled adult—a qualifying employee or the spouse of the employee—toward the individual’s share of the employer-sponsored health insurance premium. Participating employers are required to make a 50 percent contribution toward the cost of the health benefit plan.

Demonstration Term: January 2008–December 2012
Estimated Number of People Covered: first year: 669,894; last year: 848,919
Approved Spending Limit: $10.6 billion

Indiana received approval to operate two distinct health insurance programs.
This demonstration preserved the program previously in place for Medicaid-eligible individuals and expanded coverage to uninsured adults; both programs were run through a managed care delivery system. The first program, called the Hoosier Healthwise Program, continued coverage for current Medicaid-eligible individuals. The second program, called Healthy Indiana Plan (HIP), expanded coverage for uninsured adults, not currently eligible for Medicaid. The expansion was partially funded using redirected DSH funding. The HIP provided a high-deductible health plan and an account similar to a health savings account for uninsured adults including low-income custodial parents and caretaker relatives of Medicaid and State Children’s Health Insurance Program (CHIP) children, and uninsured noncustodial parents and childless adults ages 19 through 64 with incomes between 22 and 200 percent of the FPL. Participation in HIP is voluntary, but all enrollees are required to receive medical care through the high-deductible health plans. HIP enrollees are required to help fund the $1,100 deductible by contributing to a savings account. These accounts are used by enrollees to pay for the cost of health care services until the deductible is reached; however, preventive services up to a maximum amount are exempt from this requirement. The benefits available under HIP are limited to $300,000 annually, and $1 million over a lifetime. The demonstration also included cost sharing depending on income.

Demonstration Term: January 2010–September 2014
Estimated Number of People Covered: first year: 74,379; last year: 90,665

Michigan was approved under a new demonstration to continue providing a limited ambulatory benefit package through a managed care delivery system to low-income nonpregnant childless adults ages 19 through 64 years with incomes at or below 35 percent of the FPL.
The benefit package included outpatient hospital services, physician services, diagnostic services, pharmacy, mental health and substance abuse services. Enrollees may be required to receive prior authorization before accessing certain ambulatory services.

Demonstration Term: July 2010–December 2013
Estimated Number of People Covered:
Approved Spending Limit: $105 million

Missouri was approved to redirect its DSH funding to pay for four main activities in the St. Louis area: (1) health clinics that will provide services to the uninsured; (2) a health commission to manage activities related to the demonstration; (3) a program to educate and encourage patients to use primary care rather than the emergency room; and (4) a coverage expansion pilot that provides limited primary care benefits and a voucher system for acute hospital services to a population in the St. Louis area.

Demonstration Term: January 2010–September 2014
Estimated Number of People Covered:
Approved Spending Limit: $1.3 billion

New Mexico was approved under a new demonstration to continue to provide coverage for nonpregnant childless adults. The eligible population is nonpregnant childless adults ages 19 to 64 years with incomes up to and including 200 percent of the FPL who are not eligible for Medicaid. Enrollees receive a comprehensive benefit package through a managed care delivery system in which premiums and copayments are required. These premiums include up to $35 for higher-income childless adults. The demonstration was designed to provide health care coverage to uninsured individuals who are unemployed, self-employed, or employed by an employer with 50 or fewer employees.
Demonstration Term: January 2009–December 2013
Estimated Number of People Covered: first year: 192,778; last year: 206,540
Approved Spending Limit: $12.1 billion

Rhode Island was approved to operate its entire Medicaid program under a demonstration and to continue to provide coverage to populations that were previously covered under several distinct waivers. Rhode Island was allowed to redesign its Medicaid program to provide cost-effective services that will ensure beneficiaries receive the appropriate services in the least restrictive and most appropriate setting. For example, the state was allowed to form and pay for entities dedicated to reviewing the needs of enrollees eligible for long-term care. This organization helps enrollees decide how to manage their health care needs based on a designation given to them as “highest need,” “high need,” or “preventive.” This designation allows the state to determine which cost-effective long-term services an enrollee can receive. For example, those designated as highest-need individuals are approved to receive nursing home care while those designated as preventive are approved to receive certain home health services. The state was also approved to include other services under the demonstration, such as parenting and childbirth education classes, tobacco cessation services, and window replacement for lead-poisoned children.

Demonstration Term: December 2011–September 2016
Estimated Number of People Covered: first year: 3,872,680; last year: 4,767,680
Approved Spending Limit: $142 billion

The Texas demonstration allowed the state to both expand the use of a managed care delivery system to existing covered populations and to preserve supplemental payments through the establishment of funding pools. The state was allowed to claim approximately $29 billion over the 5-year term of the demonstration on these pool payments.
One pool was used to reimburse providers for uncompensated care costs, and the other was used to provide incentive payments to participating hospitals that implement and operate delivery system reforms. The state also was approved to cover children’s primary and preventive Medicaid dental services through a capitated statewide dental services program.

Demonstration Term: January 2009–December 2013
Estimated Number of People Covered: first year: 25,129; last year: 40,800
Approved Spending Limit: $797 million

Wisconsin obtained approval to redirect its DSH funding to expand coverage to childless adults, who are defined as individuals between the ages of 19 and 64 years with income that does not exceed 200 percent of the FPL. The program included a variety of features: a requirement for participants to complete a health needs assessment—used to match enrollees with health maintenance organizations and providers that meet the individual’s specific health care needs; tiering of health plans based on quality of care indicators; and enhanced online and telephone application tools that allow childless adults to choose from a variety of health insurance options.

In addition to the contact named above, Tim Bushfield, Assistant Director; Susan Barnidge; Shirin Hormozi; Carolyn Feis Korman; Drew Long; Tom Moscovitch; Pauline Seretakis; and Hemi Tewarson made key contributions to this report.

High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013.
Medicaid: More Transparency of and Accountability for Supplemental Payments Are Needed. GAO-13-48. Washington, D.C.: November 26, 2012.
Medicaid: States Reported Billions More in Supplemental Payments in Recent Years. GAO-12-694. Washington, D.C.: July 20, 2012.
Medicaid Demonstration Waivers: Recent HHS Approvals Continue to Raise Cost and Oversight Concerns. GAO-08-87. Washington, D.C.: January 31, 2008.
Medicaid Demonstration Waivers: Lack of Opportunity for Public Input during Federal Approval Process Still a Concern. GAO-07-694R. Washington, D.C.: July 24, 2007.
Medicaid Waivers: HHS Approvals of Pharmacy Plus Demonstrations Continue to Raise Cost and Oversight Concerns. GAO-04-480. Washington, D.C.: June 30, 2004.
Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004.
SCHIP: HHS Continues to Approve Waivers That Are Inconsistent with Program Goals. GAO-04-166R. Washington, D.C.: January 5, 2004.
Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns. GAO-02-817. Washington, D.C.: July 12, 2002.
Medicaid Section 1115 Waivers: Flexible Approach to Approving Demonstrations Could Increase Federal Costs. GAO/HEHS-96-44. Washington, D.C.: November 8, 1995.
Medicaid: States Use Illusory Approaches to Shift Program Costs to Federal Government. GAO/HEHS-94-133. Washington, D.C.: August 1, 1994.
Medicaid, a $436 billion federal and state health care program for low-income individuals and families, is a significant and growing expenditure. Section 1115 of the Social Security Act authorizes the Secretary of Health and Human Services to waive certain Medicaid requirements and allow otherwise uncovered costs for demonstration projects that are likely to promote Medicaid objectives. By HHS policy, these demonstrations should be budget neutral, that is, not increase federal spending over what it would have been if the state's existing program had continued. States estimate what their spending would have been without the demonstration, and HHS approves a spending limit based on projected spending. GAO was asked to review HHS approval of recent Medicaid section 1115 demonstrations. GAO examined (1) the purpose of new demonstrations, and (2) the extent to which HHS's policy and process for reviewing proposed demonstration spending provide assurances that federal costs will not increase. For 10 new comprehensive demonstrations approved from January 2007 through May 2012, GAO reviewed application, approval, and budget neutrality documents provided by HHS; calculated estimated spending limits; and interviewed HHS officials. The 10 new demonstrations GAO examined expanded states' use of federal funds and implemented new coverage strategies. Arizona and Texas established funding pools to make new supplemental payments beyond what they could have made under traditional Medicaid requirements and receive federal matching funds for the payments. All 10 demonstrations were approved to use different coverage strategies or impose new cost sharing requirements, including limiting benefits or imposing deductibles for certain populations. The Department of Health and Human Services' (HHS) budget neutrality policy and process did not provide assurances that all recently approved demonstrations will be budget neutral.
For 4 of 10 demonstrations GAO reviewed, HHS approved spending limits that were based on assumptions of cost growth that were higher than its benchmark rates, and that, in some cases, included costs states never incurred in their base year spending. HHS's benchmark growth rates are the lower of the state's recent growth rates or projections for Medicaid program growth nationwide. For example, HHS approved a spending limit for Arizona's demonstration using outdated information on spending (1982 data projected forward) that reflected significantly higher spending than what the state's Medicaid program had actually cost. For Texas, HHS approved a spending limit using a base year that included billions in costs the state had not incurred. GAO found limited support and documentation for the higher-than-benchmark limits HHS approved. If HHS had held the 4 demonstrations' spending to levels suggested by its policy, the 5-year spending limits would have been an estimated $32 billion lower than what was approved; the estimated federal share of this reduction would be about $21 billion. For 6 other demonstrations, the approved spending limits reflected the states' actual historical costs or criteria that were specified in law, which HHS followed. In examining HHS's current written budget neutrality policy, GAO found that the policy is outdated and does not include a process for assuring the reliability of the data used to set spending limits. GAO has previously suggested that Congress require HHS to improve its budget neutrality process, in part, by improving the review criteria and methods, and by documenting and making clear the basis for approved limits. In addition to these suggestions, GAO believes HHS needs to take further actions to address the findings in this report. GAO recommends that HHS update its budget neutrality policy and reexamine spending limits for the Arizona and Texas demonstrations. HHS disagreed with GAO's recommendations. 
GAO believes these steps are needed to improve the budget neutrality process.
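The projection at the heart of the budget neutrality test amounts to compound-growth arithmetic: base-year spending is projected forward at an assumed annual growth rate, and the projected totals become the spending limit. The sketch below is illustrative only; all dollar figures and growth rates are hypothetical and are not drawn from any actual demonstration.

```python
# Hypothetical illustration of a budget-neutrality spending limit:
# base-year spending projected forward at an annual growth rate.
# All figures are invented for illustration only.

def spending_limit(base_year_spending, annual_growth_rate, years):
    """Sum projected spending over the demonstration period."""
    return sum(
        base_year_spending * (1 + annual_growth_rate) ** year
        for year in range(1, years + 1)
    )

base = 10_000_000_000                        # hypothetical base-year spending
benchmark = spending_limit(base, 0.05, 5)    # hypothetical benchmark rate (5%)
approved = spending_limit(base, 0.08, 5)     # higher-than-benchmark rate (8%)

# A few percentage points of extra assumed growth compound into billions
# of dollars of additional allowable spending over a 5-year period.
print(f"benchmark 5-year limit: ${benchmark:,.0f}")
print(f"approved 5-year limit:  ${approved:,.0f}")
print(f"difference:             ${approved - benchmark:,.0f}")
```

The same compounding explains why GAO's recalculation at benchmark rates produced multi-year limits tens of billions of dollars below those HHS approved.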
|
Both SEC and IRS have had an interest in the tax services, particularly tax shelter services, that accounting firms provide taxpayers. In terms of SEC, according to the Sarbanes-Oxley Act of 2002, before an SEC registrant company’s auditor can provide non-audit services such as tax services to the company, the company’s audit committee must approve them. Effective May 6, 2003, SEC adopted rules required by the act to strengthen conflict of interest standards and clarify the relationship between the independent auditor and the audit committee. In adopting these rules, SEC said it was enhancing the independence of accountants that audited financial statements and prepared related reports to be filed with SEC. It also said that accounting firms could provide tax services to their audit clients, subject to each client’s audit committee pre-approval, without impairing their independence. However, accountants would impair their independence if they represented audit clients before a tax court, a district court, or a federal court of claims. Further, according to the rules, audit committees should carefully scrutinize an accountant’s involvement in a transaction if the accountant initially recommended the transaction, the transaction’s sole business purpose might be tax avoidance, and its tax treatment might not be supported in the Internal Revenue Code and related regulations. The Sarbanes-Oxley Act established the Public Company Accounting Oversight Board (PCAOB) and authorized it to establish standards and rules for auditor independence. In exercising its responsibilities under the act, PCAOB determined it was appropriate to consider the impact on auditor independence of providing tax services to audit clients. In July 2004, it convened a roundtable discussion on the effect of tax services provided by auditors on auditor independence. 
Participants at the roundtable, including representatives of accounting firms, public companies, investors, and regulators, discussed many different topics, with suggestions including more PCAOB guidance to audit committees and a rule barring auditors from providing at least some tax services to audit clients. On December 14, 2004, PCAOB proposed new ethics and independence rules, with comments due by February 14, 2005, and an effective date of no earlier than October 20, 2005. The proposed rules would treat an accounting firm registered with PCAOB as not independent in certain instances for purposes of doing a financial statement audit and for other purposes. For example, the firm would be considered not independent if it provided services related to planning or giving an opinion on the tax treatment of a listed (described later) or confidential transaction under Department of the Treasury regulations. Similarly, the firm would be considered not independent if it provided these services for a transaction that was based on an aggressive interpretation of the applicable tax laws and regulations. Such a transaction is one that satisfies three criteria: it was initially recommended by a tax advisor; it has tax avoidance as a significant purpose; and it “is not at least more likely than not to be allowed under applicable tax laws.” The proposal would also treat the firm as not independent if the firm provided tax services to officers who oversee an audit client’s financial reporting. It would not prohibit the audit firm from providing the audit client with routine tax return preparation and tax compliance, general tax planning and advice, international assignment tax services, and employee personal tax services. 
In addition, the proposal would expand on current SEC pre-approval requirements to require an auditor seeking audit committee pre-approval of tax services to give the committee certain information, discuss with the committee the services’ potential effects on the firm’s independence, and document the discussion’s substance. Treasury regulations address IRS’s oversight of tax shelters. Under the regulations, there are six categories of transactions for which investors must report, or disclose, the transactions into which they have entered, and promoters must maintain lists of investors who have entered into the transactions. At IRS, the Office of Tax Shelter Analysis (OTSA) maintains a database containing information on tax shelter investors and promoters, including accounting firms. Created in February 2000 to centralize and coordinate IRS’s response to abusive tax shelter activity nationwide, OTSA includes in its database the amount of potential federal tax loss estimated by the taxpayer or IRS to result from both listed and nonlisted transactions. These losses, which also represent benefits to the taxpayer, may or may not be disallowed by IRS upon further review of each transaction. IRS considers listed transactions, which must be reported on tax returns sent to IRS, to be abusive. The Joint Committee on Taxation has described listed transactions as having a tax avoidance purpose, with the tax benefits subject to disallowance under existing law. For a transaction to be listed, IRS must issue a notice, regulation, or other form of published guidance informing taxpayers of the details of the transaction. In October 2004, IRS had 30 types of listed transactions, a number that had grown more quickly in recent years than in earlier years. Nonlisted transactions generally are transactions reportable to IRS that may have some characteristics of abusive shelters but are not, and may never be, listed. 
At times, IRS questions whether some nonlisted transactions should be moved into the category of listed transactions. To address our first two objectives, relating to Fortune 500 companies, officers, and directors obtaining tax shelter services from their company auditors, we matched data from two sources: S&P and IRS. We acquired specific S&P data elements for the 497 companies on the April 2003 Fortune 500 list that, according to S&P, either were publicly owned or had to file with SEC for another reason, such as having publicly traded debt. The data elements included the company’s employer identification number, the names of company officers and directors, and the name of the company’s auditor for each year from 1998 through 2003. The number of Fortune 500 companies for which we actually received S&P data varied; for instance, we received names of directors and officers for 471 companies and employer identification numbers for 492. As shown in table 1, the officers for 441 of the 471 companies were those listed in the company’s proxy statement section on most highly compensated officers, as filed with SEC for either 2000 or 2002. We used the years 2000 and 2002 because those were years when the federal government was significantly enhancing its presence to counter tax shelter activity that might have been going on for years. Because S&P did not have similar top officer information for the other companies in the Fortune 500, or 2000 or 2002 director information for any of them, it gave us the names reflecting current officers and directors as of March 2004—the date we obtained the data. Obviously, some of the March 2004 officers for 30 companies and the March 2004 directors for all 471 might have been different from those working in 2000 or 2002, which was closer to the time when most of the tax benefits related to the shelters were taken. 
Consequently, our analysis of the March 2004 information omits any officers and directors who left the relevant companies after 2002. We matched the S&P data to tax shelter information in IRS’s OTSA database as of May 28, 2004. IRS’s database included information disclosed to or discovered by IRS on companies, individuals, and other taxpayers who used tax shelters. It also included information on as many as three entities, including accounting firms, which IRS said promoted the shelter to the investor. We considered both listed and nonlisted transactions in the database because from an auditor independence standpoint, in both cases the promoters were involved with transactions that IRS or taxpayers believed needed to be reported to the federal government. To determine to what extent the 497 companies obtained tax shelter services, we matched the employer identification numbers in the S&P and IRS databases. When we found a match, we checked the promoter information in the IRS database against the audit firm information in the S&P database to see if the same accounting firm was listed as a promoter for a particular transaction and as the company’s auditor for one or more years that the shelter benefited the company. Although we do not know for sure that a company obtained tax shelter and auditing services from an accounting firm at exactly the same time, we considered it a match when at least one of the tax years for which the company received a tax benefit matched a fiscal year from 1998 through 2003 for which the accounting firm was the company’s auditor. We did this because IRS did not have information on exactly when taxpayers obtained tax shelter services, and 1998 was the year before the Department of the Treasury reported that the proliferation of corporate tax shelters was unacceptable. We analyzed information from other years to provide context. 
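The matching steps just described, and the officer name check described next, can be sketched in a few lines of code. The record layouts, field names, and the name-comparison rule below are our own simplifications for illustration; they do not reflect the actual structure of the S&P or IRS data.

```python
# Illustrative sketch of the matching methodology: join the two data
# sources on employer identification number (EIN), then flag a company
# when the accounting firm IRS lists as a shelter's promoter also
# audited the company in at least one year the company received a tax
# benefit. All field names and records are hypothetical.

sp_records = [  # hypothetical S&P data: auditor by fiscal year
    {"ein": "12-3456789",
     "auditors": {1998: "Firm A", 1999: "Firm A",
                  2000: "Firm B", 2001: "Firm B"}},
]
irs_records = [  # hypothetical IRS tax shelter data
    {"ein": "12-3456789", "promoter": "Firm B", "benefit_years": {2000, 2002}},
]

def auditor_promoted_shelter(sp, irs):
    """True when the listed promoter audited the company in a year the
    company received a benefit from the shelter."""
    if sp["ein"] != irs["ein"]:
        return False
    return any(sp["auditors"].get(year) == irs["promoter"]
               for year in irs["benefit_years"])

matches = [(sp["ein"], irs["promoter"])
           for sp in sp_records
           for irs in irs_records
           if auditor_promoted_shelter(sp, irs)]

def same_person(name_a, name_b):
    """Loose check of the kind used to verify officer/director matches:
    identical first and last names, with middle initials compared only
    when both names supply one."""
    first_a, *mid_a, last_a = name_a.split()
    first_b, *mid_b, last_b = name_b.split()
    if (first_a, last_a) != (first_b, last_b):
        return False
    if mid_a and mid_b:
        return mid_a[0][0] == mid_b[0][0]
    return True
```

In this hypothetical data, the single company matches because Firm B both promoted the shelter and audited the company in 2000, one of the benefit years; the name check accepts "John Q. Smith" against "John Quincy Smith" but rejects a differing middle initial.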
We also matched the names of company officers and directors in the S&P data to the names of the tax shelter investors in the IRS database. Whenever we found a match, we tried to verify whether the same person was actually involved, as opposed to two people with the same first and last names. If the person appeared to be the same (for example, had the same middle initial), we matched the promoter name for that individual and, similar to what was just described, the tax benefit dates in the IRS database to the auditing firm of the individual’s company for 1998 through 2003 in the S&P data. Our matching methodology did not allow us to detect instances in which a spouse or other relative of the officer or director was the tax shelter investor or instances in which the investing entity was a partnership or other unit formed by the officer or director. As part of our work, we tested the reliability of IRS’s database and the data we received from S&P. For the IRS database, we reviewed related documentation, interviewed knowledgeable officials, and did electronic testing. For the information received from S&P, we reviewed S&P information on its controls over the data and verified sample data to publicly available documents obtained from SEC’s Web site or elsewhere. For both types of data, we found that the required data elements were sufficiently reliable for the purposes of our work. However, as we will describe later, the IRS database had important limitations and therefore our results are imprecise in reflecting the universe of companies, officers, and directors that might have obtained tax shelter services from the companies’ auditors. To deal with our last two objectives, those on case study companies obtaining tax shelter and other tax services from their auditor and funding these services for officers and directors, we selected publicly traded companies among the Fortune 500 to study in depth. 
Independent of any IRS information, we reviewed the April 2003 Fortune 500 list and chose companies that were headquartered in three geographically diverse parts of the country and whose audit committee chair worked or lived in one of those areas. We excluded companies whose audit committee chairs had been contacted in other recent GAO studies. Of the 23 companies that met our criteria, 8 agreed to provide information in response to a structured interview guide we used. For 5 of these 8 companies, we interviewed the audit committee chair. For the other 3, we relied only on written answers we received from the companies. Because we studied so few companies and because of the method of selection, we cannot say that the responses we received represent any larger group of companies. Further, the companies that we did study might have agreed to participate because they had special reasons for wanting to share their tax services experiences with us. Although they were not representative of companies overall, we believe that the 8 companies illustrate some of the changes that have occurred in recent years related to auditors providing tax services. We did our work between December 2003 and January 2005 in accordance with generally accepted government auditing standards. As shown in table 2, 61 Fortune 500 companies used a tax shelter that was promoted by an accounting firm that was their external auditor for one or more years from 1998 through 2003 in which the company received benefits from the tax shelter. The 61 companies had 82 transactions worth about $3.4 billion in estimated potential tax losses over many years for transactions that were generally reportable on tax returns sent to IRS. They are out of 492 Fortune 500 companies for which S&P supplied employer identification numbers and for which we searched for a match in the May 28, 2004 version of IRS’s tax shelter database. Table 3 puts this information into various contexts. 
For instance, including the 61 companies just described, 67 companies with about $4.1 billion in tax shelter benefits obtained tax shelter services from a firm that was their auditor at some time, but not necessarily in the same year the company received some or all of the related tax shelter benefits. We include the 6 additional companies because some analysts have questioned the propriety of accounting firms promoting tax shelters even to companies they are not currently auditing. For example, recent press reports described a company that employed an accounting firm as its auditor sometime after the year for which the company claimed a tax shelter benefit from the shelter provided by the accounting firm. According to the reports, the auditor began auditing financial statement items resulting from the tax shelter that it had previously provided, a task the auditor said was within SEC rules. For further context, table 3 shows that including the 67 companies, 114 Fortune 500 companies and almost 4,400 total taxpayers obtained tax shelter services from accounting firms, including firms they had never used as auditors but might use one day. The estimated potential tax losses involved were about $9 billion for the 114 companies for any year in IRS’s database and about $24 billion for all taxpayers. Although we did not have enough information to know whether taxpayers obtained fewer tax shelter services from accounting firms as time went on, several accounting firms testified in November 2003 before the Permanent Subcommittee on Investigations of the Senate Committee on Governmental Affairs that they had scaled back their tax shelter activities in general. Including the 114 companies, 207 Fortune 500 companies, regardless of who their promoters were, used tax shelters accounting for about $56 billion in estimated potential tax losses, about 44 percent of it related to tax years 1998 through 2003. 
To break out the $56 billion further, of the 492 Fortune 500 companies for whom S&P supplied employer identification numbers, 139 appeared in IRS’s database to have engaged in listed transactions with estimated potential tax losses of about $16 billion. The number of companies engaged in nonlisted transactions estimated to be potentially worth about $40 billion was 129, and because some companies were involved in both kinds of transactions, the number engaged in either listed or nonlisted transactions was 207. The 207 Fortune 500 companies’ transactions are part of IRS’s total tax shelter database. As of May 28, 2004, for all taxpayers, the database contained listed and nonlisted transactions with estimated potential tax losses of about $129 billion, about half of it related to tax years 1998 through 2003. Most of the dollar amounts related to nonlisted, as opposed to listed, transactions, and some of the amounts shown as listed might represent transactions that taxpayers entered into before IRS had designated them as listed. About a third of the approximately 15,000 transactions in the database had an accounting firm listed as a promoter, and these transactions accounted for about 18 percent of the $129 billion estimated potential tax loss. We and IRS know the numbers in this section are not precise. Some of the imprecision could make the count of transactions and associated estimated potential losses too high, and some could make them too low. Accordingly, the numbers should be used with caution and should be understood and used as general estimates of the degree to which companies might have obtained tax shelter services from external auditors and of the possible dollar magnitude of the associated tax benefits, and thus possible decreased federal revenues. 
The numbers could be overestimates for the following reasons:
- The number of abusive transactions and their dollar amounts might have been or might still be reduced upon further examination, appeal, litigation, or other action.
- The database included some reported transactions that turned out to be nonabusive. Additional transactions might later be found to be nonabusive.
- According to an IRS official, the database included some tax shelter transactions more than once—at the level of a flow-through entity, such as a partnership, and again at the level of the taxpayers, for example, the individual partners—with the relevant dollar amounts thus appearing twice. This limitation would not apply to information dealing only with Fortune 500 companies’ use of tax shelters.

The numbers could be underestimates for the following reasons:
- The IRS database did not include promoters for about a quarter of the transactions of the 207 Fortune 500 companies that used tax shelters. In these cases, the tax shelter might have been obtained using a promoter that the taxpayer did not identify to IRS, or, according to an IRS official, a very few taxpayers not working for firms designing tax shelters might have developed their own tax shelter. In total, the database did not include promoters for 2,095, or about 14 percent, of its transactions as of May 28, 2004.
- The database did not reflect estimated potential tax losses for about a third of the transactions of the Fortune 500 companies using an accounting firm to obtain tax shelters, or for about a quarter of the transactions of the total number of Fortune 500 companies obtaining tax shelters. According to an IRS official, this was because taxpayers did not include estimated losses on documents submitted to IRS. The official added that a possible reason for taxpayers not disclosing such information was that nondisclosure penalties did not yet exist.
- The database did not reflect estimated potential tax losses for about two-thirds of the 15,040 total transactions it contained. These potential losses could range from small to large amounts; however, their distribution is unknown.

In addition, as of May 28, 2004, IRS had not yet entered into the database all of the tax shelter information that it possessed even though the information included data pertaining to transactions done years ago. The database only included information on abusive or possibly abusive transactions that had been disclosed to or discovered by IRS, and as alluded to earlier, the number of listed transactions had continually grown from even before OTSA was established. Adding to the uncertainty, the tax loss estimates in the database vary from being IRS officials’ recommended taxes, based on examining some transactions, to taxpayer judgments regarding potential losses in cases where examinations had not been done. According to an IRS official, taxpayer-provided information may represent estimates or incomplete information. Despite these data limitations, the numbers we present in this report provide a general indication of the extent to which Fortune 500 companies did use their external auditor for tax shelter services. In addition, they include larger numbers showing that many Fortune 500 and other taxpayers obtained tax shelter services using their own and other accounting firms, and many obtained tax shelters without using accounting firms at all. As shown in table 4, one or more officers or directors of 17 Fortune 500 companies used tax shelters that were promoted by an accounting firm that was the Fortune 500 company’s external auditor during at least one of the years that the officer or director benefited from the tax shelter. The years in question were 1998 through 2003, and the potential tax loss from these transactions was about $100 million. 
The officers or directors are from 471 Fortune 500 companies for which we had data on officers or directors from S&P that we matched against data in the May 28, 2004 version of IRS’s tax shelter database. To place the officers and directors of the 17 companies into context, in 33 companies, a transaction of at least one officer or director had an accounting firm listed as a promoter, and in 57 of them, at least one officer or director obtained a tax shelter regardless of whom he or she used as a promoter. The number of officers and directors involved in even the 57 companies translated to less than one percent of the officers and directors of Fortune 500 companies that we matched against IRS’s database. These numbers relating to officers and directors of Fortune 500 companies are subject to the limitations described previously for the numbers related to the companies themselves. For example, IRS’s database did not list promoters or estimated potential tax losses for every transaction. In addition, according to an IRS official, disclosures from individuals, partnerships, and S corporations, which were first due to IRS for filing year 2003, arrived in great numbers beginning in April 2004, and many were not yet entered into the IRS database as of May 28, 2004. According to their representatives, all eight of our case study companies adopted or refined policies or practices in 2002 or 2003 requiring their audit committees to pre-approve tax services to be obtained or governing the tax services provided. At least some of these changes were in response to the Sarbanes-Oxley Act. Examples of changes made include requiring that all engagements with the external auditor be subject to approval and directing more work to other providers. As stated earlier, these companies are not representative of other companies because of their small number, the way we selected them, and their unknown motivation for participating with us. 
However, they do illustrate that the provision of tax services has changed in recent years for at least some companies. According to company representatives, all eight case study companies obtained tax services from their auditors during the period from 2000 through 2003. Services provided ranged from company to company, sometimes involving, for instance, tax return preparation, tax return review, advice on foreign tax transactions, or consultations on negotiations. Only two of the companies told us of obtaining tax shelter services, and one of them obtained the services before 2000. Company representatives told us about how specific services the company acquired changed over time. In fiscal year 2004, one company’s audit committee rejected auditor involvement in a particular tax strategy out of concern that the auditor could potentially be in the position of auditing its own work. Another company told us of discontinuing an arrangement for obtaining certain tax services from its auditor because of the arrangement’s undesirable appearance. A third company told us of transferring some tax services from its auditor to other providers in 2003 and 2004 because the audit committee began requiring a compelling reason to use its auditor for the services. In spite of these changes, in stating general impressions, six case study companies said that having their audit firm provide tax services brought efficiency and effectiveness gains due to the firm’s understanding of the company and its business. The two case study companies that obtained tax shelter services in the past were among the three companies of the eight that said they did not have a current policy prohibiting obtaining tax shelter services. However, one of the two companies said that it did not plan to obtain tax shelters in the future. According to both companies, IRS challenged the tax shelter claimed, and the issue had not yet been resolved. 
Although six of our case study companies reported that officers or directors at some time since 2000 used the auditor for some tax services, such as tax return preparation, officials told us that four of the companies in 2002 or 2003 adopted policies prohibiting officers from using the auditor for the services in the future. One company cited auditor independence reasons for removing as of 2003 its requirement that a particular executive use the company’s auditor. In contrast to the situation with tax services in general, none of the companies reported officers or directors obtaining tax shelter services from the company auditor. Three companies we contacted said they did not have a policy prohibiting officers from obtaining tax services from the company auditor. However, even among those, one company knew of no officers who had actually used the auditor from 2000 onward for tax services. Another company allowed using the auditor but annually surveyed its senior officers about perceived or actual conflicts of interest. The third said its executives could use the auditor but had done so only in limited instances. For 2001, the year we asked about in which officers or directors were still using the company auditor for tax services, two case study companies that paid for these services reported specific amounts: one paid the auditor about $8,000 and the other about $13,000. Although two other companies reported paying for these services, they did not provide us with specific amounts. Both fell into the lowest non-zero choice of range we provided—greater than $0 but less than or equal to $1 million. The other four of the eight companies we studied reported paying their auditors nothing in 2001 for tax services for officers or directors. In general and not restricted to 2001, five case study companies reported setting aside funds for, or annually paying for, tax services that officers or directors obtain from their auditors or others. 
In those cases, company figures varied from a range of $30,000 to $50,000 in one case to $150,000 in another. In written comments on a draft of this report, the Commissioner of Internal Revenue said it was comprehensive and provided an accurate picture of the factors affecting IRS’s ability to have an accurate tax shelter database. He particularly pointed to indications in the draft report that not all the information in the database might relate to abusive tax avoidance transactions. The Commissioner also said that IRS changes and recent legislation will enable IRS to address the database limitations we note, several of which IRS had already identified and was working to overcome. He added that IRS was creating a new database and considering whether various IRS forms should be revised to improve the quality of information IRS receives. In addition, he noted that IRS supported the December 2004 PCAOB action to revise auditor ethics and independence rules. The full text of the Commissioner’s comments is reprinted in appendix I. As discussed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of the report. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or at brostekm@gao.gov or Signora May at (404) 679-1920 or at maysj1@gao.gov. Jeffrey Arkin, Lawrence Korb, MacDonald Phillips, Tina Smith, James Ungvarsky, and Walter Vance were key contributors to this report. 
|
Recent legislative and regulatory changes have addressed the relationship between auditor-provided tax services and auditor independence. At this time, the federal regulatory community is exploring further changes. To contribute to the discussion surrounding these changes, GAO's objectives were to determine (1) according to Internal Revenue Service (IRS) data, how many Fortune 500 companies obtained tax shelter services from their auditor; (2) according to IRS data, in how many Fortune 500 companies did the auditor provide the services to individual company officers or directors; and (3) whether selected Fortune 500 case study companies changed how they obtain tax services from their auditor in recent years. For the first two objectives, GAO used IRS and Standard and Poor's data after finding they were sufficiently reliable for our work. GAO counted a company, officer, or director as obtaining a tax shelter service from the company's external auditor when an auditor that IRS identified as promoting a tax shelter also audited the company in at least one year that the shelter was in effect. For the third objective, independent of any IRS information, GAO selected case studies on the basis of geographic location and previous GAO contact. The companies are illustrative in nature and not intended to be representative of other companies. IRS data available on tax shelter services sometimes predate legislative and regulatory changes reflecting a heightened focus on auditor independence. However, both during this earlier period covered by some of the data and also following the recent changes, auditors were allowed to provide tax services, including tax shelter services, to firms they audited. According to IRS data, 61 Fortune 500 companies obtained tax shelter services from their external auditor during 1998 through 2003 for transactions generally reportable on tax returns sent to IRS. 
IRS considered some reportable transactions abusive, with tax benefits subject to disallowance under existing law, and other transactions to possibly have some traits of abuse. Estimated multi-year potential tax revenue lost to the federal government from the 61 companies' auditor-related transactions was about $3.4 billion (about $1.8 billion in categories IRS considered abusive). In 17 companies, at least one officer or director used the company's auditor to obtain individual tax shelter services. These numbers are imprecise because they have important limitations. These limitations, such as some transactions in IRS's database without tax shelter providers listed, are fully discussed in this report. Commenting on a draft of this report, IRS said that ongoing changes and recent legislation will enable it to address the data limitations noted. According to their representatives, all eight case study companies adopted or refined policies or practices in 2002 or 2003 for pre-approving tax services or governing the tax services provided, such as who would provide them. All eight reported using their auditor for tax services during 2000 through 2003. Two told GAO of obtaining tax shelter services from their auditor, but one of them obtained the services before this period. Six of the eight reported officers or directors obtaining individual tax services from the auditor at some time since 2000, with four disallowing the practice later. None reported officers or directors using the auditor for individual tax shelter services.
|
Federal agencies spent more than $235 billion in fiscal year 2001 to buy goods and services ranging from weapon systems and medical equipment to information technology services and the operation of government facilities. This is an 11 percent increase over the amount spent in fiscal year 1997. This growth is expected to continue as federal agencies address emerging threats and acquire enhanced information technology. The significance of contracting in the federal government is reflected in the sheer magnitude of this spending and the degree to which it consumes agencies’ discretionary resources. Overall, contracting for goods and services accounted for about 24 percent of the government’s discretionary resources in fiscal year 2001. However, contract spending consumed between 34 percent and 73 percent of the discretionary resources available to the four federal agencies with the largest acquisition spending. Federal contracting increased by 11 percent during the 5-year period we studied, from about $213 billion in fiscal year 1997 to over $235 billion in fiscal year 2001. As shown in figure 1, DOD is the largest agency in terms of contracting dollars spent, accounting for about two-thirds of the government’s total spending on goods and services. In fiscal year 2001, DOD contracted for more than $152.6 billion of goods and services, or more than twice the amount spent by the next nine largest federal agencies combined. The three military departments—the Air Force, Army, and Navy—each spend more than the largest civilian agency, the Department of Energy (DOE). From fiscal years 1997 through 2001, purchases of goods increased by 17 percent, due in large measure to DOD’s increased spending on weapon systems and other defense-related items. Overall, however, agencies continued to purchase far more services than goods. 
Purchases of services grew by about 11 percent, as agencies modernized their information systems and obtained various professional, administrative, and management support services. Nine of the 10 agencies we reviewed increased their spending on services. The other agency, the National Aeronautics and Space Administration (NASA), experienced a 4 percent decrease, reflecting significant reductions in spending for research and development and for professional, administrative, and management support services. As shown in figure 2, agencies varied in the degree to which they contracted for services. For example, DOE spent more than 98 percent of its contract dollars on services in fiscal year 2001, while the Department of Agriculture’s (USDA) spending for services accounted for only about 30 percent of its acquisition spending. The degree to which individual agencies contract for services underscores the importance of ensuring that service acquisitions are managed properly. For example, we noted in May 2001 that some service procurements were not being conducted efficiently, putting taxpayer dollars at risk. Last year, we reported that leading commercial companies had taken a strategic approach to acquiring services, which in turn resulted in significant cost savings and service improvements. Taking a strategic approach involves a range of activities—from developing a better picture of what the company is spending on services, to taking an enterprisewide approach to procuring services, to developing new ways of doing business. Based in part on our report, the National Defense Authorization Act For Fiscal Year 2002 required that DOD develop enhanced data collection and management processes for services acquisitions. Additionally, as will be discussed in greater detail in the next section, Congress and the administration are encouraging the use of performance-based approaches to acquiring services as a way of improving the acquisition of services. 
Agencies rely to various degrees on private vendors to provide the goods and services needed to carry out their missions and support their operations. Overall, contracting for goods and services accounted for 24 percent of the government’s discretionary resources in fiscal year 2001. For four agencies included in our review—DOE, NASA, GSA, and DOD—the acquisition function is central to accomplishing their mission-related goals. About 76 percent of DOE’s funds, for example, are spent on the management and operation of over 30 government-owned laboratories and other nuclear facilities. NASA contracts account for about 72 percent of its discretionary budget resources, and one of GSA’s primary missions is to help federal agencies procure goods and services. However, spending on contracts accounted for less than 23 percent at each of the other six agencies in our review, as shown in figure 3. Further growth in contract spending, at least in the short term, is likely given the President’s request for additional funds for defense and homeland security, agencies’ plans to update their information technology systems, and other factors. For example, the President’s fiscal year 2004 budget request reflects steady increases in DOD’s discretionary budget authority, as well as increases in the budgets of other agencies involved in homeland security. Additionally, the President’s budget request reflects increased investment in information technology both for new systems and for related support. Further, the administration’s emphasis on competitive sourcing could increase agencies’ reliance on services provided by the private sector. Competitive sourcing in the federal government is conducted under guidance provided in OMB Circular A-76, which outlines procedures for determining whether to perform a commercial activity with government employees or by contract. 
Additionally, the circular provides policy for standardizing how and when an agency competes a commercial activity with the private sector. OMB’s current 2-year goal is to compete 15 percent of the federal government’s commercial-type positions. This effort could result in significant increases in the number of service contracts, given that in the past the private sector has won over half of the competitions. The past decade has seen the emergence of several procurement trends that have changed the way the government acquires goods and services, as Congress and the administration have sought ways to simplify the acquisition process, shorten procurement times, reduce administrative burdens and costs, and improve acquisition outcomes. In particular, federal agencies are increasingly relying on contracts awarded by other federal agencies to obtain goods and services and have turned to using government purchase cards for many of their low dollar value procurements. The growth in these procurement methods has been dramatic, and is apparent in nearly every agency we reviewed. Additionally, agencies have begun to increase their use of commercial contracting methods and performance-based acquisition approaches. As we have reported previously, taking full advantage of these methods and approaches requires that agencies have adequate guidance and training, a strong internal control environment, and data that can be used by agency management to make informed decisions. Our work at selected agencies has found that these conditions have not always been present, thereby contributing to agencies missing opportunities to achieve savings, reduce administrative burdens, and improve acquisition outcomes. Federal agencies are increasingly using contracts and acquisition services offered by other agencies, a fact that is most notably demonstrated in the growth of GSA’s Federal Supply Schedule and governmentwide acquisition contracts (GWAC). 
These interagency contracts are being used in a variety of situations, from those in which a single agency provides limited contracting assistance to a more comprehensive approach in which the provider agency’s contracting officer handles all aspects of the procurement. Agencies charge users of these contracts a fee to cover administrative expenses. GSA’s schedule program enables federal agencies to quickly acquire goods and services, thereby helping them to, among other objectives, reduce lead times and lower administrative costs. GSA does this by awarding contracts to vendors and making these contracts available for use by other agencies. GWACs are intended to facilitate purchases of information technology-related products and services, such as network maintenance and technical support, systems engineering, and integration services. As shown in figure 4, sales under the schedule program have more than tripled over the past 5 years, increasing from $4.3 billion in fiscal year 1997 to about $14.4 billion in fiscal year 2001. Agency officials at each of the agencies we reviewed reported increases in their use of the schedules program over the past 5 years, driven largely by increased purchases of information technology and professional, administrative, and management support services. As shown in table 1, DOD and GSA, the largest users of the schedules program, accounted for about 75 percent of schedule sales in fiscal year 2001. GSA’s increased share is largely attributable to the growth of GSA’s Federal Technology Service, which places orders under the schedule for information technology services and equipment on behalf of other federal agencies. However, orders placed by the Federal Technology Service are counted as spending by GSA, rather than as spending by the federal agency that will ultimately receive the service or equipment. 
Agency officials also reported that their use of GWACs increased considerably over the 5-year period between fiscal years 1997 and 2001. For example, USDA reported an increase from about $5.1 million to $44 million, and Treasury reported an increase from $92 million to $155 million. Officials at the other eight agencies we reviewed reported that they also increased their use of GWACs; however, they could not provide detailed GWAC information because this spending was not an integral part of their management information systems. While GSA officials indicated that they have not modified the FPDS to collect specific information on GWAC spending, several officials at other agencies stated that they either are collecting or will begin to collect additional information on their agencies’ GWAC use. While use of these interagency contracting methods can allow agencies to meet their needs quickly, our past work has shown that agencies are not adequately adhering to guidelines on competition. Further, recent agency Inspector General reports noted that DOD, VA, and NASA personnel did not consistently follow procedures intended to promote competition or ensure fair and reasonable prices when using these interagency contract methods to acquire information technology services, medical equipment, or research and development projects. Purchase card spending has increased significantly governmentwide. This program provides federal agencies a low-cost and efficient means for quickly obtaining goods and services directly from vendors. Under the Federal Acquisition Regulation, the commercial purchase card is now the preferred method of paying for micropurchases. The purchase card may also be authorized to be used in greater dollar amounts and may be used to make payments under existing contracts. As figure 5 shows, governmentwide purchase card use increased from $5.3 billion in fiscal year 1997 to $13.8 billion in fiscal year 2001—a 160 percent rise. 
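The growth rates cited in this report are simple percent changes computed on inflation-adjusted obligations. As a minimal sketch, using the rounded purchase card figures above (these are the reported totals in billions of constant fiscal year 2001 dollars):

```python
def percent_change(start, end):
    """Percent change from a starting value to an ending value."""
    return (end - start) / start * 100

# Governmentwide purchase card spending, in billions of constant
# FY 2001 dollars, as reported in the text (rounded figures).
fy1997_spending = 5.3
fy2001_spending = 13.8

rise = percent_change(fy1997_spending, fy2001_spending)
print(f"Purchase card spending rose about {rise:.0f} percent")  # about 160 percent
```

The same calculation underlies the other growth figures in the report, such as the agency-level purchase card increases of 45 percent to 344 percent.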
Increases in purchase card use in the agencies we reviewed ranged from 45 percent to 344 percent. As figure 6 shows, agencies used purchase cards to varying degrees. DOD was the largest user of purchase cards, spending about $6.1 billion in fiscal year 2001. VA was the largest civilian agency user of the purchase card program, spending about $3.8 billion. VA officials noted that due to VA’s organizational structure and its continuous need for disposable medical and surgical supplies, purchase cards are one of VA’s key procurement techniques and are used extensively as a payment mechanism. During the past 2 years, we found that significant internal control weaknesses in several agency purchase card programs allowed cardholders to make fraudulent, improper, abusive, or questionable purchases that resulted in lost, missing, or misused government property. Agencies are responding to the recommendations that we and others have made regarding internal control weaknesses. For example, the Navy has reduced the number of cardholders by more than 50 percent, from 59,000 in June 2001 to 25,000 by March 2002, thus improving the likelihood of effective program management. Additionally, DOD has begun implementing new training and approval processes. For example, DOD implemented automated controls during fiscal year 2002 to help with monitoring credit limits, cardholder reconciliation, and approving officials’ review of monthly statements. Further, OMB requires that agencies provide quarterly reports on their efforts to improve the management oversight of government-issued purchase and travel cards. In recent years, there has been a significant increase in agencies’ use of streamlined procedures to acquire commercial items. Many procurement reform advocates have recommended that federal agencies purchase commercial items to save money and reduce acquisition time, rather than pay companies to develop unique items for the government’s use. 
The Federal Acquisition Streamlining Act established a preference for the acquisition of commercial items. Because commercial items are subject to competitive market forces, they may be acquired using streamlined solicitation and evaluation procedures generally provided for under part 12 of the Federal Acquisition Regulation (FAR). For example, contracting officers may reduce the time needed to solicit bids and award contracts by combining certain steps in the solicitation process, using streamlined evaluation techniques, and eliminating certain administrative requirements. In fiscal year 2001, the purchase of commercial items using FAR part 12 procedures accounted for 19 percent of the spending for goods and services by federal agencies, up from 9 percent 5 years earlier. From fiscal year 1997 through fiscal year 2001, governmentwide use of part 12 procedures increased by 148 percent. As shown in table 2, 9 of the 10 agencies in our review increased their use of part 12 procedures by at least 100 percent. Significant growth in service contracts has led Congress and the administration to encourage greater use of performance-based service contracting to achieve greater cost savings and better outcomes. Under performance-based approaches, the contracting agency specifies the outcome or result it desires and lets the contractor decide how best to achieve the desired outcome. Performance-based contracts offer significant benefits, such as encouraging contractors to innovate and find cost-effective ways of delivering services. In fiscal year 2001, agencies reported that 24 percent of their eligible service contracts, by dollar value, were considered performance based. There was wide variation in the extent to which agencies used performance-based contracts. As figure 7 shows, 3 of the 10 agencies in our review fell short of OMB’s goal that 10 percent of their eligible service contracts be performance based in fiscal year 2001. 
We recently found that some agencies achieved only mixed success in incorporating four basic performance-based attributes into their contracts. These attributes include describing desired outcomes rather than how the services should be performed, setting measurable performance standards, describing how the contractor’s performance will be evaluated, and establishing positive and negative incentives, as appropriate. Our review raised questions as to whether agencies have an adequate understanding of performance-based contracting and how to take full advantage of this approach. Agency officials themselves pointed to the need for better guidance on performance-based contracting and better criteria for identifying which contracts should be called “performance based.” In response to our recommendations, the Office of Federal Procurement Policy is developing new guidance to help agencies improve their use of performance-based contracting. Over the last decade, the federal acquisition workforce has had to adapt to changes in staffing levels, workloads, and the need for new skill sets. Procurement reforms have placed unprecedented demands on the acquisition workforce. For example, contracting specialists are required to have a greater knowledge of market conditions, industry trends, and the technical details of the commodities and services they procure. Governmentwide data indicate that in fiscal year 2001 both the number of acquisition workforce employees and the number of contract actions declined slightly from fiscal year 1997 levels. However, the extent to which these changes occurred varied from agency to agency. Ensuring that agencies will have the right people with the right skills to successfully meet the increasingly complex demands expected in the future has become a priority at most of the agencies we reviewed. 
While agencies still face many hurdles, our recent work has found that most agencies have taken steps to address their strategic human capital planning challenges. As of September 2001, the federal acquisition workforce included about 103,000 individuals, reflecting an overall 5 percent decline from 1997 levels. As shown in table 3, changes in the acquisition workforce varied by agency; for example, 6 of the 10 agencies we reviewed lost between 2 percent and 9 percent of their acquisition workforces, while the other 4 agencies increased their acquisition workforces by between 8 percent and 11 percent. DOD experienced the largest personnel decrease in its acquisition workforce, declining by 9 percent to just over 68,500 personnel. Changes in the acquisition workforce have been accompanied by changes in the types of actions being managed. The total number of contract actions processed in fiscal year 2001 decreased 6 percent from fiscal year 1997 levels. As shown in table 3, our analysis indicates that while most agencies are processing fewer smaller actions—those valued at less than $25,000—most agencies are also managing an increased number of larger actions. Agencies have made far greater use of purchase cards for making their smaller dollar purchases, which accounts for the declining rate of smaller dollar actions. While we have not evaluated how these changes have affected federal agencies as a whole, the DOD Inspector General noted in 2000 that the increased contract workload was adversely affecting contract oversight by creating imbalances and backlogs in closing out completed contracts. The changes in staffing levels and workload come at a time when the role of the government’s acquisition staff is changing considerably. Federal agency officials expect their acquisition personnel to analyze business problems and help develop strategies in the early stages of the acquisition process. 
Industry and government experts recognize that a key to making a successful transformation toward a more sophisticated acquisition environment is having the right people with the right skills. To accomplish this, leading public organizations in the United States and abroad have found that strategic human capital management must be the centerpiece of any serious change management initiative. Strategic management of human capital is a key governmentwide initiative in the President’s Management Agenda. One aspect of strategic human capital planning is succession planning, where an agency identifies its future needs in terms of workforce skills and numbers. Our prior work has shown that when workforce reductions do not consider future needs—such as a staff reduction at DOD during the 1990s—the result is a workforce that is not balanced with regard to experience and skill sets. The need for planning is underscored by the fact that, similar to human capital challenges across a variety of occupation categories, all agencies face the prospect of losing many of their skilled acquisition personnel over the next 5 years. As shown in figure 8, about 38 percent of acquisition personnel governmentwide are either already eligible to retire or will be eligible by September 30, 2007. At DOD and DOE—the two largest contracting agencies in our review—39 percent of the acquisition workforce will be eligible to retire by fiscal year 2008; at the other eight agencies, between 30 and 36 percent of their current workforces will be eligible to retire. Our recent reviews of how agencies are addressing their future acquisition workforce needs found that all of the agencies we reviewed have made progress. For example, the agencies have either published or drafted human capital strategic plans for their overall workforces or for their acquisition workforces, and some are revamping training, recruitment, and retention programs to address future workforce needs. 
However, these agencies have encountered challenges, in part due to shifting priorities, missions, and budgets that make it difficult to predict with any certainty the specific skills and competencies their acquisition workforces will need. Further, many agencies simply lack good data on their workforces, such as size and location, knowledge and skills, and attrition and retirement rates. This information is critical to mapping out the current condition of the workforce and deciding what needs to be done to ensure that each agency has the right mix of skills and talent for the future. Effectively managing federal contracts is essential to ensuring that the more than $235 billion spent annually through contracts provides high-quality goods and services that meet the users’ needs in a timely fashion. While managing spending effectively is always a key management responsibility, the need for effective management is more acute in agencies that rely heavily on acquiring goods and services to carry out their missions or support their operations. Changes in what the government buys, its contracting approaches and methods, and its acquisition workforce combine to create a dynamic acquisition environment. The purpose of introducing or expanding streamlined purchase methods, such as GWACs, purchase cards, and supply schedules, was to enhance contracting efficiency, reduce administrative burdens, lower transaction costs, and shorten procurement times. However, our work has found that the lack of proper training, guidance, and internal controls can increase an agency’s procurement risk and lead to reduced public confidence. While agencies are taking corrective actions to address these concerns, many actions remain in the early stages of implementation. We requested comments on a draft of this report from each of the agencies we reviewed, as well as from the Office of Federal Procurement Policy. Each agency provided comments, generally via electronic mail. 
Agency officials concurred with our analyses and provided technical comments, which we incorporated as appropriate. Some agencies noted that their internal data systems contained procurement data that differed from that contained in the Federal Procurement Data System or contained workforce data that differed from that reflected in the Central Personnel Data File. For example, DOE and VA officials noted that their systems indicated higher use of performance-based contracting than the data contained in the Federal Procurement Data System. We have noted these differences where appropriate in the report. Additionally, HHS and DOT officials noted that their definitions of their acquisition workforces differed from what we used. Because there is no commonly accepted definition of the acquisition workforce, we elected to use a consistent definition, as discussed in our scope and methodology, to better enable cross-agency comparisons. We are sending copies of this report to the Director, Office of Management and Budget; the Administrator, Office of Federal Procurement Policy; the Secretaries of Agriculture, Defense, Energy, Health and Human Services, Transportation, Treasury, and Veterans Affairs; the Administrator of General Services; the Administrator, National Aeronautics and Space Administration; the Attorney General; and interested congressional committees. We will also provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Major contributors to this report were Don Bumgardner, Chad Holmes, Kevin Heinz, Robert L. Ackley, Julia Kennon, Gary Middleton, John W. Mingus, Jr., John Van Schaik, Greg Wilmoth, and Suzanne Melancon. If you have any questions about this report, please contact me at (202) 512-4841 or Timothy J. DiNapoli at (202) 512-3665. To identify spending, procurement methods, and acquisition workforce trends, we judgmentally selected 15 data elements. 
These elements are not intended to be all-inclusive or exhaustive; rather, they reflect data relevant to key issues and trends identified in prior GAO reports or that provide basic information valuable to understanding an agency’s procurement function and approach. We reviewed these elements with senior procurement officials at each of the agencies we reviewed; these officials generally agreed that such elements provided useful and relevant information for gauging their agencies’ procurement activities. We obtained data on these elements from the General Services Administration’s Federal Procurement Data Center (FPDC), agency officials, and the Office of Management and Budget (OMB). FPDC administers the Federal Procurement Data System (FPDS), which is the federal government’s central database on contracting actions. FPDS contains detailed information on contracting actions over $25,000, including contract type, amount obligated, the types of goods or services purchased, and various vendor characteristics. FPDS contains less detailed information on actions of $25,000 or less. Because FPDC relies on federal agencies for procurement information, these data are only as reliable, accurate, and complete as the information reported by the agencies. We did not independently verify the information contained in the database. However, in 1998, FPDC conducted an accuracy audit, which showed that the average rate of accurate reporting in the FPDS database was 96 percent. We used data from FPDS that covered the 5-year period from fiscal year 1997 through fiscal year 2001, the last year for which complete data were available. We subsequently adjusted the data provided by DOD to FPDS to correct for a fiscal year 2001 reporting error. We obtained additional information from agency procurement officials for certain data elements that were not readily available from FPDS, such as their agency’s use of governmentwide acquisition contracts. 
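Unless otherwise noted, the dollar figures in this report are expressed in constant fiscal year 2001 dollars. A constant-dollar conversion of this kind can be sketched as follows; note that the deflator index values and the nominal amount below are hypothetical, purely for illustration, and are not the actual price indices GAO used:

```python
def to_constant_dollars(nominal, deflator_year, deflator_base=100.0):
    """Convert a nominal amount to base-year (constant) dollars
    using price-index (deflator) values, with the base year = 100."""
    return nominal * deflator_base / deflator_year

# Hypothetical deflator with fiscal year 2001 as the base year (FY 2001 = 100);
# these index values are illustrative, not the ones used for this report.
deflators = {1997: 92.0, 2001: 100.0}

nominal_fy1997 = 196.0  # hypothetical nominal obligations, in billions of dollars
constant_fy2001 = to_constant_dollars(nominal_fy1997, deflators[1997])
print(f"${nominal_fy1997:.1f}B nominal (FY 1997) is about "
      f"${constant_fy2001:.1f}B in constant FY 2001 dollars")
```

Expressing all years in a common base year is what makes the 5-year growth comparisons in this report meaningful, since nominal figures would otherwise mix real growth with inflation.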
Additionally, we obtained data from the Federal Aviation Administration, which is not required to submit information to the FPDC. We reflected these data in the governmentwide analyses, as well as in the Department of Transportation’s profile. We also asked each agency to provide a description of its key procurement initiatives undertaken during the past 2 years. We did not independently verify the information provided or assess the degree to which agency-reported initiatives achieved their objectives. We collected information on each agency’s discretionary resources from the Office of Management and Budget’s MAX Budget Information System, which is used to collect, validate, analyze, model, and publish federal budget information. Discretionary budget resources reflect the budget amount that the agency is appropriated for a current fiscal year plus the budget authority that the agency carries over from prior fiscal years. Unless otherwise noted, all figures were adjusted for inflation and represent constant fiscal year 2001 dollars. To determine trends in the acquisition workforce, we analyzed data obtained from the Office of Personnel Management’s Central Personnel Data File (CPDF), which is the governmentwide human resources reporting system. The data we used reflect information on permanent employees reported to the CPDF as of September 30 of the particular year. The CPDF relies on agencies to ensure that the data are timely, accurate, complete, and edited in accordance with OPM standards. There is no standard definition of what constitutes an agency’s acquisition workforce, and agencies have defined their workforces in various ways. To provide consistency and comparability among agencies, we defined the acquisition workforce as those individuals serving in the following 14 occupation series:

1. GS-246: Industrial relations
2. GS-346: Logistics management
3. GS-511: Auditors
4. GS-1101: General business
5. GS-1102: Contracting series
6. GS-1103: Industrial property manager
7. GS-1104: Property disposal
8. GS-1105: Purchasing officer
9. GS-1106: Procurement clerical support
10. GS-1150: Industrial specialists
11. GS-1152: Production control
12. GS-1910: Quality assurance
13. GS-2003: Supply management
14. GS-2010: Inventory management

Table 4 provides additional information on the data elements we included in each agency’s profile. We reviewed each agency profile with senior agency procurement officials and incorporated their comments where appropriate. We conducted this work between September 2002 and March 2003 in accordance with generally accepted government auditing standards.

Mission: To support and defend the Constitution of the United States; provide for the common defense of the nation, its citizens, and its allies; and protect and advance U.S. interests around the world.

The Department of Defense’s (DOD) fiscal year 2001 total discretionary budget resources were divided as follows: the Army accounted for 25 percent, the Air Force for 23 percent, the Navy for 21 percent, and other defense agencies for the remaining 31 percent. DOD’s discretionary resources increased by 10 percent from fiscal year 1997 through fiscal year 2001 and totaled $446.3 billion in fiscal year 2001. Over the 5-year period, the proportion of DOD’s discretionary resources spent under contracts remained stable at 34 percent. DOD’s purchases of goods increased by 23 percent, totaling $66.1 billion in fiscal year 2001. Purchases of services for contracts over $25,000 increased by 7 percent over the 5-year period, accounting for more than 54 percent of DOD’s contracts, or about $77.0 billion, in fiscal year 2001. Over the 5-year period, DOD’s service spending was driven by increased spending for information technology (46 percent); professional, administrative, and management support (21 percent); and medical services (22 percent). 
Although slightly declining since fiscal year 1997, research and development contracts accounted for about 28 percent, or $21.5 billion, of DOD’s total service spending in fiscal year 2001. Over the 5-year period, spending changed on the following goods: ships (128 percent) and aircraft. DOD spent about $143.1 billion on contracts over $25,000 in fiscal year 2001, with firm fixed-price and other kinds of fixed-price contracts accounting for over 63 percent of DOD’s contract dollars. Since fiscal year 1997, contract dollars awarded under competitive procedures have accounted for about 58 percent of DOD’s total contract dollars over $25,000. Purchase card use has increased by 169 percent over the 5-year period, totaling $6.1 billion in fiscal year 2001. In fiscal year 2001, DOD authorized the use of 230,646 purchase cards. In fiscal year 2001, about 23 percent of DOD’s eligible contracts were performance based. DOD’s total workforce and acquisition workforce have declined by 9 percent since fiscal year 1997, continuing a decade-long decline that began in the early 1990s. DOD’s total workforce decreased to about 629,000 and the acquisition workforce decreased to about 69,000 in fiscal year 2001. Over 90 percent of DOD’s acquisition workforce has at least 10 years of federal service; by fiscal year 2008, 39 percent will be eligible to retire. Improve the credibility and effectiveness of the acquisition and logistics support process. Change budgeting, procurement, program management, and logistics processes and policies to support implementation of evolutionary acquisition and reduce cycle time. Improve logistics responsiveness and supply chain integration. Implement reforms to increase efficiency and effectiveness in the acquisition of services and improve support of socioeconomic programs. Revitalize the quality and morale of the DOD acquisition workforce.
Improve training and education by building a new learning environment designed to empower each DOD acquisition workforce member with more control over learning needs. Establish a life-cycle workforce management approach to the civilian workforce, including human capital. Use new outreach and communication strategies to create better awareness and knowledge of acquisition. Improve the health of the defense industrial base. Establish a strategic approach to adopt commercial acquisition processes. Sourcing and Acquisition: Challenges Facing the Department of Defense. GAO-03-574T. Washington, D.C.: March 19, 2003. Major Management Challenges and Program Risks: Department of Defense. GAO-03-98. Washington, D.C.: January 2003. Purchase Cards: Control Weaknesses Leave Army Vulnerable to Fraud, Waste, and Abuse. GAO-02-732. Washington, D.C.: June 27, 2002. Acquisition Workforce: Department of Defense’s Plans to Address Workforce Size and Structure Challenges. GAO-02-630. Washington, D.C.: April 30, 2002. Contract Management: DOD Needs Better Guidance on Granting Waivers for Certified Cost or Pricing Data. GAO-02-502. Washington, D.C.: April 22, 2002. Purchase Cards: Continued Control Weaknesses Leave Two Navy Units Vulnerable to Fraud and Abuse. GAO-02-506T. Washington, D.C.: March 13, 2002. Best Practices: Taking A Strategic Approach Could Improve DOD’s Acquisition of Services. GAO-02-230. Washington, D.C.: January 18, 2002. DOD Systems Modernization: Continued Investment in the Standard Procurement System Has Not Been Justified. GAO-01-682. Washington, D.C.: July 31, 2001. Contract Management: Not Following Procedures Undermines Best Pricing Under GSA’s Schedule. GAO-01-125. Washington, D.C.: November 28, 2000. Acquisition Reform: DOD’s Guidance on Using Section 845 Agreements Could Be Improved. GAO/NSIAD-00-33. Washington, D.C.: April 7, 2000. Contract Management: Few Competing Proposals for Large DOD Information Technology Orders. GAO/NSIAD-00-56.
Washington, D.C.: March 20, 2000. Report Number D-2002-075—Controls Over the DOD Purchase Card Program, March 29, 2002. Report Number D-2000-100—Contracts for Professional, Administrative, and Management Support Services, March 10, 2000. Report Number D-2000-088—DOD Acquisition Workforce Reduction Trends and Impacts, February 29, 2000. The Air Force’s total discretionary budget resources, after declining between fiscal years 1997 and 1999, increased by 8 percent between fiscal years 2000 and 2001 and totaled $101.1 billion in fiscal year 2001. Over the 5-year period, the proportion of the Air Force’s discretionary resources spent through contracts increased from 36 percent to about 40 percent. Since fiscal year 1997, Air Force spending has been driven by a 16 percent increase in the purchases of goods. In fiscal year 2001, service contracts remained steady at $20.9 billion, or about 53 percent of the Air Force’s contracts over $25,000. Research and development contracts decreased by 7 percent during the 5-year period, totaling $8.9 billion. The Air Force increased its combined purchases of aircraft components and related parts by 64 percent—spending about $13.0 billion on these items in fiscal year 2001. In addition, the Air Force changed its spending in other categories: information technology services (94 percent); aircraft engines and related parts (71 percent); professional, administrative, and management support (15 percent); and repair of equipment (-41 percent). The Air Force spent about $40 billion through contracts over $25,000 in fiscal year 2001, with firm fixed-price and other kinds of fixed-price contracts accounting for about 62 percent of the Air Force’s total contract dollars. About 24 percent of the Air Force’s eligible service contracts are performance based. Purchase card spending increased by almost 200 percent over the 5-year period. In fiscal year 2001, the Air Force authorized the use of 79,762 purchase cards.
The Air Force significantly increased its use of the federal supply schedule. The Air Force’s total workforce and acquisition workforce decreased by 9 percent from fiscal year 1999 through fiscal year 2001. By fiscal year 2008, about 38 percent of the current acquisition workforce will be eligible to retire. The Army’s total discretionary budget resources decreased 4 percent from fiscal year 1997 to fiscal year 2001 and totaled $113.1 billion in fiscal year 2001. The amount spent through contracts accounted for more than one-third of the Army’s discretionary resources in fiscal year 2001. Spending on goods increased by about 29 percent over fiscal year 1997 levels. In particular, the Army nearly tripled its spending on aircraft and increased its spending on ground vehicles by 66 percent. Spending on services increased by about 15 percent over the 1997 level, driven by a 55 percent increase in spending for professional, administrative, and management support contracts. Such contracts now account for 16 percent of the Army’s service contract spending, up from less than 12 percent in fiscal year 1997. The Army’s spending for research and development increased by about 8 percent over fiscal year 1997 levels. Of the $37 billion the Army spent on contracts over $25,000 in fiscal year 2001, $21.7 billion, or 59 percent, was spent using firm fixed-price contracts. Between fiscal years 1997 and 2001, purchase card use increased 139 percent, to $2.5 billion in fiscal year 2001. In fiscal year 2001, the Army authorized the use of 109,446 purchase cards. During the 5-year period, about 59 percent of the Army’s contract dollars were spent on competitively awarded contracts. In fiscal year 2001, 25 percent of the Army’s service contracts were performance based. The Army’s use of the federal supply schedule program increased by more than 190 percent over the 5-year period.
This increase was driven primarily by the increased purchases of services, which rose from about $604 million in fiscal year 1997 to over $1.7 billion in fiscal year 2001. From fiscal year 1997 through fiscal year 2001, the Army’s total workforce and acquisition workforce decreased by 6 and 7 percent, respectively. Throughout this period, the acquisition workforce accounted for about 8 percent of the total workforce. In fiscal year 2001, 61 percent of the acquisition workforce had 20 years or more of federal service, while 4 percent had fewer than 5 years of federal service. By fiscal year 2008, 40 percent of the current acquisition workforce will be eligible to retire. The Navy’s total discretionary budget resources increased 12 percent from fiscal year 1997 to fiscal year 2001 and totaled $95.3 billion in fiscal year 2001. The proportion of the Navy’s discretionary resources spent through contracts was about 45 percent in fiscal year 2001. From fiscal years 1997 through 2001, a 25 percent increase in spending for goods, coupled with a 6 percent decline in spending on services, resulted in the Navy spending nearly as much on goods as on services in fiscal year 2001. The principal causes of this shift were significant increases in spending on ships (up 133 percent) and decreased spending on research and development (down 17 percent). The Navy relies on a mix of fixed-price and cost-type contracts to achieve its mission. For example, in fiscal year 2001, firm fixed-price and other kinds of fixed-price contracts accounted for more than 53 percent of the Navy’s total contract dollars, while another 42 percent was spent under cost-type contracts. In fiscal year 2001, more than half of the Navy’s contract dollars were awarded under noncompeted contracts. Further, 16 percent of the Navy’s competitively awarded dollars went to contracts for which only one offer was received.
The Navy’s use of the federal supply schedule program increased by more than 227 percent over the 5-year period. This increase was driven primarily by the increased purchases of services, which rose from about $132 million in fiscal year 1997 to over $1.1 billion in fiscal year 2001. Purchase card use has increased by 185 percent over the 5-year period. In fiscal year 2001, the Navy spent about $1.8 billion with 27,926 purchase cards. About 15 percent of the Navy’s eligible service contracts were considered performance based. Since fiscal year 1997, both the Navy’s total workforce and its acquisition workforce have declined by about 10 percent, to about 176,000 and 15,000, respectively. In fiscal year 2001, more than 60 percent of the Navy’s acquisition workforce had 20 years or more of service; conversely, only 11 percent had fewer than 10 years of service. By fiscal year 2008, about 40 percent of the Navy’s acquisition workforce will be eligible to retire. Mission: To support agriculture production by ensuring a safe, affordable, nutritious, and accessible food supply; caring for agricultural, forest, and range lands; supporting sound development of rural communities; providing economic opportunities for farm and rural residents; expanding global markets for agricultural and forest products and services; and working to reduce hunger in America and throughout the world. The following components account for the majority of the Department of Agriculture’s (USDA) fiscal year 2001 total discretionary budget resources: The Forest Service manages public lands in national forests and grasslands. In fiscal year 2001, the Forest Service accounted for 21 percent of USDA’s discretionary resources. Food and Nutrition Service provides better access to food and a more healthful diet through its food assistance programs and comprehensive nutrition education efforts. In fiscal year 2001, the Food and Nutrition Service accounted for 20 percent of USDA’s discretionary resources.
Other key components, which combined to account for about 21 percent of USDA’s discretionary resources, include the Farm Service, the Rural Housing Service, and the Rural Development Service. USDA’s discretionary resources increased by 18 percent between fiscal years 1997 and 2001 and totaled $23.5 billion in fiscal year 2001. Over the 5-year period, the proportion of USDA’s discretionary resources spent through contracts increased from 14 percent to about 16 percent. About 70 percent of USDA’s contract spending went for purchases of goods, while about 30 percent was spent on services. For contracts valued over $25,000, spending on goods increased by 53 percent from fiscal year 1997 through fiscal year 2001. This increase was driven by purchases of food-related products and nonmetallic crude materials, such as cereal grains. Overall, purchases of food accounted for nearly half of USDA’s contract spending. Similarly, spending on services increased by 44 percent over the same period, primarily as a result of increased purchases of services related to natural resources and conservation and property maintenance and construction. Since fiscal year 1997, USDA’s spending has increased significantly in the following categories: natural resources and conservation (152 percent), nonmetallic crude materials (145 percent), construction (43 percent), and food (40 percent). USDA spent about $3.7 billion through contracts in fiscal year 2001, with firm fixed-price and other kinds of fixed-price contracts accounting for about 99 percent of USDA’s contract dollars. Purchase card spending between fiscal years 1997 and 2001 quadrupled, rising from $140.2 million to $564.2 million. In fiscal year 2001, USDA authorized the use of 22,865 purchase cards. Spending on commercial items using FAR part 12 procedures increased by about 93 percent from fiscal years 1997 through 1999, but decreased by 79 percent from fiscal years 1999 through 2001. 
USDA officials indicated that this decrease was a result of USDA reclassifying certain items as noncommercial. In fiscal year 2001, 93 percent of USDA’s contracts over $25,000 were competed. USDA’s total workforce in fiscal year 2001 was slightly higher than its fiscal year 1997 level. Over the same period, its acquisition workforce decreased by about 6 percent, to about 5,700 personnel. By fiscal year 2008, 29 percent of USDA’s current acquisition workforce will be eligible to retire. Purchase Card Oversight: To strengthen its oversight of purchase card transactions, USDA officials reported that they are: scanning the USDA Purchase Card Management System database for questionable transactions and requesting USDA agencies to justify these transactions as appropriate; issuing revised regulations to tighten card use and oversight procedures—for example, employees who fail to reconcile their accounts within 60 days will have their cards deactivated; and forming an interagency working group to define criteria for both automated alerts and preformatted reports for agency use in identifying questionable transactions. Performance-Based Service Contracting: USDA is emphasizing the use of performance-based service contracting procedures by using a monthly “report card,” which is furnished to senior management in each contracting activity’s headquarters. This report card addresses how well each USDA agency is doing in meeting established goals. In fiscal year 2002, USDA reported that more than 20 percent of its eligible service contracts were performance based, exceeding the governmentwide goal. Integrated Acquisition System: USDA is working to deploy its first corporate procurement automation system to support electronic requisitioning and contract document generation. The system is expected to interface with USDA’s corporate financial system and help improve the timeliness and accuracy of USDA’s financial statements. 
The system is currently undergoing pilot testing in locations within two USDA agencies. Workforce: To deal with the potential retirement of a large percentage of its skilled procurement workforce over the next few years, USDA is developing new regulations addressing classroom training, on-the-job experience, and education requirements for its acquisition workforce. USDA also recently added information on training resources and on-line classes to its procurement Web site’s home page. Major Management Challenges and Program Risks: Department of Agriculture. GAO-03-96. Washington, D.C.: January 2003. Mission: To foster a secure and reliable energy system that is environmentally and economically sustainable; to be a responsible steward of the nation’s nuclear weapons and nuclear materials; to clean up the department’s facilities; to lead in the physical sciences and advance the biological, environmental, and computational sciences; and to provide scientific instruments important to the Department of Energy (DOE). DOE groups its activities into three areas, which together comprised more than 90 percent of its fiscal year 2001 total discretionary budget resources. National Nuclear Security Administration (NNSA) maintains and enhances the safety, reliability, and performance of the U.S. nuclear weapons stockpile, including the ability to design, produce, and test weapons, to meet national security requirements. NNSA also engages in nonproliferation activities and the operation of Navy reactors. In fiscal year 2001, NNSA accounted for 35 percent of DOE’s discretionary resources. Energy Programs include nondefense environmental management, scientific research and development regarding renewable energy resources and nuclear energy, the remediation and maintenance of uranium facilities, and nuclear waste disposal. In fiscal year 2001, Energy Programs accounted for 31 percent of DOE’s discretionary resources. 
Environmental and other defense activities include defense-related environmental restoration and waste management, nuclear waste disposal, facilities closure projects, and environmental management privatization. In fiscal year 2001, these activities accounted for 29 percent of DOE’s discretionary resources. DOE’s discretionary resources rose by about 6 percent from fiscal year 1997 through fiscal year 2001 and totaled $25.5 billion in fiscal year 2001. DOE relies heavily on contracting to support its mission. For example, during the 5-year period, an average of 73 percent of its discretionary resources was spent on contracts. In fiscal year 2001, about 98 percent of DOE’s contract dollars for contracts over $25,000 went to services, of which three-quarters went to the management and operation of over 30 government-owned laboratories and nuclear facilities. DOE’s spending on natural resources and conservation services rose from $331.7 million in fiscal year 1997 to $1.35 billion in fiscal year 2001, a 306 percent increase. This increase was mostly driven by three large contracts for the cleanup, removal, and disposal of hazardous substances. DOE ranks as one of the largest agencies in research and development contracting, spending almost $1.1 billion in fiscal year 2001. DOE has historically relied on cost-type contracts as its primary contracting vehicle. In fiscal year 2001, $17.4 billion of the $18.6 billion—or 93 percent—that DOE spent on contracts over $25,000 was spent on cost-type contracts. Nearly all of DOE’s contracts for managing and operating its laboratories and facilities were cost-type. DOE continues to increase the amount awarded under competed contracts. For example, in fiscal year 2001, 64 percent of DOE’s contracts were competed. The use of purchase cards grew by an average of 12 percent annually from fiscal year 1997 through fiscal year 2001, and totaled approximately $220 million in fiscal year 2001.
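The "average of 12 percent annually" figure is a compound annual growth rate. The following is a minimal sketch of that computation; the fiscal year 1997 starting value is an assumed illustration, since only the roughly $220 million fiscal year 2001 total is reported.

```python
def average_annual_growth(start, end, years):
    """Compound annual growth rate: the constant yearly rate that
    carries `start` to `end` over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

start_fy1997 = 140.0  # $ millions; assumed for illustration, not from the report
end_fy2001 = 220.0    # $ millions; approximately the reported FY2001 total
rate = average_annual_growth(start_fy1997, end_fy2001, years=4)
# rate comes out near 0.12, i.e. roughly 12 percent per year
```

The same formula reproduces the simple multiyear growth statements used throughout the profiles when the endpoint values are known.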
In fiscal year 2001, DOE authorized the use of 6,250 purchase cards. DOE’s total workforce, as well as its acquisition workforce, remained relatively stable from fiscal year 1997 through fiscal year 2001. In fiscal year 2001, DOE’s total workforce was 15,997, with 1,449, or 9 percent, in the acquisition workforce. In fiscal year 2001, 57 percent of the acquisition workforce had 20 years or more of federal service, while 7 percent had fewer than 5 years of federal service. Contract Management: To address both pre-award and post-award contract administration issues, a Contract Administration Division was formed to provide guidance and seek out and resolve contract administration issues. During fiscal year 2002, the division conducted a review of over 50 internal DOE directives. As a result of this review, a significant number of directives will be revised, canceled, or consolidated in an effort to be more consistent with performance-based management principles. Acquisition Career Development Program: To ensure that workforce skills stay current, DOE adopted the requirement for 80 hours of continuous learning every two years. As of the first quarter of 2002, 88 percent of the covered workforce met the certification requirements. Electronic Procurement: To streamline and eliminate redundant processes and develop paperless solutions, DOE developed DOE/C-Web, a Web-based electronic small purchase system, and the Industry Interactive Procurement System (IIPS), a Web-based system for large contracts (over $100,000) to issue solicitations, receive proposals, conduct negotiations, and make awards via the Internet. The systems have allowed DOE to achieve the following results: 100 percent of all synopses and notices requiring posting in FedBizOpps were posted electronically through IIPS. The number of solicitations posted on IIPS increased from 88 in fiscal year 1999 to approximately 900 in fiscal year 2002. 
The number of transactions conducted via DOE/C-Web increased from approximately 1,800 in fiscal year 1998 to 2,743 in fiscal year 2002. Department of Energy: Status of Contract and Project Management Reforms. GAO-03-570T. Washington, D.C.: March 20, 2003. Major Management Challenges and Program Risks: Department of Energy. GAO-03-100. Washington, D.C.: January 2003. Contract Reform: DOE Has Made Progress, but Actions Needed to Ensure Initiatives Have Improved Results. GAO-02-798. Washington, D.C.: September 13, 2002. Department of Energy: Contractor Litigation Costs. GAO-02-418R. Washington, D.C.: March 8, 2002. IG-0538–Management Challenges at the Department of Energy, December 21, 2001. IG-0510–Use of Performance-Based Incentives at Selected Departmental Sites, July 9, 2001. IG-0509–Integrated Planning, Accountability, and Budgeting System-Information System, June 28, 2001. Mission: The Department of Health and Human Services (HHS) is the United States government’s principal agency for protecting the health and welfare of all Americans. The following bureaus account for the majority of HHS’ fiscal year 2001 discretionary budget resources: National Institutes of Health (NIH) is responsible for conducting scientific research regarding the nature and behavior of living systems to extend healthy life and reduce the burdens of illness and disability. In fiscal year 2001, NIH accounted for 36 percent of HHS’ discretionary resources. Administration for Children and Families is responsible for promoting the economic and social well being of families, children, individuals, and communities. In fiscal year 2001, the administration accounted for 21 percent of HHS’ discretionary resources. Other key bureaus: Food and Drug Administration, Centers for Medicare and Medicaid Services, Centers for Disease Control and Prevention, and Indian Health Service. 
HHS’ discretionary budget increased by 47 percent from fiscal year 1997 through fiscal year 2001, and totaled $61 billion in fiscal year 2001. Over the 5-year period, the proportion of HHS’ discretionary resources spent under contracts decreased slightly, dropping from more than 9 percent in fiscal year 1997 to 8 percent in fiscal year 2001. HHS relied heavily on services from fiscal years 1997 through 2001. HHS’ spending on goods increased 127 percent, from $159 million to $360 million, during the 5-year period, while spending on services increased 19 percent, from $3.3 billion to $3.9 billion, between fiscal years 1997 and 2001. HHS spent about $1.1 billion on research and development projects in fiscal year 2001, or about 27 percent of total contract spending. The amount spent on these projects remained relatively stable during the 5-year period. From fiscal years 1997 through 2001, HHS experienced significant spending increases in three categories: medical, dental, and veterinary equipment (363 percent); professional, administrative, and management support (123 percent); and IT services (115 percent). HHS spent about $4.3 billion through contracts over $25,000 in fiscal year 2001. HHS relies on cost-type contracts as its primary contracting vehicle, accounting for 66 percent of these contract dollars. Purchase card spending increased by 258 percent over the 5-year period, from $95.2 million in fiscal year 1997 to $341.2 million in fiscal year 2001. HHS’ significant increase in its use of the federal supply schedule has been driven by purchases of services, which increased from about $6 million to over $98 million during the 5-year period. The total HHS workforce has been increasing, particularly at NIH and the Food and Drug Administration, due to bio-defense initiatives. This trend is expected to continue, given the current focus on combating the threat of biological or chemical terrorism.
In fiscal year 2002, more than 80 percent of the acquisition workforce had more than 10 years of federal service; the majority had 20 years or more of service. By fiscal year 2008, about 33 percent of the current acquisition workforce will be eligible to retire. Reverse auctioning: Reverse auctioning is a process that allows many sellers to compete for the business of a single buyer. Unlike in a traditional auction, however, bid prices go down. HHS claimed cost savings of more than $1.3 million from fiscal year 2000 through 2001 using reverse auctioning techniques. Performance-based service contracting: HHS officials reported significant increases in their use of performance-based contracting, including contracts with Medicare intermediaries and carriers. Major Management Challenges and Program Risks: Department of Health and Human Services. GAO-03-101. Washington, D.C.: January 2003. Medicare: Comments on HHS’ Claims Administration Contracting Reform Proposal. GAO-01-1046R. Washington, D.C.: August 17, 2001. Medicare Contracting Reform: Opportunities and Challenges in Contracting for Claims Administration Services. GAO-01-918T. Washington, D.C.: June 28, 2001. Medicare: Opportunities and Challenges in Contracting for Program Safeguards. GAO-01-616. Washington, D.C.: May 18, 2001. Inspector General Report Number A-04-99-05561—Audit of Medicare Administrative Costs Claimed by Blue Cross Blue Shield of Florida for Fiscal Years 1995 through 1998, July 31, 2002. Mission: Enforcing laws in the public interest and protecting the public from criminal activity. The following bureaus account for the majority of the Department of Justice’s (DOJ) fiscal year 2001 total discretionary budget resources: Office of Justice Programs develops programs that improve law enforcement’s ability to prevent and control crime, improve the criminal and juvenile justice systems, increase knowledge about crime and related issues, and assist crime victims.
In fiscal year 2001, the office accounted for 24 percent of DOJ’s discretionary resources. Federal Bureau of Prisons seeks to provide safe, efficient, and humane correctional services and programs. In fiscal year 2001, the bureau accounted for 18 percent of DOJ’s discretionary resources. Federal Bureau of Investigation (FBI) conducts investigations and enforces federal laws. In fiscal year 2001, the FBI accounted for 16 percent of DOJ’s discretionary resources. Other key bureaus: Immigration and Naturalization Service and Drug Enforcement Administration. DOJ’s discretionary resources increased 16 percent from fiscal year 1997 through fiscal year 2001 and totaled $26.4 billion in fiscal year 2001. Over the 5-year period, the proportion of DOJ’s discretionary resources spent under contracts increased slightly, rising from 15 percent in fiscal year 1997 to 17 percent in fiscal year 2001. For contracts valued over $25,000, spending on services increased 64 percent from fiscal year 1997 through fiscal year 2001. This growth was driven by increases in the following services: professional, administrative, and management support services (128 percent); building construction (125 percent); and information technology services (64 percent). DOJ spent about $3.9 billion on contracts over $25,000 in fiscal year 2001. DOJ relied on firm fixed-price and other kinds of fixed-price contracts as the agency’s primary contracting vehicles. On average from fiscal year 1997 through fiscal year 2001, fixed-price contracts accounted for 85 percent of DOJ’s contract dollars. DOJ’s use of the federal supply schedule and contracts awarded by other agencies increased considerably during the 5-year period. For example, DOJ spent $234 million in fiscal year 1997 using the federal supply schedule; in fiscal year 2001, it spent $470 million. Purchase card spending increased by 179 percent over the 5-year period, from $190.9 million in fiscal year 1997 to $533.4 million in fiscal year 2001.
In fiscal year 2001, DOJ authorized the use of 16,073 purchase cards. DOJ’s use of FAR part 12 to purchase commercial items grew more than 400 percent over the last 5 years. This increase was due to DOJ’s increased emphasis on (1) commercial purchases and (2) more accurate reporting of data to FPDS. Workforce: From fiscal year 1997 through fiscal year 2001, DOJ’s total workforce increased by 11 percent—growing from about 110,000 to 123,000. However, DOJ’s acquisition workforce remained relatively stable, decreasing 2 percent over the 5-year period. In fiscal year 2002, more than 85 percent of the acquisition workforce had 10 years or more of federal service. By fiscal year 2008, about 30 percent of the current acquisition workforce will be eligible to retire. Acquisition-related electronic government initiatives: Since October 2000, DOJ has implemented several e-government programs to improve its procurement processes. These programs include: Federal Business Opportunities, which is a GSA-managed Web-based system that provides for electronic notice of agency requirements and solicitations for contract opportunities. This system has been deployed departmentwide, and all DOJ synopses for contracts over $25,000 are now posted to that site. Central Contractor Registration is a Web-based governmentwide database of vendor information. DOJ is the administrator for two programs used to track contractor performance: Contractor Past Performance System: an electronic federal report card collection system that is used to collect and record past performance information for subsequent use in determining contractor eligibility and selection. Past Performance Information Retrieval System: a Web-enabled application that allows the retrieval of contractor past performance information from various databases.
Departmentwide guidelines for evaluating candidates for GS-1102 contract specialist positions: Over the last 5 years, guidelines were issued to bureau personnel officers for evaluating candidates for GS-1102 contract specialist positions. Key elements include new education standards and certification processes. Major Management Challenges and Program Risks: Department of Justice. GAO-03-105. Washington, D.C.: January 2003. Information Technology: INS Needs to Strengthen Its Investment Management Capability. GAO-01-146. Washington, D.C.: December 29, 2000. Border Patrol: Procurement of MD 600N Helicopters Should Be Reassessed. GGD-00-201. Washington, D.C.: September 29, 2000. Report Number 02-32–Federal Bureau of Prisons Management of Construction Contracts, August 2002. Report Number 01-16–Justice’s Reliance on Private Contractors for Prison Services, July 31, 2002. Mission: To promote a stable economy, manage the government’s finances, and safeguard federal financial systems and our nation’s leaders. The following bureaus account for the majority of Treasury’s fiscal year 2001 total discretionary budget resources: Internal Revenue Service (IRS)—responsible for determining, assessing, and collecting tax revenue in the United States. In fiscal year 2001, the IRS accounted for 51 percent of Treasury’s total discretionary budget resources. U.S. Customs Service—responsible for enforcing laws to safeguard U.S. borders against the illegal entry of goods and for regulating legitimate commercial activity. In fiscal year 2001, Customs accounted for 20 percent of Treasury’s total discretionary budget resources. Other key bureaus—the U.S. Mint, the Secret Service, and the Bureau of Alcohol, Tobacco, and Firearms. Treasury’s total discretionary budget resources increased by 32 percent from fiscal year 1997 through fiscal year 2001 and totaled $19.7 billion in fiscal year 2001.
Over the same period, the amount spent through contracts increased slightly, both in real terms and as a share of Treasury’s discretionary resources. For example, in fiscal year 1997, contract obligations accounted for about 14 percent of Treasury’s discretionary resources; by fiscal year 2001, contract obligations accounted for 17 percent. For contracts valued over $25,000, spending on services increased by 71 percent from fiscal year 1997 through fiscal year 2001, while spending on goods increased by 44 percent. Treasury’s spending on goods increased significantly during fiscal year 1999; this was attributed to (1) the U.S. Mint’s development of the “state quarters” program, (2) the Secret Service’s upgrade in hand weapons, and (3) preparation for Y2K-related incidents. Since fiscal year 1997, Treasury experienced significant spending increases in four categories: information technology (IT) equipment (181 percent), communication detection equipment (144 percent), administrative and management support services (138 percent), and IT services (81 percent). Treasury has changed its procurement approach in several key areas since fiscal year 1997: Treasury’s use of contracts awarded or administered by other agencies has doubled over this period and accounted for about 16 percent of Treasury’s contract obligations in fiscal year 2001. After increasing by about 53 percent from fiscal years 1997 through 1999, purchase card use remained relatively stable through fiscal year 2001. In fiscal year 2001, Treasury authorized the use of 16,558 purchase cards. Treasury’s workforce size has remained relatively stable over the 5-year period. Between fiscal years 1997 through 2001, Treasury’s total workforce grew about 2 percent from about 156,000 employees to almost 159,000. Treasury’s acquisition workforce represents about 1.5 percent of its total workforce. 
In fiscal year 2001, 47 percent of the acquisition workforce had 20 years or more of federal service, while only 4 percent of the workforce had fewer than 5 years of service. By fiscal year 2008, approximately 30 percent of Treasury’s acquisition workforce will be eligible to retire. Performance-based Service Contracting (PBSC): To increase its use of PBSC methods, Treasury developed PBSC training and a handbook, presented information on PBSC to all bureaus, and worked with bureaus on individual procurements. Procurement officials at Treasury stated that these efforts resulted in increasing Treasury’s use of performance-based contracts to 20 percent in fiscal year 2002. Improving procurement system reviews: Treasury developed the Acquisition Management Assistance Review program to assess three key areas (people, process, and tools) to determine the health of Treasury’s procurement systems. Procurement Intern Program: Because its acquisition workforce is aging, Treasury developed a procurement intern program to identify and develop new talent for the bureaus. Major Management Challenges and Program Risks: Department of the Treasury. GAO-03-109. Washington, D.C.: January 2003. Acquisition Workforce: Status of Agency Efforts to Address Future Needs. GAO-03-55. Washington, D.C.: December 18, 2002. IRS Contracting: New Procedure Adds Price or Cost as a Selection Factor for Task Order Awards. GAO-03-218. Washington, D.C.: December 10, 2002. Business Systems Modernization: IRS Needs to Better Balance Management Capacity with System Acquisition Workload. GAO-02-356. Washington, D.C.: February 28, 2002. OIG-02-074—General Management: The Mint Leased Excessive Space For Its Headquarters Operation, March 29, 2002. Mission: To ensure a fast, safe, efficient, accessible, and convenient transportation system that meets our vital national interests and enhances the quality of life of the American people, today and into the future. 
Two administrations account for the majority of the Department of Transportation’s (DOT) fiscal year 2001 total discretionary budget resources: The Federal Aviation Administration (FAA) regulates civil aviation to promote safety and fulfill the requirements of national defense; encourages and develops civil aeronautics, including new aviation technology; operates a common system of air traffic control and navigation for both civil and military aircraft; implements programs to control aircraft noise and other environmental effects of civil aviation; and regulates U.S. commercial space transportation. In fiscal year 2001, the FAA accounted for 41 percent of DOT’s total discretionary budget resources. The United States Coast Guard (USCG) is responsible for maritime search and rescue, recreational boating safety, vessel traffic management, at-sea enforcement of living marine resource laws and treaty obligations, at-sea drug and illegal migrant interdiction, and port security. In fiscal year 2001, the USCG accounted for 15 percent of DOT’s total discretionary budget resources. Other key organizations: Federal Transit Administration and Federal Highway Administration. DOT’s discretionary resources decreased 28 percent from $51.8 billion in fiscal year 1997 to $37.3 billion in fiscal year 2001. This was due largely to changes brought about by the Transportation Equity Act for the 21st Century (TEA-21). TEA-21, which was enacted in June 1998, shifted a significant amount of discretionary funds to mandatory spending categories. While discretionary resources decreased during the 5-year period, the amount spent through contracts increased by 26 percent. In fiscal year 2001, the DOT spent almost $5.6 billion, or 15 percent of its discretionary resources, through contracting. Spending on goods decreased by 29 percent from fiscal year 1997 to fiscal year 2001, while spending on services increased 49 percent. 
In fiscal year 2001, 85 percent of the amount spent through contracts over $25,000 was for services. DOT relies on a mix of contract types to achieve its mission; slightly more than half of DOT’s fiscal year 2001 contracts were fixed-price, while more than a quarter were cost-type. Another 14 percent were labor hours or time and materials contracts. In fiscal year 2001, 19 percent of DOT’s service contracts were performance based. In fiscal year 2001, DOT authorized the use of 21,728 purchase cards. The amount spent using FAR part 12 procedures increased from $297 million in fiscal year 1997 to $1.6 billion in fiscal year 2001, a 437 percent increase. In fiscal year 2001, DOT’s total workforce was 64,509, a 2 percent increase from fiscal year 1997. About 2 percent of the total workforce was made up of the acquisition workforce, which decreased by 7 percent during the 5-year period, from 1,634 in fiscal year 1997 to 1,514 in fiscal year 2001. In fiscal year 2001, 56 percent of the acquisition workforce had 20 years or more of federal service, while 5 percent had fewer than 5 years of service. By fiscal year 2008, approximately 36 percent of DOT’s acquisition workforce will be eligible to retire. Security: To respond to the security crisis within our country after September 11, 2001, DOT: (1) helped stand up the Transportation Security Administration (TSA) by assisting in creating a TSA Acquisition Management System, developing a standard set of TSA contract provisions and clauses, and providing operational support in the solicitation and award of TSA contracts; and (2) continues to address security issues relating to controlling access to sensitive information and background checks on contractor personnel in positions where sensitive information or national security interests are present. 
Procurement Performance Management System: DOT continues its major initiative to improve procurement performance by implementing DOT’s procurement performance management program. This program assists managers in targeting areas for improvement based on the results of specified metrics chosen for their importance to the administration, DOT management, or DOT customers. Major Management Challenges and Program Risks: Department of Transportation. GAO-03-108. Washington, D.C.: January 2003. National Airspace System: Status of FAA’s Standard Terminal Automation Replacement System. GAO-02-1071. Washington, D.C.: September 17, 2002. National Airspace System: FAA’s Approach to Its New Communications System Appears Prudent, but Challenges Remain. GAO-02-710. Washington, D.C.: July 15, 2002. FAA Alaska: Weak Controls Resulted in Improper and Wasteful Purchases. GAO-02-606. Washington, D.C.: May 30, 2002. Coast Guard: Budget and Management Challenges for 2003 and Beyond. GAO-02-538T. Washington, D.C.: March 19, 2002. Coast Guard: Progress Being Made on Deepwater Project, but Risks Remain. GAO-01-564. Washington, D.C.: May 2, 2001. National Airspace System: Persistent Problems in FAA’s New Navigation System Highlight Need for Periodic Re-evaluation. GAO/RCED/AIMD-00-130. Washington, D.C.: June 12, 2000. FI-2002-092–FAA Oversight of Cost Reimbursable Contracts, May 8, 2002. FI-2002-089–DOT’s Information Technology Omnibus Procurement Program (ITOP), April 15, 2002. FI-2001-057–FRA E-Mail System Replacement Contracts, May 3, 2001. FI-2000-125–Inactive Obligations on Contracts, September 25, 2000. AV-2000-127–Technical Support Services Contract: Better Management Oversight and Sound Business Practices Are Needed, September 28, 2000. 
Mission: To restore the capability of those who suffered harm during their military service; to ensure a smooth transition as veterans return to civilian life in their communities; to honor and serve all veterans for the sacrifices they made on behalf of the nation; to contribute to the public health, socioeconomic well-being, and history of the nation. The following administrations account for nearly all of the Department of Veterans Affairs’ (VA) fiscal year 2001 total discretionary budget resources. Veterans Health Administration is responsible for medical care, education, and research, and serves as medical backup to the Department of Defense. In fiscal year 2001, VHA accounted for 89 percent of VA’s discretionary resources. Veterans Benefits Administration provides benefits and services to veterans and their dependents, including compensation and pensions, education benefits, loan guarantees, and insurance. National Cemetery Administration provides burial benefits to veterans and eligible dependents and Presidential Memorial Certificates to deceased veterans’ next of kin. VA’s discretionary resources rose by about 20 percent from fiscal year 1997 through fiscal year 2001 and totaled $26.5 billion in fiscal year 2001. In fiscal year 2001, contract obligations accounted for 22 percent, or $5.9 billion, of VA’s discretionary resources. VA spends almost half of its contract dollars on medical and dental equipment and supplies. Since fiscal year 1997, spending for these supplies has grown by 92 percent due in large part to an increase in patient workload. Spending on services grew by about 14 percent, largely driven by increased spending for information technology (226 percent) and medical services (24 percent). VA relies heavily on firm fixed-price contracts. In fiscal year 2001, $3.9 billion—or 91 percent—of the $4.3 billion that VA obligated for contracts over $25,000 was obligated on firm fixed-price contracts. 
Purchase card spending increased from $855 million in fiscal year 1997 to $3.8 billion in fiscal year 2001, a 344 percent increase. In fiscal year 2001, VA authorized the use of 34,090 purchase cards. VA spent $3.4 billion, 79 percent of total contracting dollars, on competed contracts in fiscal year 2001. VA typically received two or more offers on more than 90 percent of its competed contracts. At 202,414 personnel in fiscal year 2001, VA’s total workforce was about the same level as fiscal year 1997. VA’s acquisition workforce decreased by 6 percent from its fiscal year 1997 level and totaled about 2,562 personnel in fiscal year 2001. In fiscal year 2001, 52 percent of the acquisition workforce had 20 years or more of federal service, while 5 percent had fewer than 5 years of federal service. Computer hardware and software procurement: VA requires that computer hardware and software vendors offer products to VA at a cost equal to or lower than those offered to any other customers. Prices that are found to be too high are required to be lowered before they are accepted. According to VA officials, this initiative resulted in savings of $33 million in the period of June through October 2002. The prices paid by VA over this time period averaged 21.6 percent below the vendors’ GSA Federal Supply Schedule prices for the same items. Vocational rehabilitation and employment service national acquisition strategy: To address concerns related to contracting for services in the field, a task force developed the National Acquisition Strategy to provide uniform prices and services at the 58 Veterans Benefit Administration regional offices. In September 2002 VA awarded 249 performance-based service contracts using a uniform format. According to VA officials, over 95 percent of the awards went to small businesses, veteran-owned businesses, and service-disabled, veteran-owned businesses. 
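The growth figures throughout these agency profiles are simple percent-change calculations on the reported obligations. As an illustrative check (the function name and the rounding are our own, not the report's; the dollar amounts are taken from the text), the purchase card growth cited above can be reproduced as:

```python
def percent_change(old: float, new: float) -> float:
    """Percent change from old to new: the formula behind the growth figures in this report."""
    return (new - old) / old * 100

# VA purchase card spending, FY1997 -> FY2001, in millions of dollars (from the text)
growth = percent_change(855.0, 3800.0)
print(round(growth))  # prints 344, matching the "344 percent increase" cited above
```

The same formula reproduces other figures in these profiles, such as GSA's 691 percent growth in IT and telecommunication services spending ($594.0 million to $4.7 billion).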
Joint contracting between VA and DOD: In March 2001, VA and DOD had 34 joint contracts for pharmaceuticals. In November 2002, that number more than doubled, to 76. In addition, there are 18 pending joint contracts for pharmaceuticals, vital sign monitors, and radiation therapy equipment. VA Federal Supply Schedule Program: The VA Federal Supply Schedule Program was expanded in late 2000 to include professional health care services. This schedule is open to all federal agencies and provides for temporary contract services of surgeons, specialists, nurses, radiologists, pharmacists, and dentists. Recently, allied health services (nursing assistants, pharmacy technicians, and dental assistants) were added to this schedule. Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003. VA and DOD Health Care: Factors Contributing to Reduced Pharmacy Costs and Continuing Challenges. GAO-02-969T. Washington, D.C.: July 22, 2002. VA and Defense Health Care: Potential Exists for Savings through Joint Purchasing of Medical and Surgical Supplies. GAO-02-872T. Washington, D.C.: June 26, 2002. DOD and VA Pharmacy: Progress and Remaining Challenges in Jointly Buying and Mailing Out Drugs. GAO-01-588. Washington, D.C.: May 25, 2001. VA Laundry Service: Consolidations and Competitive Sourcing Could Save Millions. GAO-01-61. Washington, D.C.: November 30, 2000. 01-00504-9–Summary Report–Combined Assessment Program Reviews at Veterans Health Administration Medical Facilities (January 1999–March 2001), October 10, 2001. 01-01855-75–Evaluation of the Department of Veterans Affairs Purchasing Practices, May 15, 2001. 9R3-E99-037–Audit Of The Department Of Veterans Affairs Purchase Card Program, February 12, 1999. 8D2-E01-002–Audit of VA Procurement Initiatives For Computer Hardware, Software, and Services (PCHS/PAIRS) and Selected Information Technology Investments, January 22, 1998. 
Mission: To provide policy leadership and expertly managed space, products, services, and solutions, at the best value, to enable federal agencies to accomplish their missions. The following services account for the majority of the General Services Administration’s (GSA) fiscal year 2001 total discretionary budget resources. Federal Technology Service (FTS) provides information technology solutions and network services to support federal agencies. In fiscal year 2001, FTS accounted for 46 percent of GSA’s total discretionary budget resources. The Public Building Service (PBS) oversees the construction, development, and maintenance of federal buildings and manages the leasing of commercial office space. In fiscal year 2001, PBS accounted for 51 percent of GSA’s total discretionary budget resources. Federal Supply Service (FSS) provides agencies with numerous supplies and services, including commercial products, professional services, vehicle acquisition and leasing, and travel and transportation services. FSS manages the Federal Supply Schedule program, which provides federal agencies with access to more than 4 million products and services and coordinates the governmentwide travel and purchase card programs. The FSS accounts for none of GSA’s discretionary resources, because the service is financed by a revolving fund. With a revolving fund, the FSS obtains most of its funding from the fees paid by other agencies to buy from the FSS program. GSA’s discretionary resources increased by 38 percent from fiscal year 1997 through fiscal year 2001 and totaled $19.3 billion in fiscal year 2001. Over the 5-year period, the proportion of GSA’s discretionary resources spent under contracts increased from 59 percent to about 64 percent. GSA relies heavily on service contracts, which accounted for more than 80 percent of all contracts over $25,000 in fiscal year 2001. 
For contracts valued over $25,000, spending on services increased by 75 percent from fiscal year 1997 through fiscal year 2001. Spending on goods increased by 13 percent. The increase in service spending was driven by increased purchases of IT services, which grew from $594.0 million in fiscal year 1997 to $4.7 billion in fiscal year 2001. GSA’s increased share is largely attributable to the growth of GSA’s Federal Technology Service. However, orders placed by the Federal Technology Service are counted as spending by GSA, rather than spending by the federal agency that will ultimately receive the service or equipment. Since fiscal year 1997, GSA’s spending has undergone significant increases in the following categories: IT and telecommunication services (691 percent), motor vehicles (24 percent), IT equipment (19 percent), and lease of facilities (17 percent). GSA spent about $11.7 billion through contracts in fiscal year 2001, with firm fixed-price and other kinds of fixed-price contracts accounting for over 90 percent of GSA’s contract dollars. Purchase card spending doubled since fiscal year 1997, and totaled nearly $160 million in fiscal year 2001. In fiscal year 2001, GSA authorized the use of 3,776 purchase cards. GSA’s total workforce has remained relatively stable over the 5-year period, at about 14,100. Over this same period, its acquisition workforce has increased by more than 10 percent. More than 91 percent of GSA’s acquisition workforce has more than 10 years of federal service; by fiscal year 2008, 34 percent will be eligible to retire. Construction Brain Trust. The Construction Brain Trust was implemented in fiscal year 2001 to reduce the time, cost and complexity of the construction contracting process. The membership consists of representatives from GSA policy offices, GSA regions, and construction-related associations, surety companies, and law firms. Agency-wide performance-based contracting program. 
To improve its use of performance-based contracts, GSA established a Web site and developed additional training materials, such as the Seven Steps to Performance-based Service Acquisition Guide, for use by its acquisition personnel. Applied Learning Center. This initiative was implemented in 2001. The long-term goal of the center is to help acquisition professionals perform their jobs, identify skill gaps, and broaden the knowledge base of acquisition professionals into areas such as budget, finance, and program management. Contract Management: Government Faces Challenges in Gathering Socioeconomic Data on Purchase Card Merchants. GAO-03-56. Washington, D.C.: December 13, 2002. Acquisition Workforce: Status of Agency Efforts to Address Future Needs. GAO-03-55. Washington, D.C.: December 18, 2002. Acquisition Workforce: Agencies Need to Better Define and Track the Training of Their Employees. GAO-02-737. Washington, D.C.: July 29, 2002. Contract Management: Interagency Contract Program Fees Need More Oversight. GAO-02-734. Washington, D.C.: July 25, 2002. Contract Management: Roles and Responsibilities of the Federal Supply Service and Federal Technology Service. GAO-02-821R. Washington, D.C.: June 7, 2002. Telecommunications: GSA Action Needed to Realize Benefits of Metropolitan Area Acquisition Program. GAO-02-325. Washington, D.C.: April 4, 2002. Contract Management: Not Following Procedures Undermines Best Pricing Under GSA’s Schedule. GAO-01-125. Washington, D.C.: November 28, 2000. Special Report on FSS’s Multiple Award Schedule Pricing Practices, August 24, 2001. Report Number A995288–Audit of Federal Technology Service’s Use of Multiple Award Indefinite Delivery Indefinite Quantity Contracts, September 19, 2000. Report Number A995175–Audit of the Federal Protective Service’s Contract Guard Program, March 28, 2000. 
Mission: To develop human exploration of space, advance and communicate scientific knowledge, and research and develop aeronautics and space technologies. Two accounts make up the majority of the National Aeronautics and Space Administration’s (NASA) fiscal year 2001 total discretionary budget resources: Science, Aeronautics and Technology (SAT) provides funds for research and development in the offices of Space Science, Earth Science, Biological and Physical Research, and Aerospace Technology. SAT also funds academic programs that NASA has established in elementary and secondary schools, as well as research conducted at more than 100 universities in the United States. In fiscal year 2001, SAT accounted for 44 percent of NASA’s discretionary resources. Human Space Flight (HSF) primarily provides funds for the construction and operation of the international space station and the operation of the space shuttle program. Other programs include developing expendable launch vehicles, improving space communications and data systems, and providing safety and mission support. HSF also provides for the design, repair, rehabilitation, and modification of facilities and construction of new facilities. In fiscal year 2001, HSF accounted for 45 percent of NASA’s discretionary resources. NASA’s discretionary resources decreased by about 6 percent from fiscal year 1997 through fiscal year 2001, totaling $15.8 billion in fiscal year 2001. The amount spent through contracts decreased slightly, both in real terms and as a share of NASA’s discretionary resources. Nevertheless, NASA relies on contracts to achieve its mission to a greater extent than most federal agencies. NASA contracts primarily for services. 
Of NASA’s $11 billion spent on contracts over $25,000 in fiscal year 2001, about $9.6 billion—or 86 percent—were for services, including operating various government-owned facilities, providing professional and administrative support, and conducting research and development activities. Overall, NASA’s spending for services declined by 7 percent between fiscal years 1997 and 2001, though there were significant variations in individual service categories. NASA uses a variety of methods in carrying out its procurement functions. Due to the nature of the items and services needed to carry out its mission, NASA relies heavily on cost-type contracts; 83 percent of contract obligations over $25,000 for fiscal year 2001 were made under cost-type contracts. NASA spent slightly more than half of its contracts over $25,000 on competed contracts, a relatively lower percentage than other federal agencies. For those contracts it competes, NASA receives two or more bids nearly 90 percent of the time. NASA reports that 64 percent of eligible service contracts were performance based in fiscal year 2001. NASA’s use of purchase cards grew since fiscal year 1997, but accounts for a small percentage of its budget. While NASA generally acquires government-unique items, it increased its purchases using FAR part 12 procedures from $225 million in fiscal year 1997 to about $794 million in fiscal year 2001. The size of NASA’s workforce remained relatively stable from fiscal year 1997 through fiscal year 2001, decreasing by about 4 percent. NASA’s acquisition workforce, which represented about 7 percent of its total workforce, experienced a similar trend. In fiscal year 2001, 56 percent of the acquisition workforce had 20 years or more of federal service, while only 3 percent had fewer than 5 years of service. By fiscal year 2008, approximately 33 percent of NASA’s acquisition workforce will be eligible to retire. 
Risk-based acquisition management: To reduce the incidence and severity of impacts arising from unforeseen programmatic events, NASA recently developed this process to integrate risk principles when developing the acquisition strategy, selecting sources, choosing contract type, structuring fee incentives, and conducting contractor surveillance. Award term contracting: NASA is using this approach to reward contractor performance by enabling contract extension for excellent performance and reduced costs. In addition to profit, a continuing relationship becomes a prime motivator for the contractor. Evaluate and Improve Performance-Based Service Contracting: NASA has initiated an agencywide awareness program and training sessions for government and contractor employees relating to performance-based service contracting. Major Management Challenges and Program Risks: National Aeronautics and Space Administration. GAO-03-114. Washington, D.C.: January 2003. Space Station: Actions Under Way to Manage Cost, but Significant Challenges Remain. GAO-02-735. Washington, D.C.: July 17, 2002. NASA: Compliance With Cost Limits Cannot Be Verified. GAO-02-504R. Washington, D.C.: April 10, 2002. IG-03-003–NASA Contracts for Professional, Administrative, and Management Support Services, October 16, 2002. IG-02-027–NASA’s Contract Audit Follow-up System, September 30, 2002. IG-02-011–Review of Performance-Based Service Contract Quality Assurance Surveillance Plans, June 24, 2002. IG-02-011–International Space Station Spare Parts Costs, March 22, 2002. IG-02-002–Restructuring of the International Space Station Contract, November 8, 2001. IG-01-027–Acquisition of the Space Station Propulsion Module, May 21, 2001.
The federal government, comprised of more than 60 agencies and nearly 1.7 million civilian workers, acquires most of its goods and services through contracts. Recent changes in what the government buys, its contracting approaches and methods, and its acquisition workforce have combined to create a dynamic acquisition environment. Many of these recent changes enhance contracting efficiency and offer a number of benefits, such as reduced administrative burdens. However, GAO's past work has found that if these changes are not accompanied by proper training, guidance, and internal controls, agency procurements may be at greater risk. While effectively managing contracts is always a key management responsibility, this responsibility is more acute in those agencies that rely heavily on acquisitions to accomplish their missions. The goal of this report is to identify for Congress, the administration, and accountability organizations those procurement-related trends and challenges that may affect federal agencies. Specifically, GAO analyzed recent federal procurement patterns, the use of various procurement methods, and changes in the acquisition workforce. Federal agencies procured more than $235 billion in goods and services during fiscal year 2001, reflecting an 11 percent increase over the amount spent 5 years earlier. Further growth in contract spending, at least in the short term, is likely to increase given the President's request for additional funds for defense and homeland security, agencies' plans to update their information technology systems, and other factors. Overall, contracting for goods and services accounted for about 24 percent of the government's discretionary resources in fiscal year 2001. Federal agencies are taking advantage of the streamlined acquisition processes that were developed in the 1990s, including relying on contracts awarded by other federal agencies to obtain goods and services. 
The increase in the use of this acquisition method is driven largely by purchases of information technology and by professional, administrative, and management support services. Similarly, agencies are increasingly using purchase cards for many of their low dollar value procurements. Over the last decade, the federal acquisition workforce has had to adapt to changes in staffing levels, workloads, and the need for new skill sets. Procurement reforms have required contracting specialists to have a greater knowledge of market conditions, industry trends, and the technical details of the commodities and services they procure. A priority at most agencies we reviewed was attracting and retaining the right people with the right skills to successfully address the increasingly complex actions expected in the future. Many agencies have made progress with strategic human capital planning efforts. We reviewed 10 agencies that represent over 90 percent of the federal government's acquisition spending. All agencies provided comments on our report and concurred with our analyses.
The Census Bureau’s mission is to serve as the leading source of high-quality data about the nation’s people and economy. The Bureau’s core activities include conducting decennial, economic, and government censuses, conducting demographic and economic surveys, managing international demographic and socioeconomic databases, providing technical advisory services to foreign governments, and performing such other activities as producing official population estimates and projections. Conducting the decennial census is a major undertaking involving considerable preparation, which is currently under way. A decennial census involves identifying and correcting addresses for all known living quarters in the United States (known as “address canvassing”); sending questionnaires to housing units; following up with nonrespondents through personal interviews; identifying people with nontraditional living arrangements; managing a voluminous workforce responsible for follow-up activities; collecting census data by means of questionnaires, calls, and personal visits; tabulating and summarizing census data; and disseminating census analytical results to the public. The Bureau estimates that it will spend about $3 billion on automation and IT for the 2010 Census, including four major systems acquisitions that are expected to play a critical role in improving its coverage, accuracy, and efficiency. Figure 1 shows the key systems and interfaces supporting the 2010 Census; the four major IT systems involved in the acquisitions are highlighted. As the figure shows, these four systems are to play important roles with regard to different aspects of the process. To establish where to count (as shown in the top row of fig. 1), the Bureau will depend heavily on a database that provides address lists, maps, and other geographic support services. 
The Bureau’s address list, known as the Master Address File (MAF), is associated with a geographic information system containing street maps; this system is called the Topologically Integrated Geographic Encoding and Referencing (TIGER®) database. The MAF/TIGER database, highlighted in fig. 1, is the object of the first major IT acquisition—the MAF/TIGER Accuracy Improvement Project (MTAIP). The project is to provide corrected coordinates on a county-by-county basis for all current features in the TIGER database. The vital role of this database in the census operations is the reason that MTAIP is a key acquisition, even though it is relatively small in scale (compared with the other three key IT acquisitions) and will not result in new systems. To collect respondent information (see the middle row of fig. 1), the Bureau is pursuing two initiatives. First, the Field Data Collection Automation (FDCA) program is expected to provide automation support for field data collection operations as well as reduce costs and improve data quality and operational efficiency. This acquisition includes the systems, equipment, and infrastructure that field staff will use to collect census data, such as mobile computing devices. Second, the Decennial Response Integration System (DRIS) is to provide a system for collecting and integrating census responses from all sources, including forms, telephone interviews, and mobile computing devices in the field. DRIS is expected to improve accuracy and timeliness by standardizing the response data and providing it to other Bureau systems for analysis and processing. To provide results, the Data Access and Dissemination System II (DADS II) acquisition (see the bottom row of fig. 1) is to replace legacy systems for tabulating and publicly disseminating data. The DADS II program is expected to provide comprehensive support to DADS. 
Replacement of the legacy systems is expected to maximize the efficiency, timeliness, and accuracy of tabulation and dissemination products and services; minimize the cost of tabulation and dissemination; and increase user satisfaction with related services. Table 1 provides a brief overview of the four acquisitions. Responsibility for these acquisitions lies with the Bureau’s Decennial Management Division and the Geography Division. Each of the four acquisitions is managed by an individual project team staffed by Bureau personnel. Additional information on the contracts for these four systems is provided in appendix II. In preparation for the 2010 Census, the Bureau plans a series of tests of its operations and systems (new and existing) in different environments, as well as to conduct what it refers to as the Dress Rehearsal. During the Dress Rehearsal period, which runs from February 2006 through June 2009, the Bureau plans to conduct development and testing of systems, run a mock Census Day, and prepare for Census 2010, which will include opening offices and hiring staff. As part of the Dress Rehearsal activities, the Bureau began address canvassing in April 2007 and plans to distribute questionnaires in February 2008 in preparation for the mock Census Day on April 1, 2008. It plans to begin performing nonresponse follow-up activities immediately afterwards. These Dress Rehearsal activities are to provide an operational test of the available system functionalities, in a census-like environment, as well as other operational and procedural activities. We have previously reported on weaknesses in the Bureau’s IT acquisition management. In June 2005, we reported on the Bureau’s progress in five IT areas—investment management, systems development and management, enterprise architecture management, information security, and human capital. 
These areas are important because they have substantial influence on the effectiveness of organizational operations and, if implemented effectively, can reduce the risk of cost and schedule overruns and performance shortfalls. We reported that while the Bureau had many practices in place, much remained to be done to fully implement effective IT management capabilities. To improve the Bureau’s IT management, we made several recommendations. The Bureau agreed with the recommendations but is still in the process of implementing them. In March 2006, we presented testimony on the Bureau’s progress in implementing acquisition and management capabilities for two key IT system acquisitions for the 2010 Census—FDCA and DRIS. We testified that although the project offices responsible for these two contracts had carried out initial acquisition management activities, neither office had the full set of capabilities needed to effectively manage the acquisitions, including a full risk management process. Effective management of major IT programs requires that organizations use sound acquisition and management processes, including project and acquisition planning, solicitation, requirements development and management, and risk management. We recommended that the Bureau implement key activities needed to effectively manage acquisitions. For example, we recommended that the Bureau establish and enforce a system acquisition management policy that incorporates best practices, including those for risk management. The Bureau agreed with our recommendations and is in the process of implementing them. Three key systems acquisitions for the 2010 Census are in process, and a fourth contract was recently awarded. The ongoing acquisitions are showing mixed progress in providing deliverables while adhering to planned schedules and cost estimates. Currently, two of the three projects have experienced schedule delays, and the date for awarding the fourth contract was postponed several times. 
In addition, we estimate that one of the three ongoing projects (FDCA) will incur about $18 million in cost overruns. In response to schedule delays as well as other factors, including cost, the Bureau has made schedule adjustments and plans to delay certain system functionality. As a result, Dress Rehearsal operational testing will not address the full complement of systems and functionality that was originally planned, and the Bureau has not yet finalized its plans for further system tests. Delaying functionality increases the importance of system testing after the Dress Rehearsal operational testing to ensure that the decennial systems work as intended. MTAIP is a project to improve the accuracy of the MAF/TIGER database, which contains information on street locations, housing units, rivers, railroads, and other geographic features. MTAIP is to provide corrected coordinates on a county-by-county basis for all current features in the TIGER database. Features not now in TIGER are to be added with accurate coordinates and required attributes. Currently, the acquisition is in the second and final phase of its life cycle. During Phase I, from June 2002 through December 2002, the contractor identified technical requirements and established the production approach for Phase II activities. In Phase II, which began in January 2003 and is ongoing, the contractor is developing improved maps for all 3,037 counties in the United States; to date, it has delivered more than 75 percent of these maps, which are due by September 2008. Contract maintenance is to begin in fiscal year 2008. The contract closeout activities are scheduled for fiscal year 2009. MTAIP is on schedule to complete improvements by the end of fiscal year 2008 and is meeting cost estimates. The following is the status of MTAIP's schedule and cost estimates: The MTAIP acquisition is on schedule for the deliverables for Phases I and II.
According to Bureau documents, as of September 2006, the contractor (Harris Corporation) had delivered (as required) 2,000 improved county maps out of the 3,037. As of March 2007, Bureau documents showed that the contractor had completed 338 of the 694 counties expected to be complete by the end of fiscal year 2007. The contractor is scheduled to complete the remaining 356 counties by the end of fiscal year 2007. Cost estimates for Phase I and Phase II are $4.8 million and $205.2 million, respectively, for a total contract value of $210 million. The contract met cost estimates for Phase I, and based on cost performance reports, we project no cost overruns by September 2008. As of June 2007, the Bureau had obligated $178 million through September 2010. FDCA is to provide the systems, equipment, and infrastructure that field staff will use to collect census data. It is to establish office automation for the 12 regional census centers, the Puerto Rico area office, and approximately 450 temporary local census offices. It is to provide the telecommunications infrastructure for headquarters, regional and local offices, and mobile computing devices for field workers. FDCA also is to facilitate integration with other 2010 Census systems and to provide development, deployment, technical support, de-installation, and disposal services. At the peak of the 2010 Census, about 4,000 field operations supervisors, 40,000 crew leaders, 500,000 enumerators and address listers, and several thousand office employees are expected to use or access FDCA components. The FDCA acquisition is currently in the first phase of execution, having completed its baseline planning period in June 2006. The contractor is currently in the process of developing and testing FDCA software for the Dress Rehearsal Census Day. In future phases, the project will continue development, deploy systems and hardware, support census operations, and perform operational and contract closeout activities.
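The overrun projections drawn from the contractors' cost performance reports rest on standard earned value arithmetic: a cost performance index (CPI) computed from work performed and dollars spent, extrapolated to an estimate at completion. The sketch below illustrates that arithmetic; the dollar figures are hypothetical, chosen only to show the mechanics, and are not the actual MTAIP or FDCA values.

```python
# Sketch of the earned-value arithmetic behind cost-overrun projections
# from cost performance reports. All figures are illustrative examples,
# not the Bureau's or the contractors' actual data.

def cost_performance_index(earned_value, actual_cost):
    """CPI: value of work performed per dollar spent (above 1.0 is under cost)."""
    return earned_value / actual_cost

def estimate_at_completion(budget_at_completion, cpi):
    """Projected total cost, assuming current cost efficiency continues."""
    return budget_at_completion / cpi

def projected_overrun(budget_at_completion, earned_value, actual_cost):
    """Projected overrun (positive) or underrun (negative) at completion."""
    cpi = cost_performance_index(earned_value, actual_cost)
    return estimate_at_completion(budget_at_completion, cpi) - budget_at_completion

# Illustrative: a $600 million budget with $200 million of work
# earned for $206 million actually spent.
overrun = projected_overrun(600.0, 200.0, 206.0)
print(f"Projected overrun: ${overrun:.0f} million")  # → $18 million
```

Different assumptions about future efficiency (for example, blending CPI with schedule performance) yield the range of estimates analysts typically report, rather than a single point value.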
However, as shown in table 2, the Bureau revised its original schedule and delayed or eliminated some key functionality that was expected to be ready during Execution Period 1. The Bureau said it revised the schedule because it realized it had underestimated the costs for the early stages of the contract and that it could not meet the level of first-year funding, because the fiscal year 2006 budget was already in place. According to the Bureau, this initial underestimation led to schedule changes and overall cost increases. In the revised schedule, the Bureau delayed or eliminated some key functionality from the Dress Rehearsal, including the automated software distribution system. Further, the revised schedule expands software development from two to seven increments, delivered over a longer period of time. Delivery of these increments ranges from December 2006 through December 2008. As of May 2007, the contractor reported that the increment development schedule continues to be aggressive. The project is meeting all planned milestones on the revised schedule. The contractor has delivered 1,388 mobile computing devices to be used in address canvassing for the Dress Rehearsal. Also, key FDCA support infrastructure has been installed, including the Network Operations Center, Security Operations Center, and the Data Processing Centers. According to the department, all Regional Census Centers and Puerto Rico area offices have been identified and are on schedule to open in January 2008. The project life-cycle costs have already increased. At contract award in March 2006, the total cost of FDCA was estimated not to exceed $596 million. However, in September 2006, the project life-cycle cost was increased to about $624 million. In May 2007, the life-cycle cost rose by a further $23 million because of increasing system requirements, which resulted in an estimated life-cycle cost of about $647 million. Table 3 shows the current life-cycle cost estimates for FDCA.
In addition, the FDCA project has already experienced $6 million in cost overruns, and more are expected. Both our analysis and the contractor's analysis project that FDCA will experience additional cost overruns. Based on our analysis of cost performance reports (from July 2006 to May 2007), we project that the FDCA project will experience further cost overruns by December 2008. We estimate the FDCA cost overrun at between $15 million and $19 million, with the most likely overrun being about $18 million. Harris, in contrast, estimates about a $6 million overrun by December 2008. According to Harris, the major cause of projected cost overruns is the system requirements definition process. For example, in December 2006, Harris indicated that the requirements for the Dress Rehearsal Paper Based Operations in Execution Period 1 had increased significantly. According to the cost performance reports, this increase has meant that more work must be conducted and more staffing assigned to meet the Dress Rehearsal schedule. The schedule changes to FDCA have increased the likelihood that the systems testing at the Dress Rehearsal will not be as comprehensive as planned. The inability to perform comprehensive operational testing of all interrelated systems increases the risk that further cost overruns will occur and that decennial systems will experience performance shortfalls. DRIS is to provide a system for collecting and integrating census responses, standardizing the response data, and providing it to other systems for analysis and processing. The DRIS functionality is critical for providing assistance to the public via telephone and for monitoring the quality and status of data capture operations. The DRIS acquisition is currently in the first of three overlapping project phases. In Phase I, which extends from March 2006 to September 2008, the project is performing software development and testing of DRIS.
By December 2007, it is to provide an initial system to be used for the Dress Rehearsal Census Day, during which DRIS will process 14 census forms (out of 84 possible forms). In October 2007, the project is to begin Phase II, in which it is to deploy the completed system and perform other activities to support census operations. The final phase is to be devoted to data archiving and equipment disposal. Although DRIS is currently on schedule to meet its December 2007 milestone, the Bureau revised the original DRIS schedule after the contract was awarded in October 2005. Under the revised schedule (see table 4), the Bureau delayed or eliminated some functionality that was expected to be ready for the Dress Rehearsal Census Day. According to Bureau officials, they delayed the schedule and eliminated functionality for DRIS when they realized they had underestimated the fiscal year 2006 through 2008 costs for development. As shown in table 5, the government’s funding estimates for DRIS Phase I were significantly lower than the contractor’s. Originally, the DRIS solution was to include paper, telephone, Internet, and field data collection processing; selection of data capture sites; and preparation and processing of 2010 Census forms. However, the Bureau reduced the scope of the solution by eliminating the Internet functionality. In addition, the Bureau has stated that it will not have a robust telephone questionnaire assistance system in place for the Dress Rehearsal. The Bureau is also delaying selecting sites for data capture centers, preparing data capture facilities, and recruiting and hiring data capture staff. Although Bureau officials told us that the revisions to the schedule should not affect meeting milestones for the 2010 Census, the delays mean that more systems development and testing will need to be accomplished later. 
Given the immovable deadline of the decennial census, the Bureau is at risk of reducing functionality or increasing costs to meet its schedule. The government’s estimate for the DRIS project was $553 million through the end of fiscal year 2010. In October 2005, at contract award, the Phase I and Phase II value was $484 million. The DRIS project is not experiencing cost overruns, and our analysis of cost performance reports from April 2006 to May 2007 projects no cost overruns by December 2008. As of May 2007, the Bureau had obligated $37 million, and the project was 44 percent completed. As of May 2007, the DRIS contract value had not increased. The DADS II acquisition is to replace the legacy DADS systems, which tabulate and publicly disseminate data from the decennial census and other Bureau surveys. The DADS II contractor is also expected to provide comprehensive support to the Census 2000 legacy DADS systems. In January 2007, the Bureau released the DADS II request for proposal. The contract was awarded in September 2007. However, the Bureau had delayed the DADS II contract award date multiple times. The award date was originally planned for the fourth quarter of 2005, but the date was changed to August 2006. On March 8, 2006, the Bureau estimated it would delay the award of the DADS II contract from August to October 2006 to gain a clearer sense of budget priorities before initiating the request for proposal process. The Bureau then delayed the contract award again by about another year. Because of these delays, DADS II will not be developed in time for the Dress Rehearsal. Instead, the Bureau will use the legacy DADS system for tabulation during the Dress Rehearsal. However, the Bureau’s plan is to have the DADS II system available for the 2010 Census. No cost information on the DADS II contract was available because it was recently awarded. Operational testing helps verify that systems function as intended in an operational environment. 
For system testing to be comprehensive, system functionality must be completed. Further, for multiple interrelated systems, end-to-end testing is performed to verify that all interrelated systems, including any external systems with which they interface, are tested in an operational environment. However, as described above, two of the projects have delayed planned functionality to later phases, and one project contract was recently awarded (September 2007). As a result, the operational testing that is to occur during the Dress Rehearsal period around April 1, 2008, will not include tests of the full complement of decennial census systems and their functionality. According to Bureau officials, they have not yet finalized their plans for system tests. If further delays occur, the importance of these system tests will increase. Delaying functionality and not testing the full complement of systems increase the risk that costs will rise further, that decennial systems will not perform as expected, or both. The project teams varied in the extent to which they followed disciplined risk management practices. For example, three of the four project teams had developed strategies to identify the scope of the risk management effort. However, three project teams had weaknesses in identifying risks, establishing adequate mitigation plans, and reporting risk status to executive-level officials. These weaknesses in completing key risk management activities can be attributed in part to the absence of Bureau policies for managing major acquisitions, as we described in our earlier report. Without effective risk management practices, the likelihood of project success is decreased. According to the Software Engineering Institute (SEI), the purpose of risk management is to identify potential problems before they occur. 
When problems are identified, risk-handling activities can be planned and invoked as needed across the life of a project in order to mitigate adverse impacts on objectives. Effective risk management involves early and aggressive risk identification through the collaboration and involvement of relevant stakeholders. Based on SEI's Capability Maturity Model® Integration (CMMI), risk management activities can be divided into four key areas (see fig. 2): preparing for risk management, identifying and analyzing risks, mitigating risks, and executive oversight. The discipline of risk management is important to help ensure that projects are delivered on time, within budget, and with the promised functionality. It is especially important for the 2010 Census, given the immovable deadline. Risk preparation involves establishing and maintaining a strategy for identifying, analyzing, and mitigating risks. The risk management strategy addresses the specific actions and management approach used to perform and control the risk management program. It also includes identifying and involving relevant stakeholders in the risk management process. Table 6 shows the status of the four project teams' implementation of key risk preparation activities. As the table shows, three project teams have established most of the risk management preparation activities. However, the MTAIP project team implemented the fewest practices. The team did not adequately determine risk sources and categories, or adequately develop a strategy for risk management. As a result, the project's risk management strategy is not comprehensive and does not fully address the scope of the risk management effort, including discussing techniques for risk mitigation and defining adequate risk sources and categories. In addition, three project teams (MTAIP, FDCA, and DADS II) had weaknesses regarding stakeholder involvement.
The three teams did not provide sufficient evidence that the relevant stakeholders were involved in risk identification, analysis, and mitigation activities; reviewing the risk management strategy and risk mitigation plans; or communicating and reporting risk management status. In addition, the FDCA project team had not identified relevant stakeholders. These weaknesses can be attributed in part to the absence of Bureau policies for managing major acquisitions, as we described in our earlier reports. Without adequate preparation for risk management, including establishing an effective risk management strategy and identifying and involving relevant stakeholders, project teams cannot properly control the risk management process. Risks must be identified and described in an understandable way before they can be analyzed and managed properly. This includes identifying risks from both internal and external sources and evaluating each risk to determine its likelihood and consequences. Analyzing risks includes risk evaluation, categorization, and prioritization; this analysis is used to determine when appropriate management attention is required. Table 7 shows the status of the four project teams’ implementation of key risk identification and evaluation activities. As of July 2007, the MTAIP and DRIS project teams were adequately identifying and documenting risks, including system interface risks. For example, these teams were able to identify the following: The MTAIP project identified significant risks regarding potential changes in funding and the turnover of contractor personnel as the program nears maturity. The DRIS project identified significant risks regarding new system security regulations, changes or increases to Phase II baseline requirements, and new interfaces after Dress Rehearsal. However, the FDCA and DADS II project teams did not identify all risks, including specific system interface risks. 
For example: The FDCA project had not identified, for the project office to monitor and track, any significant risks related to the handheld mobile computing devices, despite problems arising during the recent address canvassing component of the Dress Rehearsal. However, it did identify significant risks for the contractor to manage; these risks were associated with using the handheld mobile computing devices, including usability and failure rates. Responsibility for mitigating these risks was transferred to the contractor. The FDCA and DADS II projects did not provide evidence that specific system interface risks are being adequately identified to ensure that risk-handling activities will be invoked should the systems fail during the 2010 Census. For example, although DADS II will not be available for the Dress Rehearsal, the project team did not identify any significant interface risks associated with this system. One reason for these weaknesses, as mentioned earlier, is the absence of Bureau policies for managing major acquisitions. Failure to adequately identify and analyze risks could prevent management from taking the appropriate actions to mitigate those risks; this increases the probability that the risks will materialize and magnifies the extent of damage incurred in such an event. Risk mitigation involves developing alternative courses of action, workarounds, and fallback positions, with a recommended course of action for the most important risks to the project. Mitigation includes techniques and methods used to avoid, reduce, and control the probability of occurrence of the risk; the extent of damage incurred should the risk occur; or both. Examples of activities for mitigating risks include documented handling options for each identified risk; risk mitigation plans; contingency plans; a list of persons responsible for tracking and addressing each risk; and updated assessments of risk likelihood, consequence, and thresholds.
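The analysis and mitigation practices described above—scoring each risk's likelihood and consequence, prioritizing by the resulting exposure, and defining thresholds that trigger a mitigation plan—can be illustrated with a minimal risk-register sketch. The risk names, scores, and thresholds below are hypothetical examples of the kinds of risks discussed in this report, not entries from the Bureau's or contractors' actual registers.

```python
# Minimal risk-register sketch of CMMI-style risk analysis:
# exposure = likelihood x consequence; risks above a threshold
# trigger their mitigation plan. All entries are hypothetical.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float         # probability of occurrence, 0.0 to 1.0
    consequence: int          # impact if realized, 1 (low) to 5 (high)
    trigger_threshold: float  # exposure at which mitigation is invoked

    @property
    def exposure(self) -> float:
        return self.likelihood * self.consequence

def prioritize(risks):
    """Rank risks by exposure so management attention goes to the worst first."""
    return sorted(risks, key=lambda r: r.exposure, reverse=True)

def needs_mitigation(risk):
    """True once exposure reaches the threshold defined in the mitigation plan."""
    return risk.exposure >= risk.trigger_threshold

register = [
    Risk("Handheld device failure rate", 0.4, 5, 1.5),
    Risk("Undefined system interface", 0.6, 4, 1.5),
    Risk("Contractor staff turnover", 0.3, 2, 1.5),
]

for risk in prioritize(register):
    action = "invoke mitigation plan" if needs_mitigation(risk) else "monitor"
    print(f"{risk.name}: exposure {risk.exposure:.1f} -> {action}")
```

The thresholds are the piece the report faults several mitigation plans for lacking: without them, there is no defined point at which a worsening risk forces the plan to be executed.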
Table 8 shows the status of the four project teams’ implementation of key risk mitigation activities. Three project teams (MTAIP, FDCA, and DADS II) developed mitigation plans that were often untimely or included incomplete activities and milestones for addressing the risks. Some of these untimely and incomplete activities and milestones included the following: Although the MTAIP project team developed mitigation plans, the plans were not comprehensive and did not include thresholds defining when risk becomes unacceptable and should trigger the execution of the mitigation plan. The FDCA project team had developed mitigation plans for the most significant risks, but the plans did not always identify milestones for implementing mitigation activities. Moreover, the plans did not identify any commitment of resources, several did not establish a period of performance, and the team did not always update the plans with the latest information on the status of the risk. In addition, the FDCA project team did not provide evidence of developing mitigation plans to handle the other significant risks as described in their risk mitigation strategy. (These risks included a lack of consistency in requirements definition and insufficient FDCA project office staffing levels.) The mitigation plans for DADS II were incomplete, with no associated future milestones and no evidence of continual progress in working towards mitigating a risk. In several instances, DADS II mitigation plans were listed as “To Be Determined.” With regard to the second practice in the table (periodically monitoring risk status and implementing mitigation plans), the MTAIP, FDCA, and DADS II project teams were not always implementing the mitigation plans as appropriate. 
For example, although the MTAIP project team has periodically monitored the status of risks, its mitigation plans do not include detailed action items with start dates and anticipated completion dates; thus, the plans do not ensure that mitigation activities are implemented appropriately and tracked to closure. The FDCA and DADS II project teams did not identify system interface risks nor prepare adequate mitigation plans to ensure that systems will operate as intended. In addition, the DADS II risk reviews showed no evidence of developing risk- handling action items, tracking any existing open risk-handling action items, or regularly discussing mitigation steps with other risk review team members. Because they did not develop complete mitigation plans, the MTAIP, FDCA, and DADS II project teams cannot ensure that for a given risk, techniques and methods will be invoked to avoid, reduce, and control the probability of occurrence. Reviews of the project teams’ risk management activities, status, and results should be held on a periodic and event-driven basis. The reviews should include appropriate levels of management, such as key Bureau executives, who can provide visibility into the potential for project risk exposure and appropriate corrective actions. Table 9 shows the status of the four project teams’ implementation of activities for senior-level risk oversight. The project teams were inconsistent in reporting the status of risks to executive-level officials. DRIS and DADS II did regularly report risks; however, the FDCA and MTAIP projects did not provide sufficient evidence to document that these discussions occurred or what they covered. Although presentations were made on the status of the FDCA and MTAIP projects to executive-level officials, presentation documents did not include evidence of discussions of risks and mitigation plans. 
Failure to report a project’s risks to executive-level officials reduces the visibility of risks to executives who should be playing a role in mitigating them. The IT acquisitions planned for 2010 Census will require continued oversight to ensure that they are achieved on schedule and at planned cost levels. Although the MTAIP and DRIS acquisitions are currently meeting cost estimates, FDCA is not. In addition, while the Bureau is making progress developing systems for the Dress Rehearsal, it is deferring certain functionality, with the result that the Dress Rehearsal operational testing will address less than a full complement of systems. Delaying functionality increases the importance of later development and testing activities, which will have to occur closer to the census date. It also raises the risk of cost increases, given the immovable deadline for conducting the 2010 Census. The Bureau’s project teams for each of the four acquisitions have implemented many practices associated with establishing sound and capable risk management processes, but they are not always consistent: the teams have not always identified risks, developed complete risk mitigation plans, or briefed senior-level officials on risks and mitigation plans. Among risks that were not identified are those associated with the FDCA mobile computing devices and systems testing. Also, mitigation plans were often untimely or incomplete. Further, no evidence was available of senior-level briefings to discuss risks and mitigation plans. One reason for these weaknesses is the absence of Bureau policies for managing major acquisitions, as we pointed out in earlier work. Until the project teams and the Decennial Management Division implement appropriate risk management activities, they face an increased probability that decennial systems will not be delivered on schedule and within budget or perform as expected. 
To ensure that the Bureau’s four key acquisitions for the 2010 Census operate as intended, we are making four recommendations. First, to ensure that the Bureau’s decennial systems are fully tested, we recommend that the Secretary of Commerce require the Director of the Census Bureau to direct the Decennial Management Division and Geography Division to plan for and perform end-to-end testing so that the full complement of systems is tested in a census-like environment. To strengthen risk management activities for the decennial census acquisitions, the Secretary should also direct the Director of the Census Bureau to ensure that project teams identify and develop a comprehensive list of risks for the acquisitions, particularly those for system interfaces and mobile computing devices, and analyze them to determine probability of occurrence and appropriate mitigating actions; develop risk mitigation plans for the significant risks, including defining the mitigating actions, milestones, thresholds, and resources; and provide regular briefings on significant risks to senior executives, so that they can play a role in mitigating these risks. We are not making recommendations at this time regarding the Bureau’s policies for managing major acquisitions, as we have already done so in previous reports. In response to a draft of this report, the Under Secretary for Economic Affairs of Commerce provided written comments from the department. These comments are reproduced in appendix III. The department disagreed with our conclusion about operational testing during the 2008 Dress Rehearsal. According to the department, although some minimal functionalities are not a part of the Dress Rehearsal, all critical systems and interfaces would be tested during the 2008 Dress Rehearsal. It planned to conduct additional fully integrated testing of all systems and interfaces after the Dress Rehearsal, including the functionalities not included in the Dress Rehearsal itself. 
It also planned to incorporate lessons learned from the Dress Rehearsal in this later testing. Nonetheless, the Bureau's test plans have not been finalized. Further, the Dress Rehearsal will not include two critical systems (the DRIS telephone system and the DADS II tabulation system). Thus, it remains unclear whether testing will in fact address all interrelated systems and functionality in a census-like environment. Consistent with our recommendation, following up with documented test plans to do end-to-end testing would help ensure that decennial systems will work as intended. With regard to risk management, the department said it plans to examine additional ways to manage risks and will prepare a formal action plan in response to our final report. However, it disagreed with our assessment with regard to risk identification, pointing out that one project identified risks associated with handheld mobile computing devices and assigned responsibility for these to the contractor. In addition, the project identified systems interfaces as a risk. However, the project did not identify significant risks for the project office to monitor and track related to problems arising during the address canvassing component of the Dress Rehearsal. Also, although this project identified a general risk related to system interfaces, it did not identify specific risks related to particular interfaces. The department also provided technical comments that we incorporated where appropriate. We are sending copies of this report to the Chairman and Ranking Member of the Committee on Homeland Security and Governmental Affairs. We are also sending copies to the Secretary of Commerce, the Director of the U.S. Census Bureau, and other appropriate congressional committees. We will make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact David A.
Powner at (202) 512-9286 or pownerd@gao.gov or Madhav S. Panwar at (202) 512-6228 or panwarm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) determine the status and plans, including schedule and costs, for four key information technology (IT) acquisitions, and (2) assess whether the Census Bureau is adequately managing the risks facing these key system acquisitions. To determine the status and plans, we reviewed documents related to the major 2010 Census acquisitions, including requests for proposals, acquisition contracts, project plans, schedules, cost estimates, program review reports, earned value management data, test plans, and other acquisition-related documents. We analyzed earned value management data obtained from the contractors to assess the contractors’ cost and schedule performance. We also interviewed program officials to determine the current status of the acquisitions’ schedules and cost estimates. To assess the status of risk management, we evaluated the practices for key areas (establishing a risk strategy, risk identification, mitigation, and reporting) and compared these to industry standards—specifically, the Capability Maturity Model Integration (CMMI). The CMMI model was developed by Carnegie Mellon University’s Software Engineering Institute (SEI) and includes criteria to evaluate risk management for development and maintenance activities. We adapted these CMMI criteria and performed a Class B Standard CMMI Appraisal Method for Process Improvement to evaluate the risk management of program teams and contractors involved in the decennial system acquisitions and development initiatives. In doing so, we selected leading practices within the areas of preparing for risk management, identifying and analyzing risks, mitigating risks, and executive oversight.
We evaluated the practices as fully implemented, partially implemented, or not implemented. Specifically, a blank circle indicates that practices are not performed at all or are performed on a predominantly ad hoc basis; a half circle indicates that while selected key practices have been performed, others remain to be implemented; and a solid circle indicates that practices adhere to industry standards. To evaluate the extent to which the Bureau and contractors followed these leading practices, we reviewed relevant documents such as risk management plans, risk reports, mitigation plans, and meeting minutes from risk review meetings; we also interviewed knowledgeable officials about their risk management activities. Specifically, we met with project team officials for the four key decennial system acquisitions and their primary contractors (Harris Corporation and Lockheed Martin), as applicable. We also reviewed the lists of risks identified by each of the project teams and their primary contractors and assessed their accuracy and completeness, including whether the risks were associated with the acquisition’s development plans. We conducted our work from December 2006 through August 2007 in the Washington, D.C., metropolitan area in accordance with generally accepted government auditing standards. The following are GAO’s comments on the department’s letter dated September 25, 2007. 1. Although the department states that it plans to test all critical systems and interfaces either during or after the Dress Rehearsal, we are aware of two critical systems (the DRIS telephone system and the DADS II tabulation system) that are not to be included in the Dress Rehearsal, and the Bureau’s plans are not yet finalized. As a result, we stand by our characterization that operational testing would take place during the Dress Rehearsal without the full complement of systems and functionality originally planned.
Consistent with our recommendation, following up with documented test plans to do end-to-end testing would help ensure that systems work as intended.

2. The department said that our statement could be interpreted to mean that cost increases resulted from an increase in the number of system requirements. It said this is not entirely accurate because although some requirements were added (generally related to security), other cost increases were due to the process of developing detailed requirements from high-level functional requirements. However, it is our view that the process of developing detailed requirements from high-level functional requirements does not inevitably lead to cost increases if the functional requirements were initially well-defined.

3. See comment 1.

4. We have modified our report to reflect this additional information. However, although our discussion of schedule and cost changes preceded our discussion of risk management, we did not intend to imply that risk management weaknesses had contributed to these changes. We revised our report to help clarify this.

5. We have revised our report to clarify the use of automation for data collection for all FDCA components.

6. See comment 1.

7. We agree that this statement is referring to all FDCA equipment, infrastructure, and systems.

8. We have revised our report to update the status of the systems.

9. We have revised our report to reflect the status of the office site selections.

10. See comment 2.

11. We disagree with the department’s comment that “cost overrun” refers to a contractor originally underestimating costs. We use “cost overrun” to refer to any increase in costs from original estimates.

12. We have revised our report to reflect this information.

13. We have revised our report to add this information.

14. We agree that the FDCA project identified certain risks, as the department describes.
However, although it identified risks associated with handheld mobile computing devices and assigned responsibility for these to the contractor, it did not identify significant risks for the project office to monitor and track related to problems arising during the address canvassing component of the Dress Rehearsal. In addition, although this project identified interface management as a risk, it did not identify specific risks related to other systems. Accordingly, although we modified our report to reflect this information, we did not change our overall evaluation.

15. The department stated that for the FDCA and MTAIP projects, risk status is regularly discussed with executive-level officials at Commerce and the Bureau, and that it provided us with briefing slides to support this statement. It said that it also uses other communication channels to report project issues and risks. However, the evidence provided did not show that FDCA and MTAIP risks were regularly discussed with executive-level officials. For example, while the FDCA project provided two presentations in October 2006 and March 2007, these presentations did not include discussions of risk and mitigation plans. Similarly, our review of the MTAIP project teams’ presentations during quarterly reviews did not show that risk status was discussed. Therefore, we still conclude that these projects, unlike the other two, did not have sufficient evidence that executive-level officials were being regularly briefed on risk status.

16. See comment 2.

17. We have revised our report to reflect this information.

In addition to the contacts named above, individuals making contributions to this report included Cynthia Scott (Assistant Director), Mathew Bader, Carol Cha, Barbara Collier, Neil Doherty, Karl Seifert, Niti Tandon, Amos Tevelow, and Jonathan Ticehurst.
|
Automation and information technology (IT) are expected to play a critical role in the 2010 decennial census. The Census Bureau plans to spend about $3 billion on automation and technology that are to improve the accuracy and efficiency of census collection, processing, and dissemination. The Bureau is holding what it refers to as a Dress Rehearsal, during which it plans to conduct operational testing that includes the decennial systems. In view of the importance of IT acquisitions to the upcoming census, GAO was asked to (1) determine the status and plans for four key IT acquisitions, including schedule and cost, and (2) assess whether the Bureau is adequately managing associated risks. To achieve its objectives, GAO analyzed acquisition documents and the projects' risk management activities and compared these activities to industry standards. Three key systems acquisitions for the 2010 Census are in process, and a fourth contract was recently awarded. The ongoing acquisitions show mixed progress in meeting schedule and cost estimates. Currently, two of the projects are not on schedule, and the Bureau plans to delay certain functionality. The fourth contract, originally scheduled to be awarded in 2005, was awarded in September 2007. In addition, one project has incurred cost overruns and increases to its projected life-cycle cost. As a result of the schedule changes, the full complement of systems and functionality that were originally planned will not be available for the Dress Rehearsal operational testing. This limitation increases the importance of further system testing to ensure that the decennial systems work as intended. The Bureau's project teams for each of the four IT acquisitions have performed many practices associated with establishing sound and capable risk management processes, but critical weaknesses remain. Three project teams had developed a risk management strategy that identified the scope of the risk management effort.
However, not all project teams had identified risks, established mitigation plans, or reported risks to executive-level officials. For example, one project team did not adequately identify risks associated with performance issues experienced by mobile computing devices. In addition, three project teams developed mitigation plans that were often untimely or included incomplete activities and milestones for addressing the risks. Until the project teams implement key risk management activities, they face an increased probability that decennial systems will not be delivered on schedule and within budget or perform as expected.
|
Established as a national program in the mid-1970s, the WIC program is intended to improve the health status of low-income pregnant and postpartum women, infants, and young children by providing supplemental foods and nutrition education. Pregnant and postpartum women, infants, and children up to age 5 are eligible for WIC if they are found to be at nutritional risk and have incomes below certain thresholds. WIC participation has been shown to improve birth and dietary outcomes and contain health care costs, and USDA considers WIC to be one of the nation’s most successful and cost-effective nutrition intervention programs. In passing the legislation that created WIC, Congress intended the program to assist participants during critical times of growth and development. WIC participants typically receive food benefits—which may include infant formula—in the form of vouchers or checks that can be redeemed at state-authorized retail vendors. USDA has established seven food packages that are designed for different categories and nutritional needs of WIC participants (see fig. 1). The authorized foods must be prescribed from the food packages according to the category and nutritional needs of the participants. Because multiple members of a family may be eligible to receive WIC benefits, individual families could receive more than one food package. In 2007, USDA issued interim regulations that implemented the first comprehensive revisions to the WIC food packages since 1980. After considering comments received in response to the interim regulations, FNS issued final regulations in 2014. The revisions aligned the food packages with the 2005 Dietary Guidelines for Americans and infant feeding practice guidelines of the American Academy of Pediatrics, and were largely based on recommendations of the IOM, which FNS commissioned to review the food packages. Infants who are not exclusively breastfeeding can receive formula from WIC until they turn 1 year of age.
However, WIC’s authorizing legislation requires that the nutrition education provided to participants include breastfeeding promotion and support. In addition, the Child Nutrition Amendments of 1992 required that USDA establish a program to promote breastfeeding as the best method for infant nutrition. Accordingly, USDA, through regulations and guidance, has emphasized the importance of encouraging participating mothers to breastfeed. WIC regulations require state and local agencies to create policies and procedures to ensure that breastfeeding support and assistance are provided to WIC participants throughout the prenatal and postpartum period. USDA’s role in operating WIC is primarily to provide funding and oversight, and state and local WIC agencies are charged with carrying out most administrative and programmatic functions of the program. Specifically, USDA provides grants to state agencies, which use the funds to reimburse authorized retail vendors for the food purchased by WIC participants and to provide services. As part of its monitoring and oversight obligations, USDA annually reviews the state plan for each state WIC agency, which describes the agency’s objectives and procedures for all aspects of administering WIC for the coming fiscal year. USDA’s review of state plans is one of the federal oversight mechanisms for the program, and state plans provide important information about how states administer WIC. For their part, state agencies are responsible for developing WIC policies and procedures within federal requirements, entering into agreements with local agencies to operate the program, and monitoring and overseeing its implementation by these local agencies. WIC regulations define participant violations, which include the sale of WIC food benefits, and require state agencies to establish procedures to control participant violations. 
However, the regulations do not specify what steps states should take to prevent participant violations, such as methods for identifying attempted sales of WIC food benefits. The regulations also require state agencies to establish sanctions for participant violations, and require mandatory sanctions for certain types of violations. For example, when a state establishes a claim of $100 or more against a participant who improperly disposed of program benefits—such as through online sales—it must disqualify the participant for 1 year. Beyond these mandatory sanctions, in most cases the regulations do not specify how severely states should sanction participants for particular violations. The WIC oversight structure is part of the program’s internal controls, which are an integral component of management. Internal control is not one event, but a series of actions and activities that occur on an ongoing basis. As programs change and as agencies strive to improve operational processes and implement new technological developments, management must continually assess and evaluate its internal control to assure that the control activities being used are effective and updated when necessary. Management should design and implement internal control based on the related costs and benefits. Effective internal controls include: (1) communicating information to management and others to enable them to carry out internal control and other responsibilities and (2) assessing the risks agencies face from both external and internal sources. In the past decade, USDA has taken steps to better tailor WIC food packages containing formula to the nutritional needs of participating infants. After reviewing the nutritional needs of WIC participants and food packages in a study contracted by USDA, the IOM recommended changes to the packages—including those containing formula.
USDA issued revised regulations in 2007—which state agencies were required to implement at the start of fiscal year 2010—that adopted many of the recommendations from the IOM report to further encourage and support breastfeeding among participating mothers. According to USDA, the revised food packages were developed to better reflect current nutrition science and dietary recommendations than the food packages they were replacing. Among other changes, the revised regulations reduced the amount of formula provided to partially breastfed infants of all ages, delayed when partially breastfed infants may begin receiving formula, and reduced the amount of formula provided to older fully formula-fed infants. Amount of formula for partially breastfed infants: The revised regulations authorize maximum formula amounts for infant participants that vary by the extent to which the infant is breastfed (see fig. 2). Under the previous regulations, partially breastfed infants could receive up to the same amount of formula as those who were fully formula-fed. Under the revised regulations, “partially breastfed,” for the purposes of assigning the WIC food package, means an infant who is breastfed but also receives formula from the WIC program up to a maximum amount that is approximately half the amount of formula allowed for a fully formula-fed infant. USDA stated that this new category is intended to provide stronger incentives for continued breastfeeding, such as by providing partially breastfeeding mothers with additional quantities and types of supplemental foods—such as whole wheat bread or other whole grains—that are not provided to non-breastfeeding mothers. Because partially breastfed infants require less formula than fully formula-fed infants, this change may have reduced the potential for waste in the program.
Age at which partially breastfed infants receive formula: To further support the successful establishment of breastfeeding, the revised regulations disallow the routine issuance of formula to partially breastfeeding infants during the first month after birth. While mothers of partially breastfed infants are not to automatically receive formula, local agency staff have some discretion and may determine, based on an assessment of the nutritional needs of the infant, that it is appropriate to make a small amount of formula available. According to the IOM, the fact that partially breastfed infants, under previous regulations, could receive up to the same amount of formula from birth as fully formula-fed infants may have interfered with mothers’ milk production and success at continued breastfeeding. Between implementation of the revised regulations in fiscal year 2010 and fiscal year 2014, the number of infant participants who were either partially or fully breastfed increased nearly 4 percent (see fig. 3). Formula amounts for older fully formula-fed infants: Under the revised regulations, fully formula-fed infants 6 through 11 months of age are authorized to receive less formula than they were authorized to previously. In addition, to introduce infants of this age to a variety of nutritious foods, the revised regulations, for the first time, added infant food fruits and vegetables to their food packages. Because the addition of fruits and vegetables provides infants greater amounts of nutrients, in its 2005 report, IOM determined that less formula is needed for infants that receive these foods. According to USDA, the revised maximum formula amount authorized for these infants is based on a scientific review of the calorie and nutrient needs of infants at this age. 
While federal regulations specify the maximum amount of formula different categories of infants are authorized to receive, state and local agency staff have some flexibility in determining precise amounts to provide, depending on an infant’s nutritional needs. Staff at local WIC agencies play a critical role in determining infants’ feeding categories, and they have the authority to provide them with less formula than the maximum amount allowed for each category, if nutritionally warranted. Nutrition specialists, such as physicians or nutritionists, working at the local agency perform nutritional assessments for prospective participants as part of certification procedures. They use the nutritional assessment information to appropriately target food packages to participants. WIC staff also provide regular nutrition and breastfeeding education services to established participants. In the guidance USDA issued in 2009 to assist state and local agency staff in implementing the provisions of the new food package regulations, staff were directed to tailor the amount of infant formula to the assessed needs of breastfed infants rather than to routinely issue food packages with standard quantities of infant formula to these infants. Further, the guidance stated that the maximum amount of formula authorized in the regulations is rarely warranted for partially breastfed infants. Even with USDA’s guidance, it is possible that some participants receive formula that they do not need or cannot use, and state and local policies on how participants should handle excess formula vary. WIC provided formula to a monthly average of nearly 1.8 million infant participants in 2013, according to USDA. Although federal regulations define the sale of WIC food benefits as a participant violation, one USDA official told us there is no federal guidance that addresses how local agencies should instruct participants to handle unused formula. 
Rather, according to USDA, state, and local WIC officials, state and local agencies establish their own policies and procedures. Officials from 7 of the 12 states we spoke with told us that participants are instructed to return unused formula to local agencies. Further, officials from the 2 local agencies we spoke with explained that it is relatively common for participants to return infant formula to their local agency, and one added that participants often return formula for health reasons, as pediatricians sometimes prescribe specialized formula because of food allergies. Officials we spoke to from two states said that the returned formula is given to participants who have lost their benefit vouchers. Officials from three other states said that returned formula is donated to food banks or hospitals. USDA does not have data that can be used to determine the national extent of online sales of WIC formula; however, officials in 5 of the 12 states we spoke to said that they have found WIC formula offered for sale online by some participants. USDA officials told us that the department has not conducted a comprehensive study to assess the national extent of online sales. According to the officials, the department does not collect data on this issue, in part because it is not the department’s responsibility to sanction WIC participants for program violations. Rather, it is the responsibility of state agencies to establish procedures to prevent and address participant violations, including attempts to sell WIC food benefits. Officials from 9 of the 12 states we spoke to mentioned that the procedures they have established to identify this violation include regularly monitoring online advertisements. Officials in 3 of these states said that through their monitoring efforts, they have found fewer than 0.5 percent of their WIC participants attempting to sell WIC infant formula online.
Officials in 2 other states did not estimate percentages but stated that the incidence is low. Consistent with these state accounts, our own monitoring of a popular e-commerce website for 30 days in four large metropolitan areas found few posts in which individuals explicitly stated they were attempting to sell WIC-provided formula. Specifically, we identified 2,726 posts that included the term “formula,” and 2 of these posts explicitly stated that the origin of the formula was WIC. In both posts, the users indicated they were selling the WIC formula because they had switched to different brands of formula. A posting from late June 2014 included the container size in the title and stated: “I am looking to sell 5 [brand name] 12.5oz cans (NOT OPENED) because is super picky and does not want to drink it no matter what i do. will drink the kind for some reason. I told my WIC office to switch me to another brand but they say it might take 3 months. Im asking 35$ but best offer will do since the brand I buy is from so Im not looking to make a profit here if you consider each can is 16$ at the store. please text if interested!!” A posting from early July 2014 included the brand, type, and container size in the title and stated: “I have 7 powder cans of they dnt expire for another year at least just got them from my wic n we ended up switching formulas so its $65.oo for pick up all 7 cans or $70 if i have to drive.” We also identified 346 posts that matched three criteria: the contract brand, type, and container size of formula provided to fully formula-fed WIC infant participants each month, averaged across all ages. Beyond the 346 posts that matched these three criteria, we found another 135 that met at least one, but not all, of the criteria. However, since we did not investigate any of these posts further, we do not know if any or all of these 481 posts were attempts to sell WIC formula. A posting from mid-June 2014 stated: “$10 a can! 14 -12.9 oz Cans of [brand name and type] Formula. Expiration Date is - July 1, 2015. Please take it all. I will not separate the formula! NOT FROM WIC!!!
is now 14 months and no longer needs this. Email only please.” A posting from mid-June 2014: “ Turn A Year Already, and we Just bought her 7 Brand New Cans of . She no longer needs Formula. Selling each Can for $10. Brand New, NOT Open. 12.4 Oz. EXP. 1 March. 2016.” Through our monitoring efforts, and through interviews with USDA and state and local WIC officials, we identified a number of key challenges associated with distinguishing between WIC-obtained formula sales and other sales:

• Each state’s specific WIC-contracted formula brand is typically available for purchase at retail stores by WIC participants and non-WIC participants alike, without an indicator on the packaging that some were provided through WIC.

• There are a number of reasons why individuals may have excess formula. For example, a WIC participant may obtain the infant’s full monthly allotment of formula at one time; alternatively, non-WIC parents may purchase formula in bulk at a lower cost to save money. In either case, if the infant then stops drinking that type of formula, parents may attempt to sell the unused formula.

• Individuals posting formula for sale online are able to remain relatively anonymous, so WIC staff may not have sufficient information to link the online advertisement with a WIC participant. According to one WIC official we spoke with, staff in that state identify approximately one posting a week with sufficient detail about the seller—such as a name or contact information—for staff to pursue. A WIC official from another state said that staff previously used phone numbers to identify WIC participants posting formula for sale, but they believe participants then began to list other people’s phone numbers on posts.

• Advertisements for infant formula sales can be numerous online, and formula for sale originates from varied sources. For example, through our literature search, we found multiple news reports on stolen infant formula advertised for sale online.
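The screening pass described above, in which a post counts as a full match only when its title mentions the contract brand, the formula type, and the container size, can be illustrated with a short script. This is a hypothetical sketch, not an actual tool used by GAO or any state agency; the brand name, formula type, and sample posts are invented for illustration.

```python
# Hypothetical illustration of the three-criteria screen described in the
# report: a post is a "full match" when its title mentions the contract
# brand, formula type, and container size, and a "partial match" when it
# mentions at least one but not all of them.

def screen_posts(posts, brand, formula_type, container_size):
    """Split post titles into full matches and partial matches."""
    criteria = (brand, formula_type, container_size)
    full, partial = [], []
    for title in posts:
        text = title.lower()
        hits = sum(term.lower() in text for term in criteria)
        if hits == len(criteria):
            full.append(title)      # matched all three criteria
        elif hits >= 1:
            partial.append(title)   # matched some, but not all
    return full, partial

# Invented sample posts (no real listings).
posts = [
    "BrandX Gentle powder 12.5oz cans, unopened",  # brand, type, and size
    "BrandX formula, best offer",                  # brand only
    "Crib and stroller for sale",                  # none
]
full, partial = screen_posts(posts, "BrandX", "Gentle", "12.5oz")
```

In this sketch the first sample post is a full match and the second a partial match; as the report notes, even a full match cannot establish that the formula was obtained through WIC.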
According to USDA, state, and local WIC officials, because of these challenges, the return on investment for monitoring online sales of WIC infant formula is low. One USDA official noted that it is difficult for states to prove that participants are selling WIC food benefits, which increases the amount of time and effort state staff need to spend to address these cases. Officials from one state WIC agency and one local WIC agency we spoke to said that efforts by state and local agency staff to identify and address online WIC formula sales result in few confirmed cases and draw away scarce resources from other aspects of administering the program. One USDA official said that states that sanction a participant for attempting to sell WIC formula without sufficient evidence that it occurred will likely have the violation overturned during the administrative appeal process. These cases also appear unlikely to result in court involvement: when we asked the 19 officials from 12 states how these cases were addressed, only one said that a couple had gone through the legal system. Federal internal control standards state that management should design and implement internal controls based on the related costs and benefits. According to USDA, because of the substantial risks associated with improper payments and fraud related to WIC vendor transactions, both USDA and the states have focused their oversight efforts in recent years on addressing vulnerabilities in the management of this area, rather than focusing on possible participant violations. However, because the use of the Internet as a marketplace has substantially increased in recent years and the national extent of online sales of WIC food benefits is unknown, USDA and the states have insufficient information to assess the benefits of broadening their oversight efforts to include this participant violation, as discussed in more detail later in this report. USDA has taken steps aimed at clarifying that the online sale of WIC benefits is a participant violation.
For example, in 2013, USDA proposed regulations that would expand the definition of program violation to include offering to sell WIC benefits, specifically including sales or attempts made online. Earlier, in 2012, USDA issued guidance to WIC state agencies clarifying that the sale of, or offer to sell, WIC foods verbally, in print, or online is a participant violation. This guidance stated that USDA expects states to sanction and issue claims against participants for all program violations, but it did not provide direction to states on ways to prevent online sales of WIC foods, including formula. That same year, USDA also sent letters to four e-commerce websites—through which individuals advertise the sale of infant formula—requesting that they notify their customers that the sale of WIC benefits is prohibited, and two of the companies agreed to post such a notification. More generally, USDA has highlighted the importance of ensuring WIC program integrity through guidance issued in recent years aimed at encouraging participants to report WIC program fraud, waste, and abuse to the USDA Office of the Inspector General (OIG). For example, in 2012, USDA disseminated a poster developed by the OIG and attached it to a guidance document describing its purpose, which includes informing WIC participants and staff how to report violations of laws and regulations relating to USDA programs. The following year, USDA issued additional guidance that encouraged states to add contact information for the OIG to WIC checks or vouchers, or to their accompanying folders or sleeves. USDA indicated that the intent of both guidance documents was to increase program integrity by making it easier for participants to report incidents of suspected fraud, waste, and abuse to the OIG. However, neither guidance document specifically directed states to publicize the fact that attempting to sell WIC benefits, either online or elsewhere, qualifies as an activity that should be reported.
WIC Policy Memorandum #2013-4, OIG Hotline Information on WIC Food Instruments (June 10, 2013). Federal regulations do not require that participants be informed of this violation through other means. In our review of rights and responsibilities statements from 25 states’ WIC policy and procedure manuals, we found that 7 did not require local agency staff to inform participants that selling WIC benefits is against program rules. Beyond this approach, some state officials we spoke to reported using other methods to inform participants of this program rule, but some methods may not reach all participants. For example, while officials from two states said that a statement instructing participants not to sell their benefits is printed either on the food voucher or the envelope containing the voucher given to participants, an official from another state said that local agencies in that state display a poster informing participants that selling WIC benefits online is prohibited. Both USDA officials and officials we spoke with from two states noted that some WIC participants do not know that selling food benefits is a program violation. Inconsistent communication to participants about this violation conflicts with federal internal control standards, which call for agency management to ensure that there are adequate means of communicating with external stakeholders that may have a significant impact on the agency achieving its goals. Participants who are unaware of this prohibition may sell excess formula online, thus inappropriately using program resources. In addition, we found that states vary in the ways they identify attempted sales of WIC formula through monitoring efforts. According to the state and local WIC officials we spoke with, the method of monitoring used to identify online sales of WIC formula and the level of effort devoted to this activity vary across states.
For example, while officials from 10 states said that state or local staff perform monitoring activities, an official from another state said that the state contracts with a private investigative firm to perform this task. Also, officials in one state said that a number of staff within the state office, as well as a number of those in local agencies, search social media websites daily. In contrast, officials from another state said that staff spend about a half day each week monitoring online sites for attempted sales of WIC food benefits, and an official from a different state said that staff monitor for such sales only when time allows. USDA has signaled through recently proposed regulations and policy guidance that it considers the sale of WIC foods—including formula—to be a risk to program integrity; however, the department has not worked with states on developing ways to address these sales. A USDA official told us that the department would like to provide more support to states in pursuing likely cases of participant fraud related to the online sale of WIC food benefits, but it has not yet determined how to be of assistance. USDA officials told us that some of their regional meetings with state WIC staff have included discussions of states’ approaches to monitoring online sales of WIC food benefits, but these meetings were focused broadly on program integrity issues rather than on this type of monitoring. Without general information on monitoring techniques—such as promising search terms or online sites where states may want to focus their efforts—some states may be missing opportunities to better target their limited resources. Federal internal control standards call for agencies to analyze risks from both external and internal sources, and employ mechanisms to identify and deal with any special risks brought on by changes in economic or industry conditions. 
The standards also note that the attitude and philosophy of management toward control operations such as monitoring can have a profound effect on internal control. USDA has developed resources for identifying online sales of Supplemental Nutrition Assistance Program (SNAP) benefits, including monitoring tools (see GAO-14-641). While these efforts may help to assist state WIC staff monitoring attempted online sales of WIC formula to some extent, they are not directly applicable to WIC due to differences in benefit delivery. Specifically, SNAP benefits are provided through an electronic benefits transfer card that can be used to purchase a wide variety of foods at an authorized vendor by swiping the card and entering a personal identification number. Therefore, efforts to identify SNAP benefit trafficking focus on monitoring online sales of these cards and their personal identification numbers, which can be traced back to unique SNAP participants. In contrast, WIC participants generally receive benefits through vouchers that they use to purchase specific foods at an authorized vendor, and they are required to sign the voucher at the time of purchase. As a result, efforts to identify online sales of WIC benefits involve monitoring for formula after it has been purchased, and it is difficult to determine whether the formula was provided through WIC. Federal law and regulations require state agencies to submit plans describing how they will operate their WIC programs (42 U.S.C. § 1786(f)(1); 7 C.F.R. § 246.4(a)). One related required element is the state agency’s plan for collecting and maintaining information on cases of participant fraud and abuse. We also found that the guidance USDA provides to states on developing their state plans, while relatively detailed in some respects, does not direct states to describe their plan for identifying program violations, including sales of WIC food benefits. As a result, USDA lacks information on state monitoring efforts in this area. According to federal internal control standards, agency management needs operating information to determine compliance with laws and regulations. Also, the standards note that factors outside management’s control or influence can affect an agency’s ability to achieve all its goals. 
Because state agencies are responsible for addressing program violations under the regulations, monitoring the actions they take is key to ensuring program integrity. WIC provides essential supplemental nutrition and assistance to low-income families, and infant formula, in particular, plays a critical role in the nutritional well-being of the many infant participants receiving it through the program. By better tailoring the food packages, USDA has better aligned the maximum amount of infant formula local agencies are allowed to provide with the nutritional needs of certain participants. For example, the reduction USDA made to the amount of formula that can be provided to partially breastfed infants reduced the amount of excess formula provided to them, which may have helped reduce opportunities for sales of excess formula. However, it is clear that some participants are selling WIC formula online, inappropriately using program resources. Although the extent of this activity is unknown, explicitly informing WIC participants that such sales are against WIC rules could help improve program integrity by preventing some online sales of WIC infant formula. However, USDA has not required state agencies to inform participants of this prohibition, and some states are not requiring local staff to do so. As a result, some participants may attempt to sell excess WIC formula online without the knowledge that it is against program rules. Monitoring online classified advertisements for attempted sales of WIC formula is another way to ensure program integrity. As the technological environment has changed with the increased use of e-commerce, actions needed to ensure WIC participants do not inappropriately use infant formula have changed as well. While USDA officials believe that states monitor for online sales of WIC formula, USDA currently does not collect information to confirm this. 
As a result, USDA is unable to effectively oversee state efforts to control this participant violation, and the agency lacks assurance that efforts are taking place nationwide. State officials we spoke to have responded to the inherent challenges involved in monitoring websites for infant formula sales with varied approaches, which likely yield varied outcomes, and some officials told us the return on investment for their monitoring efforts is low. However, because USDA and state agencies lack information from a national perspective about online sales of WIC food benefits and cost-effective approaches for identifying and addressing these sales, states are likely to be poorly positioned to strike the appropriate balance of costs and benefits when determining how to target their resources to ensure program integrity. To better ensure that WIC participants are aware of the prohibition against selling WIC formula, and to assist states’ efforts to prevent and address online formula sales, we recommend that the Secretary of Agriculture direct the Administrator of the Food and Nutrition Service (FNS) to take the following three actions:

- Instruct state agencies to include in the rights and responsibilities statement that participants are not allowed to sell WIC food benefits, including online.

- Require state agencies to articulate their procedures for identifying attempted sales of WIC food benefits in their WIC state plans and analyze the information to ascertain the national extent of state efforts.

- Collect information to help assess the national extent of attempted online sales of WIC formula benefits and determine cost-effective techniques states can use to monitor online classified advertisements.

We provided a draft of this report to the Secretary of Agriculture for review and comment. In oral comments, USDA officials agreed with all three recommendations. 
Regarding the first two recommendations, officials noted that the department will incorporate into the WIC regulations requirements that states include the prohibition against the sale of WIC food benefits in participant rights and responsibilities statements and report to USDA in their WIC state plans on their procedures for identifying attempted sales of these benefits. USDA officials noted that because the regulatory process can be lengthy, in the interim period, the department will consider issuing guidance recommending these as best practices to state agencies. Regarding the third recommendation, officials said they would make it a priority to explore options for using available resources to assess the extent of online sales of WIC formula and to identify and share best practices, cost-effective techniques, or new approaches with state agencies to use in monitoring online advertisements. Specifically, they may draw on funds designated for addressing high-priority programmatic issues, and collaborate with stakeholders and other FNS program staff on monitoring strategies. We agree with the department’s planned approaches to addressing the recommendations, and we believe these efforts will help to improve WIC program integrity. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
To obtain and evaluate information about online sales of infant formula provided by the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), we used a variety of methods. We reviewed relevant federal laws, regulations, and U.S. Department of Agriculture (USDA) guidance to determine requirements related to the provision of formula to infants receiving WIC, as well as to identify federal, state, and local roles related to preventing and addressing online sales of WIC formula. We also interviewed USDA officials and reviewed USDA’s data on WIC infant participants who breastfeed, which was available for fiscal years 2010 through 2014. As part of our interviews with USDA officials, we assessed the reliability of these data and determined that they were sufficiently reliable for the purpose of this study. To help us understand how federal WIC food requirements were recently modified, we also reviewed a 2005 report published by the National Academies’ Institute of Medicine, since many of the changes were based on the recommendations of this report. To determine the role USDA regulations and guidance play in preventing and addressing online sales of WIC formula, we reviewed a non-generalizable sample of 25 states’ WIC policy and procedure manuals to determine how consistent states’ policies and procedures are in preventing and addressing online formula sales. To obtain some information about the extent of online sales of WIC formula in four metropolitan areas, we conducted monitoring of online classified advertisements to sell formula using one popular e-commerce website. To obtain additional information relevant to our study, we also interviewed an official from the National WIC Association, as well as 19 state and local WIC officials representing 12 states. In addition, we assessed USDA’s controls against GAO standards for Internal Control in the Federal Government. 
We conducted this performance audit from April 2014 through December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To understand efforts to prevent and address online sales of WIC formula, we reviewed a non-generalizable sample of 25 states’ WIC policy and procedure manuals, including each state’s WIC participant rights and responsibilities statement. We selected these 25 states primarily based on varied size of WIC caseloads and geographic dispersion, and as a group, the states provided services to approximately two-thirds of all WIC participants in the U.S. in fiscal year 2013. For each state, we reviewed the sections of the WIC policy and procedure manual relevant to participant violations, as well as the WIC participant rights and responsibilities statement. We analyzed each state’s rights and responsibilities statement to determine if it indicated that (1) the sale of WIC food benefits is a participant violation, and (2) the sale of WIC food benefits online is a participant violation. We reviewed these statements because federal regulations require that all WIC participants be informed of their rights and responsibilities during the certification process. However, federal regulations do not require state agencies to inform participants of this type of program violation. For each state, we also reviewed the state’s policies on addressing the participant violation of selling WIC benefits, such as through written warnings or program disqualifications for a specified time period. 
To obtain some information on the extent of online sales of WIC formula in four metropolitan areas, we conducted limited monitoring of online classified advertisements to sell formula using one popular e-commerce website that operates local websites. The selected areas included Chicago, IL; Dallas-Fort Worth, TX; Los Angeles, CA; and New York, NY. We selected these areas based on (1) high number of births—defined as 100,000 or more in the metropolitan area between July 1, 2012, and July 1, 2013, (2) high state WIC caseloads—defined as over 275,000 participants in fiscal year 2013, and (3) geographic dispersion. Our monitoring results are not generalizable to all localities across the United States. We conducted our monitoring of potential attempts to sell WIC formula during a 30-day period, from mid-June 2014 through mid-July 2014. During that period, we conducted manual search queries of the section of the website specified for infant- and child-related sales on each non-holiday weekday to detect postings containing the key terms “formula” or “WIC.” We also conducted manual search queries of the same section of the website using the name of the formula brand equivalent to the WIC-contracted formula brand specific to the metropolitan area’s state. We reviewed the text of each posting that contained the term “WIC,” as well as each posting containing the state-specific WIC-contracted formula brand. During our reviews of the text of the postings, we also identified those that met additional criteria generally consistent with WIC formula provided to participants. 
Specifically, the formula type and volume of formula container advertised for sale were equivalent to a type and volume of formula provided to WIC participants in the state in which the posting was made, and the amount of formula advertised for sale represented approximately 85 percent or more of the maximum amount of formula authorized for fully formula-fed infant WIC participants each month, averaged across all ages. We included the latter criterion to help us identify individuals attempting to sell relatively large quantities of formula. While this provides no information about the individuals’ relationship to the WIC program, we included this criterion because participants receiving WIC checks or vouchers for formula may be likely to purchase the infant’s entire monthly allotment of formula at one time. This may result in WIC participants having multiple cans of unused formula, for example, if the infant switches formulas during the month, as noted in the two posts we found that specifically stated the individuals were attempting to sell WIC formula online. However, a relatively large amount of formula could also indicate intent to traffic WIC benefits in bulk in order to make a profit. When designing our monitoring approach, we relied on lessons learned by the GAO team that recently conducted a study to assess online sales of other federal nutrition benefits. For example, we based our decision to use the popular e-commerce website on the experience of the other GAO team, which used both that website and a social media website. The team found that monitoring efforts were most effective when using the e-commerce website. 
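The screening criteria described above (brand, type, container volume, and approximately 85 percent or more of the maximum monthly allotment) can be expressed as a simple filter. The sketch below is purely illustrative: GAO performed this screening manually, and the field names, brand name, and sample values are hypothetical stand-ins.

```python
# Minimal sketch of the post-screening criteria described above.
# All field names and sample values are hypothetical; GAO performed
# this screening manually, not with a script.

# Hypothetical state-specific WIC parameters.
WIC_BRAND = "ExampleBrand"       # brand equivalent to the state's WIC-contracted brand
WIC_TYPES = {("powder", 12.4)}   # (formula type, container volume in oz) pairs provided by WIC
MAX_MONTHLY_CANS = 10            # max containers/month for fully formula-fed infants, averaged across ages

def matches_wic_profile(post: dict) -> bool:
    """Return True if a post matches the brand, type, volume,
    and quantity criteria used to flag potential WIC formula sales."""
    if post["brand"] != WIC_BRAND:
        return False
    if (post["type"], post["volume_oz"]) not in WIC_TYPES:
        return False
    # Quantity offered must be ~85 percent or more of the monthly maximum.
    return post["quantity"] >= 0.85 * MAX_MONTHLY_CANS

posts = [
    {"brand": "ExampleBrand", "type": "powder", "volume_oz": 12.4, "quantity": 9},
    {"brand": "ExampleBrand", "type": "powder", "volume_oz": 12.4, "quantity": 2},
    {"brand": "OtherBrand",   "type": "powder", "volume_oz": 12.4, "quantity": 10},
]
flagged = [p for p in posts if matches_wic_profile(p)]
print(len(flagged))  # 1: only the first post meets all criteria
```

Note that, as the text explains, a post passing this filter still says nothing definitive about the seller's relationship to WIC; it only identifies quantities and products consistent with a WIC monthly allotment.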
Similarly, the other team used both an automated and a manual approach to monitor the e-commerce website but found the manual approach to be more effective—as measured by more postings indicative of potential sales of these benefits—than the automated approach. Our monitoring was conducted to provide some information on the extent to which WIC participants may have attempted to sell WIC formula in four metropolitan areas over 30 days. As a result, we conducted our monitoring activities for illustrative purposes, and we did not intend to use them as evidence for our own investigations into potential WIC participant violations. However, we provided information to USDA officials about the two posts that explicitly stated the formula for sale was from WIC. Our monitoring approach had some limitations. In the posts we reviewed, individuals might have been selling WIC formula but did not clearly state in the post that the formula offered for sale was from WIC. As a result, our finding likely undercounts the number of individuals who offered WIC formula for sale in the four metropolitan areas during our monitoring period. Further, because of the general lack of structure to online sales, our findings on the numbers of posts offering formula for sale that met our criteria may over- or under-represent the true number. Specifically, while we excluded duplicate posts we identified on individual days, we were unable to identify duplicate posts across different days. Therefore, if someone posted formula for sale that met our criteria on Tuesday, and then, because the formula did not immediately sell, posted the same formula for sale again on Friday, we would have counted both as posts offering formula for sale. As a result, while our methods provide the total number of formula posts during our monitoring period, excluding duplicate posts on the same day, we cannot be certain that each post represented a new offer of formula for sale. 
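The duplicate-handling limitation described above—duplicates excluded within a single day but not across days—can be illustrated with a short sketch. The counting logic and data below are hypothetical; GAO's review was manual.

```python
# Sketch of the same-day duplicate exclusion described above.
# Posts are deduplicated within each monitoring day only; the same
# offer reposted on a later day is counted again. Data are hypothetical.

def count_posts(observations):
    """observations: list of (day, post_id) tuples.
    Duplicates are removed within a day, but the same post_id
    appearing on different days is counted each time it appears."""
    seen_per_day = {}
    for day, post_id in observations:
        seen_per_day.setdefault(day, set()).add(post_id)
    return sum(len(ids) for ids in seen_per_day.values())

obs = [
    ("Tue", "A"), ("Tue", "A"),   # same-day duplicate: counted once
    ("Fri", "A"),                 # repost on a later day: counted again
    ("Fri", "B"),
]
print(count_posts(obs))  # 3: "A" on Tue, "A" again on Fri, and "B"
```

This is why the reported totals may overstate the number of distinct offers: a single unsold lot reposted across several days contributes a count on each day it appears.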
In addition, our findings may understate the total number of posts from individuals attempting to sell WIC formula online because, although we chose search terms we thought individuals would be most likely to use, it is possible some individuals used different terms. We also had to make some assumptions about posts that did not include sufficient detail in the text of the post. For example, we included in our findings posts that did not explicitly state the formula brand name, type, or container volume in the text but included a photo that showed formula matching the criteria we identified. However, it is possible that the photo in some posts did not, in fact, match the formula being offered for sale, resulting in our findings overstating the true number. In addition to the contact named above, Rachel Frisk (Assistant Director), Sara Pelton (analyst in charge), and Bryant Torres made key contributions to this report. Other contributors included James Bennett, Susan Bernstein, Sarah Cornetto, Celina Davidson, Michael Hartnett, James Healy, Ashley McCall, Jean McSween, and Almeta Spencer.
|
WIC provides supplemental foods and assistance to low-income pregnant and postpartum women, infants, and young children. Approximately half of U.S. infants born each year receive WIC benefits, and infant formula is a key component of the food package many receive. Recent news reports suggest that some participants have attempted to sell WIC formula online, and the Internet has substantially increased as a marketplace in recent years. GAO was asked to provide information about online sales of WIC formula. GAO assessed: (1) how USDA determines the amount of formula to provide to participants, (2) what is known about the extent to which participants sell WIC formula online, and (3) steps USDA has taken to prevent and address the online sale of WIC formula. GAO reviewed relevant federal laws, regulations, and USDA guidance; monitored advertisements to sell formula on one e-commerce website in four metropolitan areas; reviewed a non-generalizable sample of policy manuals from 25 states that as a group serve about two-thirds of WIC participants, and that were selected for their varied WIC caseloads and geography; and interviewed USDA and state and local WIC officials. In recent years, the U.S. Department of Agriculture (USDA) has more closely aligned the amount of formula it authorizes states to provide through the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) with current nutrition science and the nutritional needs of participating infants. These changes were also made, in part, to encourage and support breastfeeding. Specifically, the agency issued revised regulations in 2007, finalized in 2014, that reduced the amount of formula authorized for partially breastfed infants and delayed the age when these infants may begin receiving formula. The regulations also reduced the amount of formula authorized for older fully formula-fed infants because they added infant fruits and vegetables for the first time. 
USDA has not conducted any nationwide studies on the extent of online sales of WIC formula by program participants, but information gathered from state WIC officials and GAO's own limited monitoring suggest that some WIC formula is offered for sale online, though program rules prohibit such sales. In 30 days of monitoring one online classifieds website in four large metropolitan areas, GAO found 2 posts in which individuals attempted to sell formula specifically identified as WIC—among 2,726 that advertised infant formula generally. A larger number, 346 posts, advertised formula matching the brand, type, container volume, and amount provided to WIC participants, but did not indicate the source of the formula. Because WIC participants purchase the same brands and types from stores as non-WIC customers, monitoring attempted online sales of WIC formula can present a challenge. State officials GAO spoke with cited other challenges to monitoring online sales, such as the fact that individuals posting formula for sale online are able to remain relatively anonymous, and their posts may contain insufficient information to allow staff to identify them as WIC participants. USDA has taken steps to clarify that attempting to sell WIC formula online is a participant violation but has provided limited assistance to states in preventing and addressing these sales. For example, USDA has not specifically directed states to instruct participants that selling WIC formula is against program rules, which could lead to participants making these sales unknowingly and using program resources inappropriately. GAO's review of 25 state policy and procedure manuals found 7 that did not require local agency staff to inform participants of the prohibition. Further, although states are responsible for controlling participant violations—including sales of WIC benefits—USDA is responsible for determining compliance with the WIC statute and regulations. 
However, because the department has not required states to describe their procedures for controlling these violations, USDA is unable to both oversee and assess state efforts to ensure program integrity in this area. The department is also unable to assist states' efforts because it has not assessed the national extent of these sales or techniques for addressing them. Through interviews with 19 state and local WIC agency officials from 12 states, GAO found that states vary in their approaches and the amount of resources devoted to monitoring attempted WIC formula sales. In addition, due to monitoring challenges, some officials expressed concerns about the return on investment for these efforts. Without information on cost-effective monitoring techniques—such as promising search terms or online sites where states may want to focus their efforts—some states may be missing opportunities to better target their limited resources. GAO recommends that USDA require state agencies to inform WIC participants that selling WIC formula is against program rules and describe in their state plans how they identify attempted sales. GAO also recommends that USDA assess online sales, including techniques states can use to monitor them. USDA agreed with GAO's recommendations.
|
GSA administers the federal government’s SmartPay® purchase card program, which has been in existence since the late 1980s. The purchase card program was created as a way for agencies to streamline federal acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. The purchase card can be used for simplified acquisitions, including micropurchases, as well as to place orders and make payments on contract activities. The FAR designated the purchase card as the preferred method of making micropurchases. In addition, part 13 of the FAR, “Simplified Acquisition Procedures,” establishes criteria for using purchase cards to place orders and make payments. Figure 1 shows the dramatic increase in purchase card use since the inception of the SmartPay® program. As shown in figure 1, during the 10-year period from fiscal year 1996 through 2006, acquisitions made using purchase cards increased almost sixfold—from $3 billion in fiscal year 1996 to $17.7 billion in fiscal year 2006. Figure 2 provides further information on the number of purchase cardholder accounts. As shown, the number of purchase cardholder accounts peaked in 2000 at more than 670,000, but since then the number of purchase cardholder accounts has steadily decreased to around 300,000. As the contract administrator of the program, GSA contracts with five different commercial banks in order to provide purchase cards to federal employees. The five banks with purchase card contracts are (1) Bank of America, (2) Citibank, (3) Mellon Bank, (4) JPMorgan Chase, and (5) U.S. Bank. GSA also has created several tools, such as the Schedules Program, so that cardholders can take advantage of favorable pricing for goods and services. Oversight of the purchase card program is also the responsibility of OMB. 
OMB provides overall direction for governmentwide procurement policies, regulations, and procedures to promote economy, efficiency, and effectiveness in the acquisition processes. Specifically, in August 2005, OMB issued Appendix B to Circular No. A-123, Improving the Management of Government Charge Card Programs, that established minimum requirements and suggested best practices for government charge card programs. From July 1, 2005, through June 30, 2006, GSA reported that federal agencies purchased over $17 billion of goods and services using government purchase cards. Our analysis of transaction data provided by the five banks found that micropurchases represented 97 percent of purchase card transactions and accounted for almost 57 percent of the dollars expended. Using purchase cards for acquisitions and payments over the micropurchase limit of $2,500 represented about 3 percent of purchase transactions and accounted for more than 44 percent of the dollars spent from July 1, 2005, through June 30, 2006. Internal control weaknesses in agency purchase card programs exposed the federal government to fraudulent, improper, and abusive purchases and loss of assets. Our statistical testing of two key transaction-level controls over purchase card transactions over $50 from July 1, 2005, through June 30, 2006, found that both controls were ineffective. In aggregate, we estimated that 41 percent of purchase card transactions were not properly authorized or purchased goods or services were not properly received by an independent party (independent receipt and acceptance). We also estimated that 48 percent of purchases over the micropurchase threshold were either not properly authorized or independently received. Further, we found that agencies could not provide evidence that they had possession of, or could otherwise account for, 458 of 1,058 accountable and pilferable items. 
According to Standards for Internal Control in the Federal Government, internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives and should occur at all levels and functions of an agency. The controls include a wide range of activities, such as approvals, authorizations, verifications, reconciliations, performance reviews, and the production of records and documentation. For this audit, we tested those control activities that we considered to be key in creating a system that prevents and detects fraudulent, improper, and abusive purchase card activity. To this end, we tested whether (1) cardholders were properly authorized to make their purchases and (2) goods and services were independently received and accepted. As shown in table 1, we estimated that the overall failure rate for the attributes we tested was 41 percent, with failure rates of 15 percent for authorization and 34 percent for receipt and acceptance. Lack of proper authorization. As shown in table 1, 15 percent of all transactions failed proper authorization. According to Standards for Internal Control in the Federal Government, transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority, as this is the principal means of assuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. To test authorization, we accepted as reasonable evidence various types of documentation, such as purchase requests or requisitions from a responsible official, e-mails, and other documents that identify an official government need, including blanket authorizations for routine purchases with subsequent review by an approving official. 
The lack of proper authorization occurred because (1) the cardholder failed to maintain sufficient documentation, (2) the agency’s policy did not require authorization, or (3) the agency lacked the internal controls and management oversight to identify purchases that were not authorized— increasing the risk that agency cardholders will misuse the purchase card. Failure to require cardholders to obtain appropriate authorization and lack of management oversight increase the risk that fraudulent, improper, and other abusive activity will occur without detection. Lack of independent receipt and acceptance. As depicted in table 1, our statistical sampling of executive agency purchase card transactions also found that 34 percent of transactions failed independent receipt and acceptance, that is, goods or services ordered and charged to a government purchase card account were not received by someone other than the cardholder. According to Standards for Internal Control in the Federal Government, the key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. Segregating duties entails separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling related assets. The standards further state that no one individual should control all key aspects of a transaction or event. As evidence of independent receipt and acceptance, we accepted any signature or initials of someone other than the cardholder on the sales invoice, packing slip, bill of lading, or any other shipping or receiving document. We found that lack of documented, independent receipt extended to all types of purchases, including pilferable items such as laptop computers. Independent receipt and acceptance helps provide assurance that purchased items are only acquired for legitimate government need and not for personal use. 
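As described above, a transaction failed the aggregate test if it failed either proper authorization or independent receipt and acceptance. Because some transactions fail both attributes, the aggregate rate (an estimated 41 percent) is smaller than the sum of the two attribute rates (15 and 34 percent). The counting logic can be sketched with hypothetical sample data (not GAO's actual sample):

```python
# Illustration of how an aggregate failure rate combines two attribute
# tests: a transaction fails overall if it fails EITHER attribute, so
# transactions failing both are counted once, not twice.
# The sample below is hypothetical, not GAO's actual sample.

sample = [
    # (properly_authorized, independently_received)
    (True,  True),   # passes both attributes
    (False, True),   # fails authorization only
    (True,  False),  # fails receipt and acceptance only
    (False, False),  # fails both -> still a single overall failure
]

auth_fail = sum(1 for a, _ in sample if not a) / len(sample)
recv_fail = sum(1 for _, r in sample if not r) / len(sample)
overall_fail = sum(1 for a, r in sample if not (a and r)) / len(sample)

print(auth_fail, recv_fail, overall_fail)  # 0.5 0.5 0.75
```

In this toy sample the overall rate (0.75) is below the sum of the attribute rates (0.5 + 0.5) because one transaction fails both tests, which mirrors why 15 percent plus 34 percent exceeds the 41 percent aggregate estimate in table 1.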
Although we did not test the same number of attributes as in previous audits of specific agencies’ purchase card programs, for those attributes we tested, the estimated governmentwide failure rates shown in this report are lower than the failure rates we have previously reported for certain individual agencies. Table 2 provides failure rates from our prior work related to proper approval and independent receipt and acceptance for certain individual agencies. As shown, estimated failure rates for independent receipt and acceptance from previous audits were as high as 87 percent for one Army location (as reported in 2002) and, most recently, 63 percent for DHS (as reported in 2006). In contrast, we are estimating a 34 percent failure rate for this audit. Because prior audits were restricted to individual agencies, we cannot state conclusively that the lower failure rate is attributable to improvements in internal controls governmentwide. However, some agencies with large purchase card programs, such as DOD, have implemented improved internal controls in response to our previous recommendations. Further, in 2005, OMB issued Appendix B to Circular No. A-123 prescribing purchase card program guidance and requirements. These changes are positive steps in improving internal controls over the purchase card program.

While only 3 percent of governmentwide purchase card transactions from July 1, 2005, through June 30, 2006, were purchases above the micropurchase threshold of $2,500, these transactions accounted for 44 percent of the dollars spent during that period. Because of the large dollar amounts associated with these transactions, and because they are subject to authorization requirements beyond those for micropurchases, we drew a separate statistical sample to test controls over these larger purchases. Specifically, we tested (1) proper purchase authorization and (2) independent receipt and acceptance.
As part of our test of proper purchase authorization, we looked for evidence that adequate competition was obtained. If competition was not obtained, we asked for supporting documentation showing that competition was not required, for example, that the purchase was acquired from a sole-source vendor. We estimated that 48 percent of the purchase card transactions over the micropurchase threshold failed our attribute tests. As shown in table 3, for 35 percent of purchases over the micropurchase threshold, cardholders failed to obtain proper authorization. Additionally, in 30 percent of the transactions, cardholders failed to provide sufficient evidence of independent receipt of the goods or services.

Lack of proper authorization for purchases over the micropurchase limit. As table 3 indicates, 35 percent of purchases over the micropurchase limit were not properly authorized. To test for proper authorization, we looked for evidence of prior approval, such as a contract or other requisition document. For purchases above the micropurchase threshold, we also required evidence that the cardholder either solicited competition or provided reasonable evidence for deviation from this requirement, such as a sole-source justification. Of the 34 transactions that failed proper authorization, 10 lacked evidence of competition. For example, one Army cardholder purchased computer equipment totaling over $12,000 without obtaining and documenting price quotes from three vendors as required by the FAR. The purchase included computers costing over $4,000 each, expensive cameras that cost $1,000 each, and software and other accessories, all items that are supplied by a large number of vendors. In another failure to obtain competition, one cardholder at DHS purchased three personal computers totaling over $8,000.
The requesting official provided the purchase cardholder with the computers’ specifications and a request that the items be purchased from the requesting official’s preferred vendor. The cardholder did not exercise due diligence in obtaining competitive quotes from additional vendors. Instead, the cardholder asked the requesting official to provide two “higher priced” quotes from additional vendors in order to justify obtaining the computers from the requesting official’s preferred source. In doing so, the cardholder circumvented the rules and obtained the items without the competitive sourcing required by the FAR.

Lack of independent receipt and acceptance. As shown in table 3, we projected that 30 percent of the purchases above the micropurchase threshold did not have documented evidence that goods or services ordered and charged to a government purchase card account were received by someone other than the cardholder.

Our testing of a nonrepresentative selection of accountable and pilferable property acquired with government purchase cards found that agencies failed to account for 458 of the 1,058 accountable and pilferable property items we tested. The total value of the items was over $2.7 million, and the purchase amount of the missing items was over $1.8 million. We used a nonrepresentative selection methodology for testing accountable property because purchase card data did not always contain adequate detail to enable us to isolate property transactions for statistical testing. Because we were not able to take a statistical sample of these transactions, we were not able to project inventory failure rates for accountable and pilferable property. Similarly, because the scope of our work was restricted to purchase card acquisitions, we did not audit agencies’ controls over accountable property acquired using other procurement methods.
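As a quick arithmetic check, the loss rates implied by the property-test figures above can be computed directly. The dollar figures are the rounded "over" amounts from the text, so the resulting rates are approximate, and, because the selection was nonrepresentative, they describe only the items tested, not accountable property governmentwide.

```python
# Figures from the property test described above.
items_tested, items_missing = 1058, 458
value_tested, value_missing = 2_700_000, 1_800_000  # rounded "over" amounts

item_loss_rate = items_missing / items_tested    # share of tested items unaccounted for
value_loss_rate = value_missing / value_tested   # share of tested purchase value missing
print(f"{item_loss_rate:.0%} of tested items, {value_loss_rate:.0%} of tested value")
```

The missing items thus represent roughly 43 percent of the items tested but about two-thirds of the tested dollar value, which is consistent with the report's emphasis on high-value pilferable equipment.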
However, the extent of the missing property we are reporting on may not be restricted to items acquired with government purchase cards, but may reflect control weaknesses in agencies’ management of accountable property governmentwide. The lost or stolen items included computer servers, laptop computers, iPods, and digital cameras. Our prior reports have shown that weak controls over accountable property purchased with government purchase cards increase the risk that items will not be reported and accounted for in property management systems. We acknowledge agency officials’ position that the purchase card program was designed to facilitate acquisition of goods and services, including property, and not specifically to maintain accountability over property. However, the sheer number of accountable property purchases made “over the counter” or directly from a vendor increases the risk that accountable or pilferable property will not be reported to property managers for inclusion in the property tracking system. Unrecorded assets decrease the likelihood of detecting lost or stolen government property. In addition, if these items were used to store sensitive data, that information could be lost or stolen without the knowledge of the government. Failure to properly account for pilferable and accountable property also increases the risk that agencies will purchase property they already own but cannot locate, further wasting tax dollars.

Although each agency establishes its own threshold for recording and tracking accountable property, additional scrutiny is necessary for sensitive items (such as computers and related equipment) and items that are easily pilfered (such as cameras, iPods, and personal digital assistants (PDA)). Consequently, for this audit, we selected $350 as the threshold for our accountable property test.
Standards for Internal Control in the Federal Government provides that an agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for, and limited access to, assets such as cash, securities, inventories, and equipment, which might be vulnerable to risk of loss or unauthorized use. Failure to maintain accountability over property, including highly pilferable items, increases the risk of unauthorized use and lost and stolen property.

Our accountable asset work consisted of identifying accountable and pilferable property associated with transactions from both the statistical sample and data-mining transactions, requesting serial numbers from the agency and vendors, and obtaining evidence, such as a photograph provided by the agency, that the property was recorded, could be located, or both. In some instances, we obtained the photographs ourselves. We then evaluated each photograph to determine whether it represented the accountable or pilferable item we selected for testing.

Property items failed our physical property inventory tests for various reasons, including the following: the agency could not locate the item upon request and reported the item as missing, the agency failed to provide photographs, or the agency provided photographs of items whose serial numbers did not match the items purchased. In many instances, we found that agencies failed to provide evidence that the property was independently received or entered into the agency property book. Weak controls over accountable and pilferable property increase the risk that property will be lost or stolen and also increase the chance that an agency will purchase more of the same item because it is not aware that the item has already been purchased.
The following descriptions further illustrate transactions that failed our property tests:

The Army could not properly account for 16 server configurations containing 256 items that it purchased for over $1.5 million. Despite multiple inquiries, the Army provided photographs of only 1 configuration out of 16, and did not provide serial numbers for that configuration to show that the photograph represented the items acquired as part of the transaction we selected for testing. Further, when we asked for inventory records as an acceptable alternative, the Army could not provide us evidence showing that it had possession of the 16 server configurations.

A Navy cardholder purchased general office supplies totaling over $900. As part of this purchase, the cardholder bought a Sony digital camera costing $400 and an iPod for $200. In supporting documentation provided, the Navy stated that the cardholder, approving official, and requester had no recollection of requesting or receiving the iPod. To find out whether these pilferable items could have been converted for personal use and effectively stolen, we asked the Navy to provide a photograph of the camera and iPod, including the serial numbers. However, the Navy informed us that the items were not reported in a property tracking system and therefore could not be located.

We found numerous instances of fraud, waste, and abuse related to the purchase card program at dozens of agencies across the government. Internal control weaknesses in agency purchase card programs directly increase the risk of fraudulent, improper, and abusive transactions. For instance, the lack of controls over proper authorization increases an agency’s risk that cardholders will improperly use the purchase card. As discussed in appendix II, our work was not designed to identify all instances of fraudulent, improper, and abusive government purchase card activity or to estimate their full extent.
Therefore, we did not determine and make no representations regarding the overall extent of fraudulent, improper, and abusive transactions governmentwide. The case studies identified in the tables that follow represent some of the examples that we found during our audit and investigation of the governmentwide purchase card program.

We found numerous examples of fraudulent and potentially fraudulent purchase card activities. For the purpose of this report, we define fraudulent transactions as those where a fraud case had been adjudicated or was undisputed, or where a purchase card account had been compromised. Potentially fraudulent transactions are those where there is a high probability of fraud, but where sufficient evidence did not exist for us to determine that fraud had indeed occurred. As shown in table 4, these transactions included (1) acquisitions by cardholders that were unauthorized and intended for personal use and (2) purchases appropriately charged to the purchase card but involving potentially fraudulent activity that went undetected because of the lack of integration among the processes related to the purchase, such as travel claims or missing property.

In a few instances, agencies took action on the fraudulent and potentially fraudulent transactions we identified. For example, some agency officials properly followed policies and procedures, filed disputes with the bank against fraudulent purchases that appeared on the card, and subsequently obtained refunds. However, in the most egregious circumstances, such as repeated fraudulent activities by cardholders, sometimes over several years, the agencies did not take action until months after the fraudulent activity occurred, or until after we selected the transactions and requested documentation from the agencies for the suspicious transactions. Table 4 illustrates instances where we found fraud, or indications of fraud, from our data mining and investigative work.
The following text further describes three of the fraudulent cases from table 4:

Case 1 involves a cardholder who embezzled over $642,000 from the Forest Service’s national fire suppression budget from October 10, 2000, through September 28, 2006. This cardholder, a purchasing agent and agency purchase card program coordinator, wrote approximately 180 checks to a live-in boyfriend with whom the cardholder shared a bank account. Proceeds from the checks were used for personal expenditures, such as gambling, car and mortgage payments, dinners, and retail purchases. Although the activities occurred repeatedly over a 6-year period, the embezzlement went undetected by the agency until USDA’s Office of Inspector General received a tip from a whistleblower in 2006. In June 2007, the cardholder pled guilty to one count of embezzlement and one count of tax fraud. As part of the plea agreement, the cardholder agreed to pay restitution of $642,000. Further, in November 2007, the cardholder was sentenced to 21 months imprisonment followed by 36 months supervised release.

Case 2 involves a potential theft of government property. A Navy cardholder purchased 19 pilferable items totaling $2,200 from CompUSA without proper authorization or subsequent review of the purchase transaction. After extensive searches, the Navy provided evidence that only 1 of the 19 items listed on the invoice, an HP LaserJet printer purchased for $150, was found. Other items that were lost or stolen included five iPods; a PDA; iPod travel chargers, adapters, flash drives, and leather accessories; and two 17-inch LCD monitors, all highly pilferable property that can be easily diverted for personal use. According to Navy officials, at the time of the purchase the command did not have a requirement for tracking highly pilferable items. Additionally, all members involved in the transaction had since transferred, and the agency did not have the capability to track where the items might have gone.
Navy officials also informed us that the command issued a new policy requiring that pilferable items be tracked.

Case 4 involves a USPS postmaster who fraudulently used the government purchase card for personal gain. Specifically, from April 2004 through October 2006, the cardholder made more than 15 unauthorized charges from various online dating services totaling more than $1,100. These were the only purchases made by this cardholder during our audit period, yet the cardholder’s approving official did not detect any of the fraudulent credit card activity. According to USPS officials, this person was also under an internal administrative investigation for viewing pornography on a government computer. Based on the administrative review, the cardholder was removed from his position in November 2006 after working out an agreement with USPS in which he was authorized to remain on sick leave until his retirement date in May 2007. In April, the USPS Office of Inspector General issued a demand letter and recovered the fraudulent Internet dating service charges.

Our data mining identified numerous examples of improper and abusive transactions. Improper transactions are those purchases that, although intended for government use, are not permitted by law, regulation, or government or agency policy. Examples we found included (1) purchases that were prohibited or otherwise not authorized by federal law, regulation, or government or agency policy and (2) split purchases made to circumvent the cardholder single-purchase limit or to avoid the need to obtain competition on purchases over the $2,500 micropurchase threshold. Abusive purchases are those where the conduct of a government organization, program, activity, or function fell short of societal expectations of prudent behavior. We found examples of abusive purchases where the cardholder (1) purchased goods or services at an excessive cost (e.g., “gold plated”) or (2) purchased an item for which the government need was questionable.
Table 5 identifies examples of improper and abusive purchases. The following text further describes four of the cases in table 5:

Case 2 relates to a cardholder who is a 20-year veteran at FAS, a unit within USDA. At the end of fiscal year 2006, the cardholder purchased two vehicles, a Toyota Land Cruiser and a Toyota Sienna, on two separate days for two separate USDA offices overseas. Although the vehicles appeared to have been shipped overseas for a legitimate government need, our investigative work found that these purchases were made in violation of USDA purchase card policies and with the implicit agreement of FAS policyholders, as follows:

According to written communications at FAS, the requester for one of the cars had a “large chunk of money that needed to be used before the end of the fiscal year (2006).” The requester asked that the vehicle be purchased in the United States and then shipped overseas, because it would not have been possible to finalize the purchase during fiscal year 2006 if the agency purchased the vehicle in the country where the office was located.

The cardholder stated that he wrote three checks (two at $25,000 each and a third at $7,811) to purchase the Land Cruiser because the checks have a $25,000 limit printed on them. The convenience check fee on the three checks was over $1,000.

Pursuant to our investigation, the cardholder informed his supervisor that he had intentionally violated agency policy, which requires that vehicles be acquired through GSA unless a waiver is obtained. The cardholder stated that he disagreed with the USDA policy requiring GSA involvement in vehicle acquisition because it was too cumbersome and that USDA needed to issue new policies.

We reviewed supporting documentation showing that the vehicles were shipped overseas to the units that purchased them, but we did not perform work to determine whether the year-end purchase was necessary.
Agency management did not take action when they were made aware of the cardholder’s significant violation of agency policy.

In case 3, four DOD cardholders purchased over $77,000 in clothing and accessories at high-end clothing and other sporting goods stores, including over $45,000 at high-end retailers such as Brooks Brothers. The Brooks Brothers invoices showed that the cardholders paid about $2,300 per person for a number of servicemembers for tailor-made suits and accessories, $7,000 of which were purchased a week before Christmas. According to the purchase card holder, DOD purchased these items to provide servicemembers working at American embassies with civilian attire. While the Department of Defense Financial Management Regulation authorizes a “civilian clothing allowance” when servicemembers are directed to dress in civilian clothing when performing official duty, the purchase card transactions made by these individuals were far greater than the maximum allowable initial civilian clothing allowance of $860 per person.

Case 7 relates to the $13,500 that USPS spent on food at the National Postal Forum in Orlando, Florida, in 2006. For this occasion, USPS paid for 81 dinners averaging over $160 per person for customers of the Postal Customer Council at an upscale steak restaurant. Further, USPS paid for over 200 appetizers and over $3,000 of alcohol, including more than 40 bottles of wine costing more than $50 each and brand-name liquor such as Courvoisier, Belvedere, and Johnny Walker Gold.

In case 9, a NASA cardholder purchased two 60GB iPods for official data storage. During the course of our audit, we found that the iPods were used for personal purposes, such as storing personal photos, songs, and video clips. Further, we question the federal government’s need to purchase iPods for data storage when other data storage devices without audio and video capabilities were available at lower cost.
The purchase card continues to be an effective tool that helps agencies reduce transaction costs for small purchases and provides flexibility in making acquisitions. While the overall failure rates associated with governmentwide purchase card transactions have improved in comparison with previous failure rates at specific agencies, breakdowns in internal controls over the use of purchase cards leave the government highly vulnerable to fraud, waste, and abuse. Problems continue to exist in the areas of authorization of transactions, receipt and acceptance, and accountability of property bought with purchase cards. This audit demonstrates that continued vigilance over purchase card use is necessary if agencies are to realize the full potential of the benefits provided by purchase cards.

We are making the following 13 recommendations to improve internal control over the government purchase card program and to strengthen monitoring and oversight of purchase cards as part of an overall effort to reduce instances of fraudulent, improper, and abusive purchase card activity.

We recommend that the Director of OMB:

Issue a memorandum reminding agencies that internal controls over purchase card activities, as detailed in Appendix B of OMB Circular No. A-123, extend to the use of convenience checks.

Issue a memorandum to agency heads requesting the following:

Cardholders, approving officials, or both reimburse the government for any unauthorized or erroneous purchase card transactions that were not disputed.

When an official directs a cardholder to purchase a personal item for that official, and management later determines that the purchase was improper, the official who requested the item should reimburse the government for the cost of the improper item.
Consistent with the purchase card program’s goal of streamlining the acquisition process, we recommend that the Administrator of GSA, in consultation with the Department of the Treasury’s Financial Management Service:

Provide agencies guidance on how cardholders can document independent receipt and acceptance of items obtained with a purchase card. The guidelines should encourage agencies to identify a de minimis amount, types of purchases that do not require documenting independent receipt and acceptance, or both, and indicate that the approving official or supervisor took the necessary steps to ensure that items purchased were actually received.

Provide agencies guidance regarding what should be considered sensitive and pilferable property. Because purchase cards are frequently used to obtain sensitive and pilferable property, remind agencies that computers, palm pilots, digital cameras, fax machines, printers and copiers, iPods, and similar items are sensitive and pilferable property that can easily be converted to personal use.

Instruct agencies to remind government travelers that when they receive government-paid-for meals at conferences or other events, they must reduce the per diem claimed on their travel vouchers by the specified amount that GSA allocates for the provided meal.

Provide written guidance or reminders to agencies:

That cardholders need to obtain prior approval or subsequent review of purchase activity for purchase transactions that are under the micropurchase threshold.

That property accountability controls need to be maintained for pilferable property, including items obtained with a purchase card.

That cardholders need to notify the property accountability officer of pilferable property obtained with the purchase card in a timely manner.

That property accountability officers need to promptly record, in agency property systems, sensitive and pilferable property that is obtained with a purchase card.
That, consistent with the guidance on third-party drafts in the Department of the Treasury’s Treasury Financial Manual, volume 5, chapter 4-3000, convenience checks issued on purchase card accounts should be minimized, and that convenience checks are to be used only when (1) a vendor does not accept purchase cards, (2) no other vendor that can provide the goods or services can reasonably be located, and (3) it is not practical to pay for the item using the traditional procurement method.

That convenience check privileges of cardholders who improperly use convenience checks be canceled.

We received written comments on a draft of this report from the Acting Controller of OMB (see app. III) and the Administrator of GSA (see app. IV). OMB agreed with all three of our recommendations. OMB agreed that the efficiencies of the purchase card program are not fully realized unless federal agencies implement strong and effective controls to prevent purchase card waste, fraud, and abuse. To that end, OMB noted that it had proactively designated government charge card management as a major focus area under Appendix B of Circular No. A-123, Improving the Management of Government Charge Card Programs. With respect to the recommendations contained in this report, OMB is proposing to issue further guidance reminding agencies that Appendix B extends to convenience checks as well as government charge cards, and that agency personnel have financial responsibility with regard to unauthorized and erroneous purchase card transactions.

While GSA wholly or partially concurred with four recommendations, it generally disagreed with the majority of our recommendations. Specifically, GSA stated that it was not within the scope of its authority to issue guidance to agencies with respect to asset accountability and receipt and acceptance of items purchased with government purchase cards, as these are not strictly purchase card issues.
Further, GSA stated that there are more effective ways to deal with purchase card misuse or abuse than issuing “redundant” policy reminders or guidance. It also took exception to our testing methodology.

We agree with GSA that the problems we identified with property accountability and receipt and acceptance go beyond strictly purchase card issues. However, our work over the last several years has consistently shown substantial problems with property accountability and independent receipt and acceptance of goods and services, problems that arose because of the flexibility provided by the purchase card program. We do not believe that our recommendations related to policy guidance and reminders to strengthen internal controls are redundant; our previous recommendations in this area had been targeted at the specific agencies we audited. With respect to governmentwide purchase card issues, GSA’s role as the purchase card program manager puts it in a unique position to identify challenges to agency internal control systems and assist agencies with improving their internal controls governmentwide. We are encouraged by OMB’s support for aggressive and effective controls over purchase cards, and believe that GSA can seek OMB support to overcome the perceived lack of authority. We believe that GSA has a number of tools already at its disposal, such as online training and annual conferences, through which GSA could easily remind cardholders and approving officials to pay particular attention to governmentwide issues, including the asset accountability and independent receipt and acceptance of goods and services identified in this report.

We also reiterate support for our testing methodology, which included systematic testing of key internal controls through statistical sampling. The following contains more detailed information on GSA’s comments, along with our response.

GSA concurred with 3 of 10 recommendations.
Specifically, GSA concurred with 2 recommendations to improve controls over convenience checks and 1 recommendation related to approval of purchases below the micropurchase threshold. GSA agreed to provide written guidance to agencies that convenience check use should be minimized, and that improper use of convenience checks would result in cancellation of convenience check privileges. As part of its concurrence, GSA noted that it is not practical to strictly prohibit the use of convenience checks given the unique nature of some suppliers or services acquired by agencies and vendor refusal to accept purchase cards. It was not our intent to completely eliminate the use of convenience checks. As such, we clarified our recommendation to require only that the cardholder make a “reasonable,” not absolute, effort to locate other vendors that can provide the same goods and services and that accept the purchase card prior to using a convenience check. The requested revision is consistent with our intent, and we have therefore made the necessary change to our recommendations.

With respect to the third recommendation, related to approval of micropurchases, GSA agreed that cardholders need to obtain prior approval or subsequent review of purchase card activity for purchase transactions that are under the micropurchase threshold. However, GSA believed that OMB needed to take the lead and incorporate this change in its Circular No. A-123. GSA offered to help OMB revise Circular No. A-123 in this regard.

GSA stated that it partially concurred with our recommendation to remind travelers to reduce the per diem claims on their travel vouchers when meals are provided by the government. However, based on its response, it appears that GSA substantially agrees with our recommendation and that the GSA Office of Governmentwide Policy will issue this guidance. In actuality, GSA concurred with our recommendation but disagreed that this was a purchase card issue.
Further, GSA took exception as to whether the requirement to deduct per diem applies to continental breakfasts, stating that continental breakfasts did not constitute “full breakfasts.” Thus, GSA stated that it needs to convene stakeholders in the GSA travel policy community to consider whether the requirement for deducting per diem should be applied to continental breakfasts. We disagree with this assessment. If the costs of the continental breakfasts were in fact not significant, we would not have reported on this finding; however, the basis of our recommendation rests primarily on the fact that GSA itself paid for continental breakfasts costing $23 per person, which was greater than the breakfast portion of the government per diem established by GSA for any city in the United States. GSA then proceeded to reimburse the same employees the breakfast portion of per diem, in effect paying twice for breakfasts. We disagree with GSA that this is an appropriate treatment of continental breakfasts, as it implies that it is appropriate for taxpayers to pay twice for a government traveler’s meal. Consequently, we reiterate the need for GSA to promote prudent management of taxpayers’ money, and our support for requiring travelers to reduce their per diem if they took advantage of the continental breakfasts provided.

GSA disagreed with all of our recommendations related to receipt and acceptance and controls over accountable and pilferable property. GSA stated that these issues were not within the purview of the GSA SmartPay® program or the scope of GSA SmartPay® contracts. Further, GSA stated that other approaches would be more effective at addressing purchase card abuse and misuse than issuing “redundant” policy guidance and reminders.
With respect to receipt and acceptance, GSA stated that it did not have the authority to encourage agencies to identify a de minimis amount, types of items that do not require receipt and acceptance, or both, or to determine how approving officials should document receipt and acceptance. With respect to accountable property, GSA did not believe that it should provide reminders to agencies that computers and similar items are sensitive and pilferable property that can easily be converted to personal use. GSA argued that what constitutes sensitive and pilferable property is defined by agencies and is not within its purview. GSA also believes that it does not have authority to remind cardholders to maintain accountability of, and notify property managers when, pilferable property is acquired with purchase cards. Finally, GSA does not believe that it can issue reminders to property managers to record, in a timely manner, pilferable property acquired with purchase cards in their property management systems. GSA suggested we modify these recommendations accordingly. With respect to receipt and acceptance, we agree that GSA alone should not issue guidance concerning agencies’ internal controls over purchase cards and related payment process. We reiterate that we did not ask GSA to take actions in isolation—instead, we recommended that GSA work with the Department of the Treasury’s Financial Management Service to provide guidance on improving internal controls while at the same time streamlining the acquisition process. After all, streamlining the acquisition process is a key objective of the purchase card program. We believe this could be achieved, in part, by requiring independent receipt and acceptance only for items above a de minimis amount. Further, governmentwide guidance in this area would not be redundant—the fact that no current guidance exists demonstrates the need for consistent policy governmentwide that all agencies can follow. 
Consistent guidance is crucial to engendering taxpayers’ confidence in the purchase card program—as we stated above, our previous audits and our current work showed that ineffective receipt and acceptance of goods and services acquired with the purchase card is a widespread, governmentwide problem. Furthermore, OMB indicated that it was extremely concerned about purchase card abuse and supported our recommendations designed to improve internal controls over the program. We believe that GSA can adopt a proactive approach and coordinate with OMB to obtain its support to overcome the perceived obstacles. In our opinion, the purchase card program will continue to expose the federal government—and the taxpayers—to fraud, waste, and abuse, unless GSA helps facilitate a governmentwide solution. Similarly, GSA argued that it did not have the authority to take the recommended actions with respect to property accountability. As with independent receipt and acceptance, our work continues to demonstrate that accountability for property acquired with purchase cards is ineffective across many agencies. For example, the purchase card program provides cardholders the ability to acquire sensitive and pilferable items directly from vendors. This process results in cardholders bypassing the normal property receipt and acceptance procedures, which increases the risk that the item will not be recorded in an agency’s list of accountable property. GSA needs to recognize this risk (and other inherent risks) created by purchase card use and proactively work with agencies to improve the accountability of property acquired with government purchase cards. We also believe that our recommendations fully take into account the extent of GSA’s authority—to that end, our recommendation called for GSA to provide agencies guidance and reminders to improve internal controls over asset accountability. 
Even though GSA has already issued guidance related to the proper use of the purchase card program through online training, refresher courses, and annual conferences, GSA should go a step further and address control weaknesses related to property accountability and receipt and acceptance. GSA’s position contrasted sharply with that of OMB, which, in its comments on our report, expressed support for aggressive and effective controls over purchase cards. We believe that GSA can take advantage of the diverse tools already at its disposal, such as online training and annual conferences, with which GSA could easily remind cardholders and approving officials to pay particular attention to governmentwide issues, including asset accountability and independent receipt and acceptance of goods and services identified in this report. Overall, our recommendations are focused on GSA taking a proactive approach to improve the success of the purchase card program. Last year, the federal government spent nearly $18 billion using purchase cards. While the purchase card program has achieved significant savings, a program of this magnitude needs to focus on both preventive and detective controls to prevent fraud, waste, and abuse. In its response, GSA also pointed out that the new SmartPay® 2 contract should provide better management tools to agencies. However, the changes GSA identified in SmartPay® 2 were mostly related to data mining for fraud, waste, and abuse after a potentially fraudulent or improper transaction had taken place, but did not address the issues we raised in this report. As our previous work indicated, while detection can help reduce fraud, waste, and abuse, preventive controls are a more effective and less costly means to minimize fraud, waste, and abuse. The recommendations we made, to which GSA took exception, were meant to improve these up-front controls. GSA also took exception to our methodology, arguing that we improperly failed items as part of our control testing. 
GSA argued that some unauthorized purchases were still appropriate purchases. We believe that this argument is flawed. Standards for Internal Control in the Federal Government states that transactions should be authorized and executed only by persons acting within the scope of their authority. In other words, authorization is the principal means of assuring that only valid transactions are initiated or entered into and, consequently, without authorization, adequate assurance does not exist that the items purchased were for authorized purposes only. Our statistical sampling was designed to test authorization control, and the results we reported reflected items that did not pass this attribute. Such attribute testing is a widely accepted and statistically valid methodology for internal control evaluations. GSA also stated that our report did not adequately address the areas of personal responsibility and managerial oversight. We disagree. We recommended that OMB require agencies to hold cardholders financially responsible for improper and wasteful purchases, and OMB agreed to implement our recommendations; we believe that this would contribute to holding cardholders accountable to management for their actions. Further, our past reports on purchase card management have always focused on managerial oversight. However, it is not feasible within the scope of a governmentwide audit to test managerial oversight at every government agency. Consequently, we focused on providing GSA, the manager of the governmentwide purchase card program, with recommendations that could contribute to improving management oversight at the agencies. Finally, GSA disagreed with our characterization that travelers who did not reduce the per diem claimed on their travel voucher when dinners were provided may be engaging in potentially fraudulent activities. 
Because we are unable to establish that these travelers acted with the requisite knowledge and willfulness necessary to establish either a false statement under 18 U.S.C. §1001 or a false claim, we have characterized such activities as potentially fraudulent. GSA’s and OMB’s comments are reprinted in appendixes III and IV. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies of this report to the Director of OMB and the Administrator of GSA. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-1117. Washington, D.C.: September 28, 2006. Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-957T. Washington, D.C.: July 19, 2006. Lawrence Berkeley National Laboratory: Further Improvements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-987R. Washington, D.C.: August 6, 2004. Lawrence Livermore National Laboratory: Further Improvements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-986R. Washington, D.C.: August 6, 2004. Pacific Northwest National Laboratory: Enhancements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-988R. Washington, D.C.: August 6, 2004. Sandia National Laboratories: Further Improvements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-989R. 
Washington, D.C.: August 6, 2004. VHA Purchase Cards: Internal Controls Over the Purchase Card Program Need Improvement. GAO-04-737. Washington, D.C.: June 7, 2004. Purchase Cards: Increased Management Oversight and Control Could Save Hundreds of Millions of Dollars. GAO-04-717T. Washington, D.C.: April 28, 2004. Purchase Cards: Steps Taken to Improve DOD Program Management, but Actions Needed to Address Misuse. GAO-04-156. Washington, D.C.: December 2, 2003. Forest Service Purchase Cards: Internal Control Weaknesses Resulted in Instances of Improper, Wasteful, and Questionable Purchases. GAO-03-786. Washington, D.C.: August 11, 2003. HUD Purchase Cards: Poor Internal Controls Resulted in Improper and Questionable Purchases. GAO-03-489. Washington, D.C.: April 11, 2003. FAA Purchase Cards: Weak Controls Resulted in Instances of Improper and Wasteful Purchases and Missing Assets. GAO-03-405. Washington, D.C.: March 21, 2003. Purchase Cards: Control Weaknesses Leave the Air Force Vulnerable to Fraud, Waste, and Abuse. GAO-03-292. Washington, D.C.: December 20, 2002. Purchase Cards: Navy is Vulnerable to Fraud and Abuse but Is Taking Action to Resolve Control Weaknesses. GAO-02-1041. Washington, D.C.: September 27, 2002. Purchase Cards: Control Weaknesses Leave Army Vulnerable to Fraud, Waste, and Abuse. GAO-02-732. Washington, D.C.: June 27, 2002. Government Purchase Cards: Control Weaknesses Expose Agencies to Fraud and Abuse. GAO-02-676T. Washington, D.C.: May 1, 2002. Purchase Cards: Control Weaknesses Leave Two Navy Units Vulnerable to Fraud and Abuse. GAO-02-32. Washington, D.C.: November 30, 2001. We performed a forensic audit of executive agencies’ purchase card activity for the 15 months ending September 30, 2006. 
Specifically, we (1) determined the effectiveness of internal controls intended to minimize fraudulent, improper, and abusive transactions by testing two internal control attributes related to transactions taken from two statistical samples and (2) identified specific examples of potentially fraudulent, improper, and abusive transactions through data mining and investigations. We obtained the databases containing agency purchase and other government charge card transactions for the 12-month period ending June 30, 2006, from Bank of America, Citibank, JP Morgan Chase, Mellon Bank, and U.S. Bank. The databases contained purchase, travel, and fleet card transactions. Using information provided by the banks, we queried the databases to identify transactions specifically related to purchase cards. We performed other procedures—including reconciliation to purchase card data that the General Services Administration (GSA) published—to confirm that the data were sufficiently reliable for the purposes of our report. Our statistical sampling work covered purchase card activity at executive agencies. We define executive agencies as federal agencies that are required to follow the Federal Acquisition Regulation (FAR), including executive departments, independent establishments, and wholly owned federal government corporations as defined by the United States Code. We excluded transactions from the legislative and judicial branches, entities under treaty with the United States, and federal agencies with specific authority over their own purchase card programs. To assess compliance with key internal controls, we extracted and tested two statistical (probability) samples of 96 transactions each. The first sample consisted of transactions exceeding $50 taken from a population of over 16 million purchase card transactions totaling almost $14 billion. 
We also selected a second sample from the population of over 600,000 transactions totaling nearly $6 billion that exceeded the $2,500 micropurchase threshold. We selected this second sample because of additional acquisition requirements associated with purchases over the micropurchase threshold, and the high dollar amount associated with these transactions. Specifically, while only 3 percent of governmentwide purchase card transactions from July 1, 2005, through June 30, 2006, were over the micropurchase threshold, they accounted for 44 percent of the total dollars spent during that period. With our probability sample, each transaction in the population had a nonzero probability of being included, and that probability could be computed for any transaction. Each sample element was subsequently weighted in the analysis to account statistically for all the transactions in the population, including those that were not selected. Because we followed a probability procedure based on random selection, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent interval (e.g., plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from the samples of executive agency purchase card activity have sampling errors (confidence interval widths) of plus or minus 10 percentage points or less. 
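The precision figure quoted above can be illustrated with a normal-approximation confidence interval for a sample proportion. This is only a sketch under stated assumptions: the audit's actual estimates were computed from weighted sample data, and the failure count used below is hypothetical.

```python
import math

def proportion_ci(failures, n, z=1.96):
    """Two-sided 95 percent normal-approximation confidence interval
    for a sample failure rate. Illustrative only: it ignores the
    sample weighting applied in the actual audit estimates."""
    p = failures / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical example: 39 of 96 sampled transactions fail an attribute.
rate, low, high = proportion_ci(39, 96)
print(f"estimate {rate:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```

For a sample of 96 transactions, the resulting half-width stays at roughly 10 percentage points or less, consistent with the sampling errors reported for these samples.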
Our audit of key internal controls focused on whether agencies provided adequate documentation to substantiate that (1) purchase card transactions were properly authorized and (2) goods and services acquired with purchase cards were independently received and accepted. As part of our tests of internal controls, we reviewed applicable federal laws and regulations related to the FAR and purchase card uses. We also identified and applied the internal control principles contained in Standards for Internal Control in the Federal Government, Audit Guide: Auditing and Investigating the Internal Control of Government Purchase Card Programs, and agencies’ purchase card policies and procedures. Furthermore, for purchases exceeding the micropurchase threshold of $2,500, we tested FAR requirements that the cardholder use required vendors and promote competition by soliciting bids—or justify the departure from this requirement in writing. To determine whether a transaction was properly authorized, we reviewed documentation to ascertain if an individual other than the cardholder was involved in the approval of the purchase. To determine that proper authorization existed, we used reasonable evidence for authorization of micropurchases from $50 to $2,500, such as purchase requests from responsible officials, requisitions, e-mails, and other documents that identify an official government need, including blanket authorizations for routine purchases with subsequent approval. For purchase card transactions exceeding the micropurchase threshold of $2,500, we required prior purchase authorization, such as a contract, a requisition, or other approval document. Additionally, we looked for evidence that the cardholder used required vendors (as required by the Javits-Wagner-O’Day Act (JWOD)) and solicited quotes to promote competition (or provided evidence justifying departure from this requirement, such as an annotation justifying the use of a sole source). 
To determine whether goods or services were independently received and accepted, we reviewed supporting documentation provided by the agency. For each transaction, we compared the quantity, price, and item descriptions on the vendor invoice and shipping receipt to the purchase requisition to verify that the items received and paid for were actually the items ordered. We also determined whether evidence existed that a person other than the cardholder was involved in the receipt of the goods or services purchased. We concluded that independent receipt and acceptance existed if the vendor invoice, shipping documents, and receipt materially matched the transaction data, and if the signature or initial of someone other than the cardholder was on the sales invoice, packing slip, bill of lading, or any other shipping or receiving document indicating receipt. For statistical sample and data-mining transactions containing accountable or highly pilferable property, we performed an inventory to determine whether executive agencies maintained accountability over the physical property items obtained with government purchase cards. Because each agency had its own threshold for accountable property, we were not able to test accountable property against each agency’s threshold for this governmentwide audit. Consequently, we defined accountable property as any property item exceeding a $350 threshold and containing a serial number. We defined highly pilferable items as items that can be easily converted to personal use, such as cameras, laptops, cell phones, and iPods. We selected highly pilferable property at any price if it was easily converted to personal use. The purchase card data provided by the banks did not always contain adequate details to enable us to isolate property transactions for statistical testing. Because we were not able to take a statistical sample of these transactions, we were not able to project failure rates for accountable and pilferable property. 
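The two working definitions above—accountable property over $350 with a serial number, and highly pilferable items at any price—can be expressed as a small screening rule. This is a hypothetical sketch: the function, field names, and keyword list are illustrative and do not represent the audit team's actual tooling, which relied on analyst review of transaction descriptions.

```python
# Keywords standing in for the pilferable categories named above;
# the actual review relied on analyst judgment, not a fixed list.
PILFERABLE_KEYWORDS = ("camera", "laptop", "cell phone", "ipod")

def classify_property(description, price, serial_number):
    """Apply the audit's two working definitions to a purchased item."""
    tags = []
    # Accountable: exceeds the $350 threshold and carries a serial number
    if price > 350 and serial_number:
        tags.append("accountable")
    # Highly pilferable: easily converted to personal use, at any price
    if any(k in description.lower() for k in PILFERABLE_KEYWORDS):
        tags.append("pilferable")
    return tags

print(classify_property("Apple iPod nano", 149.00, None))   # ['pilferable']
print(classify_property("Dell laptop", 1200.00, "SN-123"))  # ['accountable', 'pilferable']
```

Note that a low-priced item with no serial number can still be flagged as pilferable, which matches the audit's decision to select such items at any price.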
Consequently, our tests of property accountability were performed on a nonrepresentative selection of property that we identified when a transaction selected for statistical sampling or data mining contained accountable and pilferable property. For these property items, we identified serial numbers from supporting documentation provided by the agency and, in some cases, by contacting the vendors themselves. To minimize travel costs associated with conducting a physical inventory governmentwide, we requested that each agency provide photographs of the property items, which we compared against the serial numbers originally provided. When we were unable to obtain serial numbers from supporting documentation or from the vendors, we gave the agency the benefit of the doubt and accepted the serial numbers shown in agency-provided photographs as long as the product(s) and quantity matched. In some isolated instances, we performed the physical inventory ourselves. To identify examples of fraudulent, improper, and abusive purchase card activity, we data mined purchase card transactions from July 1, 2005, through September 30, 2006. This period contained an additional 3 months of data subsequent to the period included in our statistical samples. For data-mining purposes, we also included transactions from federal agencies that had been granted specific authority over their own purchase card programs, such as the U.S. Postal Service. In general, we analyzed purchase card data for merchant category codes and vendor names that were more likely to offer goods, services, or both that are on executive agencies’ restricted/prohibited lists, personal in nature, or of questionable government need. We identified split purchases by extracting multiple purchase transactions made by the same cardholder at the same vendor on the same day. For year-end purchases, we identified transactions from purchase card accounts where year-end activity is high compared to the rest of the year. 
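The split-purchase query described above—multiple transactions by the same cardholder at the same vendor on the same day—can be sketched as a grouping pass over the transaction data. Field names are illustrative; the banks' actual data layouts differ.

```python
from collections import defaultdict

def find_split_purchases(transactions):
    """Group transactions by (cardholder, vendor, date) and return the
    groups containing more than one transaction, which may indicate a
    purchase split to stay under the micropurchase threshold."""
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t)
    return [g for g in groups.values() if len(g) > 1]

# Hypothetical data: two same-day charges by one cardholder at one vendor.
txns = [
    {"cardholder": "A", "vendor": "V1", "date": "2006-03-01", "amount": 1500.00},
    {"cardholder": "A", "vendor": "V1", "date": "2006-03-01", "amount": 1400.00},
    {"cardholder": "B", "vendor": "V1", "date": "2006-03-01", "amount": 3000.00},
]
print(len(find_split_purchases(txns)))  # 1
```

The flagged pair totals $2,900—above the $2,500 micropurchase threshold—which is the kind of pattern this extraction surfaces for investigative follow-up.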
With respect to convenience checks, we used various criteria, including identifying instances where convenience checks were written to cash or payees not normally associated with procurement needs and where a large number of convenience checks were written to a single payee, among others. We analyzed the banks’ databases for detailed transaction data, whenever available, for accountable property and highly pilferable items. We then requested and reviewed supporting documentation for over 550 transactions among the thousands we identified. We conducted investigative work, which included additional inquiries and data analysis, when applicable. While we identified fraudulent, improper, and abusive transactions, our work was not designed to identify and we cannot determine the extent of fraudulent, improper, or abusive transactions occurring in the population of governmentwide purchase card transactions. We assessed the reliability of the data provided by (1) performing various testing of required data elements, (2) reviewing financial statements of the five banks for information about the data and systems that produced them, and (3) interviewing bank officials knowledgeable about the data. In addition, we verified that totals from the databases agreed with the total purchase card activity provided by GSA and published on its Web site, in totality and for selected agencies. We determined that the data were sufficiently reliable for the purposes of our report. We conducted this performance audit from September 2006 through February 2008, in accordance with U.S. generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
We performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Gregory D. Kutz, (202) 512-6722 or kutzg@gao.gov. In addition to the contact above, Tuyet-Quan Thai, Assistant Director; James Ashley; Beverly Burke; Bruce Causseaux; Sunny Chang; Dennis Fauber; Danielle Free; Jessica Gray; Ryan Guthrie; Ken Hill; Ryan Holden; Aaron Holling; John Kelly; Delores Lee; Barbara Lewis; Andrew McIntosh; Richard McLean; Aaron Piazza; John Ryan; Barry Shillito; Chevalier Strong; Scott Wrightson; Tina Wu; and Michael Zola made key contributions to this report.
Over the past several years, GAO has issued numerous reports and testimonies on internal control breakdowns in certain individual agencies' purchase card programs. In light of these findings, GAO was asked to analyze purchase card transactions governmentwide to (1) determine whether internal control weaknesses existed in the government purchase card program and (2) if so, identify examples of fraudulent, improper, and abusive activity. GAO used statistical sampling to systematically test internal controls and data mining procedures to identify fraudulent, improper, and abusive activity. GAO's work was not designed to determine the overall extent of fraudulent, improper, or abusive transactions. Internal control weaknesses in agency purchase card programs exposed the federal government to fraud, waste, abuse, and loss of assets. When testing internal controls, GAO asked agencies to provide documentation on selected transactions to prove that the purchase of goods or services had been properly authorized and that when the good or service was delivered, an individual other than the cardholder received and signed for it. Using a statistical sample of purchase card transactions from July 1, 2005, through June 30, 2006, GAO estimated that nearly 41 percent of the transactions failed to meet either of these basic internal control standards. Using a second sample of transactions over $2,500, GAO found a similar failure rate--agencies could not demonstrate that 48 percent of these large purchases met the standard of proper authorization, independent receipt and acceptance, or both. Breakdowns in internal controls, including authorization and independent receipt and acceptance, resulted in numerous examples of fraudulent, improper, and abusive purchase card use. These examples included instances where cardholders used purchase cards to subscribe to Internet dating services, buy video iPods for personal use, and pay for lavish dinners that included top-shelf liquor. 
GAO identified some of the case studies, including one case where a cardholder used the purchase card program to embezzle over $642,000 over a period of 6 years from the Department of Agriculture's Forest Service firefighting fund. This cardholder was sentenced to 21 months in prison and ordered to pay full restitution. In addition, agencies were unable to locate 458 items of 1,058 total accountable and pilferable items totaling over $2.7 million that GAO selected for testing. These missing items, which GAO considered to be lost or stolen, totaled over $1.8 million and included computer servers, laptop computers, iPods, and digital cameras. For example, the Department of the Army could not adequately account for 256 items making up 16 server configurations, each of which cost nearly $100,000.
The Housing Act of 1949 authorized new rural lending programs for farmers, which were administered by RHS’s predecessor, the Farmers Home Administration, within the U.S. Department of Agriculture (USDA). RHS now facilitates homeownership, develops rental housing, and promotes community development through loan and grant programs in rural communities. Over the decades, Congress changed the requirements for rural housing eligibility—for example, by changing population limits—and rural housing programs have evolved to serve low- and moderate-income people of all occupations. The current definition of rural considers factors such as whether an area is contained in an MSA, is “rural in character,” and “has a serious lack of mortgage credit for lower- and moderate-income families.” RHS’s Section 521 Rental Assistance Program is the agency’s largest line-item appropriation, with an annual budget of more than $500 million. The program provides rental subsidies for approximately 250,000 tenants who pay no more than 30 percent of their income for rent (RHS pays the balance to the property owner). The units in which the tenants live are created through RHS’s Section 515 Multifamily Direct Rural Rental Housing Loans and Section 514 Multifamily Housing Farm Labor Loans programs. The Section 515 and 514 programs provide developers loans subsidized with interest rates as low as 1 percent to help build affordable rental housing for rural residents and farm workers. RHS staff determine which areas are eligible for RHS housing programs by interpreting statutory requirements and agency guidance; however, their determinations involve judgment and may be open to question. Additionally, some eligibility requirements often result in areas with similar characteristics receiving different designations. For example, the requirement that an eligible area cannot be part of an MSA often results in ineligibility for what appears to be a rural area. 
Also, the “lack of credit” in rural areas remains an eligibility requirement, even though USDA has reported that a lack of income and of ability to pay the mortgage appear to be greater problems for rural Americans than a lack of credit. Section 520 of the Housing Act of 1949, as amended, defines rural for most RHS housing programs. Using the statute and instructions promulgated by the national office, state and local (together, field) offices determine the boundaries to delineate eligible areas from ineligible areas—a task field office officials acknowledged is time-consuming, based on judgment, and can be problematic. The statutory definition generally identifies eligible rural areas as those with populations up to 20,000 and defines “rural” and “rural areas” as any open country or any place, town, village, or city that is not part of or associated with an urban area. Specifically, there are several population levels at which communities may be determined eligible, but as a community’s population increases, the statute imposes additional requirements that include being “rural in character” (a concept that is not defined in the statute), having a serious lack of mortgage credit, or not being located within an MSA. Certain communities with populations above 10,000 but not exceeding 25,000 may be “grandfathered in,” based on prior eligibility, if they still meet the “rural in character” and “lack of credit” criteria. USDA’s instructions give its field offices flexibility in implementing the statute. Field office officials said that drawing the eligibility boundaries required an element of judgment because “rural in character” is open to interpretation—even with the overall national guidance on the statute and review of census populations, MSA standards, maps, aerial photographs, and visits to communities. 
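The statutory population tiers described above can be summarized in a rough screening sketch. This is a simplification for illustration only—it is not a statement of the statute, and the “rural in character” and “lack of credit” judgments it defers to require field-office review that cannot be reduced to code.

```python
def population_screen(population, previously_eligible=False):
    """Rough first-pass screen mirroring the tiers described above.
    Every outcome other than the last still requires field-office
    judgment (rural in character, lack of credit, MSA status)."""
    if population <= 10000:
        return "potentially eligible if not part of or associated with an urban area"
    if population <= 20000:
        return "further tests: rural in character, lack of credit, MSA status"
    if population <= 25000 and previously_eligible:
        return "potentially eligible under grandfathering if criteria still met"
    return "ineligible by population"
```

Under this sketch, a community of 24,000 is screened out unless it was previously eligible—echoing the Lamont/Taft contrast discussed below, where grandfathering turned on the timing of a community's population growth.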
Even when local supervisors fully understand the local conditions and rural character of an area, finding a way to equitably decide on a boundary is sometimes problematic. For instance, field staff in Maryland told us that in response to December 2002 national guidance, they stopped using natural features such as rivers or mountains as eligibility boundaries for communities. Maryland now uses only roads. Figure 1 shows a new boundary, a road that divides the eligible area on the left from the ineligible area on the right. RHS local office officials told us that the “road only” criterion forced them to find the nearest public road to a populated section of Hagerstown, which happens to go through farmland. The result is that apparently similar rural areas received different designations. Figure 2 shows an area in Brookside, Ohio, where the city line divides the eligible from the ineligible area. The Maryland example illustrates that using the only physical boundary available resulted in one piece of farmland receiving a rural designation and the other not. The Brookside example shows that using a political boundary also did not necessarily result in a readily discernible urban-rural difference. Our analysis of RHS eligible areas nationwide, compared with census data, found approximately 1,300 examples where communities with populations at or below 10,000 were within or contiguous with urban areas that had populations of 50,000 or more. The statute states that eligible communities cannot be a part of or associated with an urban area. Some field staff determinations of eligibility in these cases might be questionable, as some of these communities, despite their low populations, might not be considered rural and, thus, might not be eligible. For example, field staff told us that Belpre, Ohio, is eligible for RHS programs because it meets both the population and “rural in character” requirements. 
However, Belpre is contiguous with Parkersburg, West Virginia, which has a population of more than 33,000 (see fig. 3). In addition, the 2000 census considers Belpre, along with Parkersburg and Vienna, West Virginia, as part of an urbanized area because its total population exceeds 50,000. Although it is across the Ohio River from Parkersburg, bridges have connected Belpre and Parkersburg for decades and, according to a Belpre city employee, many people from Belpre work in Parkersburg. Furthermore, most of Belpre has a population density of 1,000 people or more per square mile, which the Census Bureau considers “densely settled” and a measure of urbanization. For these reasons, it is unclear whether Belpre meets the eligibility requirements. Changes to the way eligibility is defined might allow RHS to better designate “rural” areas and treat communities with similar characteristics more consistently. For instance, eliminating the MSA requirement and “grandfathering” might help RHS better serve its clients. To illustrate, we found rural communities with populations exceeding 10,000 that were directly impacted by the MSA and “grandfather” restrictions. Because MSAs are county-based and may contain both urban and rural areas, the MSA restriction and the grandfathering of certain communities resulted in some communities being eligible while others with similar demographic profiles were ineligible. We looked at two communities within the Bakersfield, California, MSA, which is basically rural outside the environs of Bakersfield (see fig. 4). Lamont was grandfathered because it lost eligibility when its population went above 10,000 at the 1980 census. Taft’s population was already over 10,000 prior to the 1980 census, so Taft was not eligible for grandfathering. 
The right side of the figure shows what would happen if MSAs and grandfathered eligibility were removed from the equation and a density-based system such as the Census Bureau’s urbanized areas/urban clusters were used to indicate changes in population. Taft would be in its own urban cluster outside of the Bakersfield urbanized area, which happens to include Lamont. Based on our visit, we believe this scenario, where the more rural community would be the one eligible, is more in line with the overall purpose of the legislation than the current situation. In another example, by eliminating the MSA criterion, RHS could review the eligibility of Washington Court House and Circleville, Ohio, based on population and rural character criteria. Additionally, using density-based mapping could help RHS draw boundaries around these communities, which although Census-designated as “urban clusters,” still meet rural housing program population requirements (see fig. 5). The statute imposes a requirement to demonstrate a serious lack of mortgage credit for lower- and moderate-income families in communities with populations of 10,001 to 25,000. RHS has a policy stating that a serious lack of mortgage credit at rates and terms comparable with those offered by the agency exists in all rural areas. However, a study by USDA’s Economic Research Service concluded that credit problems in rural areas are primarily limited to sparsely populated or remote rural areas; such communities generally do not fall into the population range specified above. Many of the RHS officials and industry experts with whom we spoke also saw the primary “credit” problem as lack of income rather than lack of credit. Additionally, eligibility requirements for RHS programs are based on income levels. The agency uses funding set asides, funding allocations, application reviews, and state-level strategic plans to determine areas and populations of greatest need.
As a result, RHS program activity already is focused on income issues, and given RHS’s blanket policy, the “lack of credit” requirement is not central to determining participant eligibility. We reported that weaknesses in RHS’s budget estimation and oversight of rental assistance funds had resulted in significantly overestimated budget levels and increased the risk that the agency was not efficiently or appropriately budgeting and allocating resources. Additionally, RHS lacked sufficient internal control to adequately monitor the disbursement of rental assistance funds. In March 2004, we reported that since 1990, RHS had consistently overestimated its budget needs for the rental assistance program. Concern had arisen about this issue because in early 2003 RHS reported hundreds of millions of dollars in unexpended balances tied to its rental assistance contracts. Specifically, in estimating needs for its rental assistance contracts, RHS used higher inflation factors than recommended, did not apply the inflation rates correctly to each year of the contract, and based estimates of future spending on recent high usage rather than average rates. First, the agency used inflation factors that were higher than those recommended by OMB for use in the budget process. Second, RHS did not apply its inflation rate separately to each year of a 5-year contract, but instead compounded the rate to reflect the price level in the fifth year and applied that rate to each contract year. The result was an inflation rate that was more than five times the rate for the first year. For example, using these two methods, RHS overestimated its 2003 budget needs by $51 million, or 6.5 percent. Third, RHS based its estimates of future expenditure rates on recent maximum expenditures, rather than on the average rates at which rental assistance funds were being expended.
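The compounding error described above can be shown with hypothetical figures. The 3 percent rate and $1,000 per-unit payment below are illustrative assumptions, not numbers drawn from actual RHS or OMB budgets:

```python
# Hedged illustration of the estimation flaw: applying the fifth-year
# compounded rate to every year of a 5-year contract, instead of
# compounding the rate separately for each contract year.
base = 1_000.0   # hypothetical per-unit annual payment
r = 0.03         # hypothetical annual inflation rate

# Correct approach: compound the rate separately for each contract year.
correct = [base * (1 + r) ** year for year in range(1, 6)]

# Flawed approach: compute the fifth-year compounded rate once, then
# apply that same rate to every year of the contract.
fifth_year_rate = (1 + r) ** 5 - 1   # about 15.9%, more than 5x the 3% rate
flawed = [base * (1 + fifth_year_rate) for _ in range(1, 6)]

overestimate = sum(flawed) - sum(correct)
print(f"fifth-year rate applied to each year: {fifth_year_rate:.1%}")
print(f"total overestimate on this contract: ${overestimate:,.2f}")
```

Even at a modest 3 percent rate, the fifth-year compounded rate exceeds five times the annual rate, which is consistent with the "more than five times" result noted above.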
Additionally, our analysis of rental assistance payment data showed that the agency had overestimated its budget needs almost every year since 1990, the earliest year for which we gathered data. Where we were able to obtain sufficient data from RHS, our analysis showed that if RHS had used and correctly applied OMB inflation rates to its base per-unit rates, its estimates would have been closer to actual expenditures (see fig. 6). We also reported that RHS was not adhering to internal control standards regarding segregation of duties, rental assistance transfers, and tenant income verification reviews. A single employee within the agency was largely responsible for both the budget estimation and allocation processes for the rental assistance program. According to GAO internal control standards, key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. Moreover, RHS did not have a comprehensive policy for transferring rental assistance. As a result, insufficient guidance on the transfer process limited RHS’s ability to move unused rental assistance to properties that had tenants with the greatest need. Finally, because RHS conducts reviews infrequently and covers a small percentage of tenant files, the agency cannot reasonably ensure that tenants’ income and assets, and ultimately rental assistance payments, are adequately verified. RHS’s national, state, and local offices share responsibility for monitoring the rental assistance program, with the local offices performing the primary supervisory review every 3 years. These triennial supervisory reviews are RHS’s primary tool for detecting misreporting of tenant income, which may result in unauthorized rental assistance payments. But the shortcomings in the review process increase the risk that RHS will provide rental assistance to tenants who may not be eligible.
Alternate methods of verifying tenant information, such as internal database checks and wage matching, also have limited effectiveness but could help improve internal control if properly designed or implemented. Today we are releasing a report addressed to the RHS Administrator on internal control issues in the Information Resource Management (IRM) databases. We issued the report as a follow-up to our work addressing the definition of rural used for rural housing programs. During the earlier review, we identified several issues that raised concerns about the accuracy of the information in the IRM databases. For example, while we originally intended to geocode (match) 5 years of the national RHS housing loan and grant portfolio to specific communities, the time needed to ensure the reliability of the data required us to limit much of our analysis to five states. In reviewing 29,000 records for five states we found incorrect, incomplete, and inconsistent entries. For example, over 8 percent of the community names or zip codes were incorrect. Additionally, inconsistent spellings of community names distorted the number of unique communities in the database. More than 400 entries lacked sufficient information (street addresses, community names, and zip codes) needed to identify the community to which the loan or grant had been made. As a result, some communities served by RHS were double counted, others could not be counted, and the ability to analyze the characteristics of communities served was compromised. Since these data form the basis of information used to inform Congress (and the public) about the effectiveness of RHS programs, data accuracy is central to RHS program management and the ability of Congress and other oversight bodies to evaluate the agency and its programs. 
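Record-level checks of the kind that could catch such incorrect, incomplete, or inconsistent entries are straightforward to express in code. The sketch below is hypothetical: the field names and sample records are illustrative and do not reflect the actual IRM database schema.

```python
# Minimal sketch of record-level validation for loan/grant entries.
# Field names and sample records are hypothetical, not the IRM schema.
import re

def normalize_community(name):
    """Collapse spacing and case variants so one community is not
    counted as several distinct communities."""
    return re.sub(r"\s+", " ", name).strip().title()

def validate(record):
    """Return a list of problems found in one loan/grant record."""
    problems = []
    for field in ("street_address", "community", "zip_code"):
        if not record.get(field, "").strip():
            problems.append(f"missing {field}")
    zip_code = record.get("zip_code", "").strip()
    if zip_code and not re.fullmatch(r"\d{5}(-\d{4})?", zip_code):
        problems.append("malformed zip_code")
    return problems

records = [
    {"street_address": "12 Main St", "community": "Belpre", "zip_code": "45714"},
    {"street_address": "", "community": " BELPRE ", "zip_code": "4571"},
]
for rec in records:
    print(normalize_community(rec["community"]), "->", validate(rec) or "ok")
```

Simple edits like these, applied at data entry, would address both the incorrect zip codes and the inconsistent community spellings described above; they would not, however, substitute for second-party review where entries are plausible but wrong.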
While the agency has worked to improve its management information systems (for example, since 2002, the agency has spent $10.3 million to improve its management information systems, including developing single and multifamily program data warehouses, which were designed to improve its reporting capabilities), the system still relies upon information collected and entered from field offices. However, RHS does not have procedures for second-party review of the data in IRM systems. Moreover, while the IRM databases have edit functions in place that are intended to prevent the entry of nonconforming data (such as the entry of a community name in a street address field), the functions are not preventing incorrect or incomplete entries. Until RHS can demonstrate that its edit functions or other data entry design features can ensure the accuracy and completeness of the data in the IRM databases, second-party review would be necessary. Our 2002 report to this subcommittee on RHS’s Section 515 multifamily program concluded that with little new construction and limited prepayment at that time, maintaining the long-term quality of the aging housing stock in the program portfolio had become the overriding issue for the program. We found that RHS did not have a process to determine and quantify the portfolio’s long-term rehabilitation needs. As a result, RHS could not ensure that it was spending its limited funds as cost-effectively as possible, providing Congress with a reliable or well-supported estimate of what was needed to ensure the physical and fiscal “health” of the multifamily portfolio, and prioritizing those needs relative to the individual housing markets. We recommended that USDA undertake a comprehensive assessment of long-term capital and rehabilitation needs for the Section 515 portfolio.
We also recommended that USDA use the results of the assessment to set priorities for immediate rehabilitation needs and develop an estimate for Congress on the amounts and types of funding needed to deal with long-term needs. In response to our recommendation, RHS commissioned a consulting firm to assess the condition and rehabilitation needs of its multifamily portfolio. RHS released the study in November 2004. The principal findings—that the housing stock represented in the portfolio is aging rapidly and that property reserves and cash flows are not sufficient for basic maintenance or long-term rehabilitation needs—are in line with our findings in 2002. The study concludes that continuing the status quo would put undue stress on the rental assistance budget and proposes leveraged approaches that combine market-based solutions with private-sector funding as a more cost-effective alternative to using only federal dollars. In addition, the study concludes that while its proposed solutions will cost more than current budget levels, delaying actions to address the portfolio’s physical, fiscal, and market issues will result in even greater budget needs in the future. RHS has made progress in improving program management over the past few years. For example, when we began our work on the multifamily loan program in June 2001, agency officials could not provide us with the number of properties in the portfolio or a list of where properties were located. Today, with the exception of some database errors we identified, which RHS officials have committed to correct, RHS knows where its multifamily properties are located and has developed a revitalization strategy to deal with the physical, fiscal, and market issues identified.
However, the agency still faces challenges in areas that include the basic question of how best to determine what areas are rural, how to best manage rental assistance (the largest budget item in RHS), and how to ensure that data entered into management information systems are accurate. Despite these challenges, opportunities exist to provide more flexibility and improve existing processes that could better help RHS serve its clients while responding to the challenges of current fiscal and budget realities. For example, while determining what areas are eligible for rural housing programs will always require an element of judgment, several changes to the current eligibility requirements could help RHS make more consistent eligibility determinations. If MSAs were removed from the eligibility criteria, RHS officials could make determinations for more communities based on population data and “rural character.” And, using an alternative measure such as the Census Bureau’s urbanized areas and urban cluster classifications as a guide could help RHS better draw boundaries around rural areas, because the density-based measures provide finer-scale information. Additionally, eligible communities within MSAs would not need to be “grandfathered” based on previous eligibility, a provision which essentially gives these communities an advantage over similar though ineligible towns located in MSAs. Finally, the “lack of credit” requirement could be removed with no detriment to RHS housing programs. We noted further opportunities for improvement in RHS’s largest program—the rental assistance program, which has an annual budget of over $500 million and provides rental subsidies to about 250,000 rural tenants. Problems with its budget estimating processes caused the agency to consistently overstate its spending needs, resulting in hundreds of millions of dollars in unexpended balances. 
Consistently overstating funding needs for one program also undermines the congressional budget process by making funds unavailable for other programs. In addition, RHS’s internal controls had not provided reasonable assurance that rental assistance resources were being used effectively. We questioned whether internal control weaknesses were preventing rental assistance funds from going to properties with the neediest tenants. RHS has recently moved on a number of fronts to correct the many rental assistance program shortcomings identified in our reports. For example, RHS has told us that it will follow OMB budget estimation guidance, is correcting the program’s segregation-of-duties issues, has issued standardized guidelines on rental assistance transfers, and is revamping its supervisory review process. While it is too early for us to fully review the impact of these changes, we believe that changes in how rental assistance budgets are estimated and the application or strengthening of internal controls, consistent with our recommendations, would result in greater efficiency and resource savings in this pivotal program. Finally, in reviewing RHS property data for selected states, we identified various errors that raise questions about the accuracy of the agency’s data. Although the agency is making efforts to improve its data systems, our findings suggest additional measures could ensure more accurate data entry and reporting, particularly at the field level. In addition to improving the accuracy of the information, such an effort could ensure that RHS’s investment in system upgrades would provide more meaningful and useful information to the agency itself, Congress, and the public.
To improve eligibility determinations in rural housing programs, we suggested that Congress may wish to consider eliminating the MSA criterion, recommending that RHS use density measures as a basis for its eligibility decisions, phasing out the practice of “grandfathering” communities, and eliminating the “lack of credit” requirement. To help the agency verify tenant information, we also suggested that the Congress consider giving RHS access to the Department of Health and Human Services’ National Directory of New Hires (New Hires), which includes centralized sources of state wage, unemployment insurance, and new hires data for all 50 states and would provide nationwide data for wage matching. Congress already granted HUD the authority to request and obtain data from New Hires in January 2004, and as part of its initiative to reduce improper rent subsidies for its rental assistance program, HUD is making New Hires information available to public housing authorities, which are responsible for, among other things, verifying tenant income and calculating rent subsidies correctly. HUD plans to make the data from the New Hires database available to property owners by fiscal year 2006. To more accurately estimate rental assistance budget needs, we recommended that the Secretary of Agriculture require program officials to use and correctly apply the inflation rates provided by OMB in its annual budget estimation processes. To ensure that rental assistance funds are effectively distributed to properties that have tenants with the greatest need, we recommended that the Secretary of Agriculture require program officials to establish centralized guidance on transferring unused rental assistance, improve sampling methods to ensure a sufficient number of tenant households are selected for supervisory reviews, and improve verification of tenant information, including more effective use of alternate methods of income verification.
To improve data entry and accuracy, we recommend that RHS formally advise field staff to establish a second-party review to ensure that data in the IRM databases are accurate and complete, require correction of errors in existing information, and ensure that system edit functions are properly functioning. USDA generally agreed with our matters for congressional consideration, stating that our report on eligibility articulates how the use of MSAs has resulted in disparate treatment of some communities. USDA added that applying a density-based measure might have merit but that further study would be needed to properly define such a measure for nationwide application. We concur with this position. In addition, USDA stated that the “lack of credit” requirement could be removed with no detriment to RHS housing programs. USDA initially disagreed with our finding that its rental assistance budget estimates were too high, questioning whether we demonstrated that using inflation rate projections from the President’s Budget would provide a more accurate budget estimate. However, USDA has now reported that it will adopt OMB estimates, and it appears that RHS now agrees with our report findings. USDA also generally agreed with most of our recommendations on monitoring and internal controls. RHS has recently issued regulations and an asset management handbook on transferring unused rental assistance and expanded guidance on income verification. Also, it appears that RHS is acting on our recommendation to improve sampling methods to ensure a sufficient number of tenant households are selected for supervisory reviews; that is, the agency has informed us that it is revamping that process. Finally, the RHS Administrator has generally agreed to implement our recommendations on the IRM databases. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or members of the Subcommittee may have.
For more information regarding this testimony, please contact William B. Shear at (202) 512-4325 or shearw@gao.gov or Andy Finkel at (202) 512-6765 or finkela@gao.gov. Individuals making key contributions to this testimony also included Martha Chow, Katherine Trimble, and Barbara Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The rural America of 2005 is far different from the rural America of the 1930s, when the federal government first began to provide housing assistance to rural residents. Advances in transportation, computer technology, and telecommunications, along with the spread of suburbia, have linked many rural areas to urban areas. These changes, along with new fiscal and budget realities, raise questions about how Rural Housing Service (RHS) programs could most effectively and efficiently serve rural America. This testimony is based on a report on how RHS determines which areas are eligible for rural housing programs, three reports on RHS's rental assistance budgeting and distribution processes, and a report we are releasing today on internal control issues with RHS's loans and grants databases. GAO found that while RHS has significantly improved the housing stock in rural America and has made progress in addressing problems, several issues prevent the agency from making the best use of resources. Specifically: (1) Statutory requirements for program eligibility, including those related to metropolitan statistical areas (MSA), "grandfathering" communities, and demonstrating a "serious lack of mortgage credit," are of marginal utility. For example, using density measures rather than MSAs might allow RHS to better differentiate urban and rural areas, and phasing out the "grandfathering" of communities could better ensure that RHS makes more consistent eligibility determinations; (2) RHS has consistently overestimated its rental assistance budget needs by using higher inflation rates than recommended by the Office of Management and Budget and incorrectly applying those rates. Also RHS lacked sufficient internal controls to adequately monitor the use of rental assistance funds, particularly for fund transfers and income verifications. 
RHS has been taking actions that should correct many of the rental assistance shortcomings GAO identified; and (3) GAO found incorrect, incomplete, and inconsistent entries in RHS's loans and grants databases. Until RHS can demonstrate that its system edit functions or other design features can ensure the accuracy of data in its databases, second-party review is necessary to meet internal control standards.
SARS is an emerging respiratory disease that has been reported principally in Asia, Europe, and North America. SARS is believed to have originated in Guangdong Province, China, in mid-November 2002. However, early cases of the disease went unreported, which delayed identification and treatment of the disease, allowing it to spread. On February 11, 2003, WHO received its first official report of an atypical pneumonia outbreak in China. This report stated that 305 individuals were affected by atypical pneumonia and that 5 deaths had been attributed to the disease. SARS was transmitted out of Guangdong Province on February 21, 2003, by a physician who became infected after treating patients in the province. Subsequently, the physician traveled to a hotel in Hong Kong and began suffering from flu-like symptoms. Days later, other guests and visitors at the hotel contracted SARS. As infected hotel patrons traveled to other countries, such as Vietnam and Singapore, and sought medical attention for their symptoms, they spread the disease throughout each country’s hospitals as well as in some communities. Simultaneously, the disease began spreading around the world along international air travel routes as guests from the hotel flew homeward to Toronto and elsewhere. Scientific evidence indicates that SARS is caused by a previously unrecognized coronavirus. Transmission of SARS appears to result primarily from close person-to-person contact and contact with large respiratory droplets emitted by an infected person who coughs or sneezes. After contact, the incubation period for SARS—the time it takes for symptoms to appear after an individual is infected—is generally within a 10-day period. Clinical evidence to date also suggests that people are most likely to be contagious at the height of their symptoms. However, it is not known how long after symptoms begin that patients with SARS are capable of transmitting the virus to others.
There is no evidence that SARS can be transmitted from asymptomatic individuals. Currently, there is no definitive test to identify SARS during the early phase of the illness, which complicates diagnosing infected individuals. As a result, the early diagnosis of SARS relies more on interpreting individuals’ symptoms and identification of travel to locations with SARS transmission. SARS symptoms include fever, chills, headaches, body aches, and respiratory symptoms such as shortness of breath and dry cough—making SARS difficult to distinguish from other respiratory illnesses, such as the flu and pneumonia. The initial symptoms can be quite mild and gradually increase in severity, often peaking in the second week of illness. In some individuals, the disease might progress to the point where insufficient oxygen is getting to the blood. CDC has established criteria, called case definitions, that health care providers use to identify individuals with SARS. In the absence of a definitive diagnostic test for the disease in its early phase, reported cases of SARS are classified into two categories based on clinical and epidemiologic criteria—“suspect” and “probable.” These case definitions continue to be refined as more is learned about this disease. A “suspect” case of SARS includes the following criteria: respiratory illness, and recent travel to an area with current or previously documented suspected transmission of SARS, and/or close contact within 10 days of the onset of symptoms with a person known or suspected to have SARS. A “probable” case of SARS includes the following criteria: all the criteria for “suspect” cases and evidence in the form of chest x-ray findings of pneumonia, acute respiratory distress syndrome (ARDS), or an unexplained respiratory illness resulting in death with autopsy findings of ARDS.
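In pseudocode terms, the two-tier case definitions above reduce to a small classification function. This is an illustrative simplification only: CDC's actual criteria were refined over time and involve clinical judgment that boolean flags cannot capture.

```python
# Simplified sketch of the "suspect"/"probable" case definitions
# described above. The boolean flags compress CDC's clinical and
# epidemiologic criteria; this is illustrative, not a diagnostic tool.
def classify_case(respiratory_illness, recent_travel_to_sars_area,
                  close_contact_within_10_days,
                  pneumonia_ards_or_fatal_ards):
    # "suspect": respiratory illness plus an epidemiologic link
    # (travel to an affected area and/or close contact with a case)
    suspect = respiratory_illness and (recent_travel_to_sars_area
                                       or close_contact_within_10_days)
    if not suspect:
        return "not a case"
    # "probable": all "suspect" criteria plus radiographic or autopsy
    # evidence (pneumonia, ARDS, or fatal unexplained illness with ARDS)
    return "probable" if pneumonia_ards_or_fatal_ards else "suspect"
```

Note how the structure mirrors the text: every "probable" case is first a "suspect" case, with the additional evidentiary requirement layered on top.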
The final determination of whether cases meeting the definitions for “suspect” and “probable” SARS are due to infection with the SARS virus is based on results of testing a blood specimen obtained 28 days after the onset of illness. Furthermore, there is no specific treatment for SARS. In the absence of a rapid diagnostic test, it can be very difficult to distinguish clinically between individuals with SARS and individuals with atypical pneumonia. Therefore, CDC currently recommends that individuals suspected of having SARS be managed using the same diagnostic and therapeutic strategies that would be used for any patient with serious atypical pneumonia. In mild cases of SARS, management at home may be appropriate, while more severe cases may require treatment, such as intravenous medication and oxygen supplementation, that necessitates hospitalization. In 10 to 20 percent of SARS cases, patients require mechanical ventilation. As of July 11, 2003, the mortality rate for SARS was approximately 10 percent, but the mortality rates in individuals over 60 years of age approached 50 percent. As of July 11, 2003, WHO reported that there were an estimated 8,427 “probable” cases from 29 countries, with 813 deaths from SARS. China, Hong Kong, Singapore, Taiwan, and Canada reported the highest number of cases. As of July 15, 2003, the United States identified 211 SARS cases in 39 states (including Puerto Rico), with no related deaths. Of these cases, 175 are classified as “suspect” cases, while 36 are classified as “probable.” In the United States, 34 of the 36 “probable” cases contracted SARS through international travel. However, in the other affected countries, SARS spread extensively among health care workers. For example, of the 138 diagnosed cases in Hong Kong as of March 25, 2003, that were not due to travel, 85 (62 percent) occurred among health care workers; among the 144 cases in Canada as of April 10, 2003, 73 (51 percent) were health care workers. 
In the United States, the Healthcare Infection Control Practices Advisory Committee (HICPAC), a federal advisory committee made up of 14 infection control experts, develops recommendations and guidelines regarding general infectious disease control measures for CDC. Important components of these infectious disease control measures are the following: case identification and contact tracing, transmission control, and exposure management. Case Identification and Contact Tracing. Case identification and contact tracing are considered by health care providers to be important first steps in the containment of infectious diseases in both the community and health care settings. Case identification is the process of determining whether or not a person meets the specific definitions for a given disease. Generally, health care providers interview patients in order to obtain the history, signs, and symptoms of the patient’s complaint and perform a physical examination. Tests, such as blood tests or x-rays, can be performed to provide additional information to help determine the diagnosis. Public awareness of the symptoms of a disease can help case identification to the extent that individuals who believe they exhibit the symptoms seek medical attention. Contact tracing involves the identification and tracking of individuals who may have been exposed to a person with a specific disease. Transmission Control. Transmission control measures decrease the risk for transmission of microorganisms through proper hand hygiene and the use of personal protective equipment, such as masks, gowns, and gloves. These measures also include the decontamination of objects and rooms. The types of transmission control measures used are based on how an illness is transmitted. 
For example, some categories of transmission are as follows: Direct contact: person-to-person contact (e.g., two people shaking hands) and physical transfer of the microorganism between an infected person and an uninfected person. Indirect contact: contact with a contaminated object, such as secretions from an infected person on a doorknob or telephone receiver. Droplet: eye, nose, or mouth of an uninfected person coming into contact with droplets (larger than 5 micrometers) containing the microorganism from an infected person, for example an infected person sneezing without covering his/her mouth with a tissue. Airborne: contact with small droplets (5 micrometers or smaller) or dust particles containing the microorganism, which are suspended in the air. Exposure Management. Exposure management is the separation of infected individuals from noninfected individuals through isolation or quarantine. Isolation refers to the separation of individuals who have a specific infectious illness from healthy individuals and the restriction of their movement to contain the spread of that illness. Quarantine refers to the separation and restriction of movement of individuals who are not yet ill, but who have been exposed to an infectious agent and are potentially infectious. The success of these infectious disease control measures—case identification and contact tracing, transmission control, and exposure management—depends, in part, on the frequent and timely exchange of information. Public health officials and health care providers need to be informed about any modifications of existing infectious disease control measures, the geographic progression of an outbreak, and reports of disease occurrence. Likewise, elevating public knowledge about an infectious disease and its symptoms will enable infected individuals to seek medical attention as soon as possible to contain the spread. 
Infectious disease experts emphasized that existing infectious disease control measures played a pivotal role in containing the spread of SARS in both health care and community settings. The combination of measures used depended on either the prevalence of the disease in the community or the number of SARS patients served in a health care facility. No new measures were introduced to contain the SARS outbreak in the United States; instead, experts said, strict compliance with existing measures and additional vigilance in enforcing their use were sufficient. The successful implementation of all of the infectious disease control measures depended, in part, on effective communication among health care professionals and the general public. To prevent the spread of SARS, public health authorities worked to identify every individual who might have been infected with the disease. Rapid identification of these individuals was critical, but the lack of an effective and timely diagnostic test that could be used during the early stages of the disease to identify those who actually had SARS was an obstacle in halting its spread. Experts acknowledged that identification of individuals who might have been infected with the SARS virus was likely to include many people who did not have SARS, because the case definition of an individual with SARS is not highly specific and the disease resembles other respiratory illnesses, such as pneumonia and the flu. The long incubation period for SARS provided health care workers the opportunity to identify cases and close contacts of infected individuals before those who actually had the SARS virus could spread the disease to others. An important part of case identification is screening individuals for symptoms of a disease.
CDC recommended that all individuals be screened with targeted questions concerning SARS-related symptoms, close contact with a SARS suspect case patient, and recent travel, both when they called for appointments and as soon as possible after they arrived in a health care setting. For SARS, public health and hospital officials in California and New York said hospital emergency room or other waiting room staff routinely used questionnaires to screen incoming patients for fever, cough, and travel to a country with active cases of SARS. They said that signs posted in hospital locations generally used by incoming patients and visitors also listed these criteria and asked individuals who met them to identify themselves to hospital staff. According to these officials, an individual identified as a potential SARS case generally was given a surgical mask and moved into a separate area for further medical evaluation. CDC officials said that these measures were also important for physicians in private practice. The New York City and California health departments used e-mail health alert notices to inform private physicians, such as family practitioners and pediatricians, about these case identification procedures. These notices directed physicians to information posted on the health departments' Web sites. In addition, officials from these health departments provided information about SARS case identification, among other topics, during local meetings for members of the medical community, including physicians in private practice. Toronto, which experienced a much greater prevalence of SARS than the United States, used somewhat different case identification practices. At the height of the outbreak in Toronto, everyone entering a hospital was required to answer screening questions and to have their temperature checked before being allowed to enter.
Toronto public health department officials said this heightened screening was useful for case identification and had the added benefit of educating staff and visitors about SARS symptoms. As a further measure, Toronto health officials established SARS assessment clinics, also known as fever clinics; persons suspecting they might have SARS were asked to go to the clinics rather than directly to hospital emergency rooms to avoid infecting other individuals. However, officials acknowledged several limitations to using these assessment clinics. Because there was no follow-up to an initial assessment, some SARS cases that were in the early stages were not identified, and these individuals later went to hospital emergency rooms. Other difficulties included finding physicians to staff the clinics and implementing hospital-level infectious disease control measures at these separate clinics. For example, some clinics were set up in non-hospital locations—one assessment clinic was set up in a tent near a hospital emergency room entrance, while another was situated in a hospital ambulance bay where emergency personnel transfer patients into the hospital. Contact tracing—the identification and tracking of individuals who had close contact with a “suspect” or “probable” case—is an important component of case identification. Contact tracing to identify individuals at significant risk for SARS required substantial local health department resources. In New York City, four teams from the communicable disease bureau, each composed of a physician or nurse and several field workers, interviewed each suspect or probable case in order to identify contacts. They then called each contact to advise them of their exposure and provided information on monitoring for symptoms of SARS and receiving treatment if necessary. The calls were also used to ensure that the contacts were following infection control measures in the home.
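The case and contact records these tracing teams maintained can be illustrated with a minimal sketch. The class names, fields, and case identifier below are hypothetical, not the health department's actual database schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contact:
    """One traced close contact of a case (illustrative fields only)."""
    name: str
    calls_made: int = 0          # routine monitoring calls logged so far
    symptomatic: bool = False    # symptoms reported on the latest call

@dataclass
class Case:
    """A suspect or probable case and its traced contacts."""
    case_id: str
    status: str                  # "suspect" or "probable"
    contacts: List[Contact] = field(default_factory=list)

def log_call(contact: Contact, symptomatic: bool) -> None:
    """Record one monitoring call and the contact's reported symptom status."""
    contact.calls_made += 1
    contact.symptomatic = symptomatic

# One hypothetical suspect case with two traced contacts.
case = Case("NYC-001", "suspect", [Contact("A"), Contact("B")])
log_call(case.contacts[0], symptomatic=False)
log_call(case.contacts[0], symptomatic=True)   # contact A develops symptoms

needs_followup = [c.name for c in case.contacts if c.symptomatic]
```

A real registry would also record exposure dates, phone numbers, and follow-up outcomes; the point is simply that linking each case to its contacts and logging calls against them makes the routine follow-up workload trackable, which Toronto's paper files could not do.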
Each contact received routine calls during a 10-day period—an average of four calls each from a team member. A New York City health department official characterized the process of contact tracing as labor and time intensive. Standardized forms and electronic contact and case databases helped the teams manage contact tracing. Additionally, routine weekly meetings kept other health department divisions up to date in case their assistance was needed. Furthermore, New York City developed procedure manuals that would allow staff from other departments to be trained quickly if needed to assist members of the communicable disease bureau. The health department official emphasized that the electronic database created to log information about SARS contacts was an important tool to facilitate contact tracing. Toronto officials agreed that daily contact tracing required a large amount of resources. Adding to Toronto's difficulties, its health department did not have an electronic case or contact database and had to rely on separate paper files for each individual. Experts recommended a combination of transmission control measures because not all modes of SARS transmission are known. The primary mode of transmission is direct person-to-person contact, although contact with body fluids and contaminated objects, and possibly airborne spread, may play a role. Therefore, multiple infection control practices that are used for each type of transmission are included in SARS infection control guidelines. Some combination of practices was recommended for both health care and community settings, with more intensive infection control procedures recommended for health care settings. According to several experts, the simple “things your mother taught you,” such as washing your hands and covering your mouth and nose with a tissue when sneezing or coughing, were effective in reducing the spread of SARS.
CDC prepared SARS guidelines for transmission control measures for both inpatient (such as hospitals) and outpatient (such as physician offices) health care settings. These recommendations combined what the CDC calls “standard” hospital transmission control measures with transmission control measures specific to contact and airborne transmission. For the inpatient setting, the guidelines included:

Routine standard precautions, including hand washing. In addition to standard precautions, CDC recommended eye protection—such as goggles or a face shield.

Contact precautions, such as the use of a gown and gloves for encounters with the patient or his/her environment.

Airborne precautions, such as an isolation room with negative pressure relative to the surrounding area, and the use of an N-95 filtering disposable respirator for persons entering the room.

The CDC guidelines suggested that if an isolation room was not available, patients should be placed in a private room, and all persons entering the room should wear N-95 respirators (or respirators offering comparable protection) to protect the wearer from particles expelled by a sick person, such as in coughing or sneezing. CDC recommended that, where possible, a test to ensure that the N-95 respirators fit properly should be conducted. If N-95 respirators were not available for health care personnel, then surgical masks should be worn. Generally, the material of N-95 respirators is designed to filter smaller particles than a surgical mask, and they also are designed to seal more tightly to the face. The health department and hospital officials we spoke with said they generally adopted these CDC guidelines for transmission control in inpatient settings. Officials said one of the most effective practices to contain SARS was frequent hand washing with soap and water.
CDC guidelines also allow the use of waterless alcohol-based hand rubs after coming in contact with “suspect” or “probable” SARS patients or their environments. Additionally, a hospital and a health department official said careful cleaning of SARS patient rooms was an important hygiene measure. Inpatient facilities in the United States generally saw few SARS patients. In New York and California, hospital officials stated that because of the small number of cases seen in each hospital, usually only one or two at a time, the hospitals were able to manage SARS patients in available isolation rooms. Because of the greater prevalence of SARS in Toronto, all 22 acute care hospitals were directed to have a SARS unit with negative pressure relative to the rest of the hospital, individual rooms, and specific staff who cared only for SARS patients. Toronto health department officials later were able to designate four hospitals as SARS hospitals and direct all SARS patients to these four facilities. The use of face masks or N-95 respirators was highly recommended by experts as an effective means of transmission control for SARS in inpatient settings. In one study of health care workers who had extensive contact with SARS patients in five Hong Kong hospitals, researchers found that no health care worker who consistently used either type of face covering became infected. Experts also noted that the use of N-95 respirators and isolation rooms was especially important for high-risk medical procedures, such as intubation, where a patient's secretions are likely to be transformed into a fine spray and spread for a longer distance than large droplets. Officials cautioned, however, that there can be difficulties in the use of N-95 respirators. One public health official said that compliance may be limited in hospitals because some staff have never been properly fitted for the respirators, while others were fitted many years ago and need a more recent fitting.
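The fallback rules in the CDC inpatient guidance described above (a private room when no negative-pressure isolation room is available; a surgical mask when N-95 respirators are not available) amount to simple decision logic. A sketch, with function names of our own choosing:

```python
def room_assignment(isolation_room_available: bool) -> str:
    """Airborne precautions: negative-pressure isolation room preferred,
    with a private room as the fallback per the CDC guidance."""
    if isolation_room_available:
        return "negative-pressure isolation room"
    return "private room"

def respirator_choice(n95_available: bool) -> str:
    """N-95 (or comparable) respirator preferred; surgical mask as fallback."""
    if n95_available:
        return "N-95 respirator"
    return "surgical mask"

# Example: no isolation room free, but N-95 respirators in stock.
plan = (room_assignment(False), respirator_choice(True))
```

The sketch captures only the two fallbacks named in the guidance; the full CDC recommendations also layer on standard and contact precautions regardless of which room or respirator is used.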
In Canada, Ontario's health ministry directed health care workers in the province (which includes Toronto) to employ, when conducting high-risk medical procedures, an additional level of protective equipment that was not recommended in the United States. For example, health care workers used a protective system that included a hood, a full-face respirator, and a complete body covering such as long-sleeved floor-length gowns and gloves. The CDC guidelines for outpatient settings included the same standard and contact precautions outlined for inpatient settings. Reflecting the different types of facilities likely available in a physician office compared to a hospital, the outpatient guidelines did not advocate the use of specialized isolation rooms. Instead, for outpatient settings, the guidelines advised health care personnel to separate the potential SARS patient from others in a reception area as soon as possible, preferably in a private room with negative pressure relative to the surrounding area. At the same time, the guidelines said that a surgical mask should be placed over the patient's nose and mouth—if this was not feasible, the patient should be asked to cover his or her mouth with a disposable tissue when coughing, talking, or sneezing. Transmission control guidelines for community settings incorporated many of the same types of measures for containing the spread of SARS as recommended for health care settings. CDC published SARS transmission control guidelines for two community settings—the workplace and households. The workplace guidelines recommended frequent hand washing with soap and water or waterless alcohol-based hand rubs. Along with hand washing, guidelines for household transmission control included the following:

Infection control precautions should be continued for SARS patients for 10 days after respiratory symptoms and fever are gone.

SARS patients should limit interactions outside the home and should not go to work, school, out-of-home day care, or other public areas during the 10-day period.

During this 10-day period, each patient with SARS should cover his or her mouth and nose with a tissue before sneezing or coughing. If possible, a person recovering from SARS should wear a surgical mask during close contact with uninfected persons. If the patient is unable to wear a surgical mask, other people in the home should wear one when in close contact with the patient.

Disposable gloves should be considered for any contact with body fluids from a SARS patient. Immediately after activities involving contact with body fluids, gloves should be removed and discarded, and hands should be washed. Gloves should not be washed or reused, and were not intended to replace proper hand hygiene.

SARS patients should avoid sharing eating utensils, towels, and bedding with other members of the household, although these items could be used by others after routine cleaning, such as washing or laundering with soap and hot water.

Common household cleaners should be used frequently to disinfect toilets, sinks, and other surfaces touched by patients with SARS.

Exposure management methods such as isolation and quarantine are important infectious disease control measures. These measures were particularly effective for SARS because of its long incubation period, during which infected individuals could be isolated before they became contagious. In fact, experts stated that isolation of infected individuals and quarantine measures used for exposed individuals were critical for the containment of SARS. Isolation of SARS-infected individuals occurred in both health care and home settings. In Toronto, patients were typically isolated in the hospital—even in cases where individuals were not ill enough to need hospitalization.
During the height of Toronto's outbreak, all 22 acute care hospitals were directed to have separate SARS units. In the United States, on the other hand, individuals were hospitalized only if they needed intensive medical treatment. According to an infectious disease expert who consulted with CDC, this practice was prompted by concerns that grouping SARS cases together, such as in a hospital ward, could increase the likelihood of spread to both health care workers and other hospital patients. For home isolation in New York City, each patient and contact was given detailed information that included instructions on what to do if ill, reminders of the importance of calling ahead before going to a physician's office or other health care setting, and information on how to travel to a health care setting without coming in contact with others. These instructions also included guidelines for transmission control measures to be used in the home. For all probable cases, the New York City health department conducted a home assessment to ensure that a SARS patient could be adequately isolated at home, including checking for adequate ventilation and for bathrooms that would not be shared with noninfected individuals. Quarantine of exposed individuals was based on different parameters, depending on the number of “suspect” or “probable” SARS cases in the community. CDC officials said the agency's guidance reflected the fact that there was little or no transmission of SARS in the United States, and therefore quarantine was less warranted because there were so few cases in a community. CDC's guidance advised individuals who were exposed but not symptomatic to monitor themselves for symptoms—such as fever, cough, and difficulty breathing—and further advised home isolation and medical evaluation if symptoms began. CDC officials also advised transfer to a hospital only if the illness became severe.
In contrast, Toronto, which experienced a high level of person-to-person transmission, used a more conservative quarantine standard. Individuals who did not have symptoms but had been in contact with SARS-infected individuals were ordered to stay in their homes and avoid public gatherings for 10 days. Thousands of people were asked to undergo quarantine in their homes in the Toronto area. During the outbreak, exposed Toronto health care workers were restricted to “work quarantine”—they were only allowed to travel to and from work alone in their vehicles, and they were not allowed to have visitors or visit public places. Quarantine efforts in Toronto again required a high level of resources. Daily phone calls required 60 staff per 1,000 people who were quarantined in the Toronto area; these staff worked 7 days a week to follow up with twice-daily calls to each individual. According to health officials, rapid and frequent communications of crucial information about SARS—such as the level of the outbreak worldwide and recommended infectious disease control measures—were vital components of the efforts to contain the spread of SARS. Since March 2003, health organizations have shared extensive SARS-related information and guidelines with health care workers. For example, WHO scheduled numerous press briefings that updated the health community about the status of international SARS containment and prevention efforts. WHO, with CDC support, sponsored a videoconference broadcast globally to discuss the latest findings of the outbreak and prevention of transmission in health care settings (the broadcast was also available for computer download). CDC activated its Emergency Operations Center and devoted over 800 medical experts and support personnel worldwide to provide round-the-clock coordination and response to the SARS outbreak.
CDC also had regular conference calls and information-sharing sessions with various medical professional associations and state and local health departments and laboratories. At the state level, the California health department utilized the California Health Alert Network to send e-mails with SARS information (often based on CDC information) to all local health departments and many hospitals and physicians. The New York City health department hosted a symposium specifically for health care workers, to share the latest available SARS information. Hospital officials we spoke with also offered training seminars for their health care personnel on the signs and symptoms of SARS, recommended screening questions, and appropriate infectious disease control measures. Furthermore, hospitals kept their patients informed about SARS via posters and flyers throughout their facilities, especially in emergency room waiting areas. Health organizations maintained open and frequent communications in the community setting to facilitate the containment of SARS. For example, in a 2-week period early in the SARS outbreak, CDC conducted nine telephone press conferences with the media to keep the public informed about the latest SARS information, including numbers of “suspect” and “probable” SARS cases, laboratory and surveillance findings, travel advisories, and CDC’s efforts nationally and worldwide. CDC also distributed more than two million health alert notices to travelers entering the United States from China, Hong Kong, Singapore, Taiwan, Vietnam, or Toronto. These cards, printed in eight languages, asked individuals to monitor their health for at least 10 days and to contact their health care provider if they exhibited SARS symptoms. 
A state and a local health official also stressed the importance of informing and educating the general public in workplaces and schools on the signs and symptoms of SARS, an effort intended to foster self-identification, minimize panic, and assuage fears of being infected. Public health officials also concurred that collaboration between federal, state, and local health agencies as well as the medical community was crucial in containing the spread of SARS. Through the collaboration of all the appropriate players, coordination of prevention activities could be maintained, roles could be identified and assigned, available resources could be shared, and subsequent evaluations could be conducted. For instance, the Toronto health department maintained active communications with its local, provincial, and national governments in regard to isolation and quarantine practices, travel jurisdictions, and other SARS-related matters. The health department published directives for all Toronto area health care providers, outlining their SARS-related roles and responsibilities. The health department also maintained ongoing contact with identified liaisons at Toronto hospitals where SARS patients were hospitalized. Furthermore, the city of Toronto activated its local emergency operations center, which brought together emergency medical services, police, and community neighborhood planners to work together to contain SARS. Throughout Toronto's efforts, numerous briefings and teleconferences were organized to keep all players abreast of the latest SARS information in the community. While no one knows whether there will be a resurgence of SARS, federal, state, and local health care officials we interviewed agree that it is necessary to prepare for the possibility.
As part of these preparations, CDC, along with national associations that represent state and local health officials, and others, is involved in developing SARS-specific guidelines for using infectious disease control measures and contingency response plans. In addition, these associations have collaborated with CDC to develop a checklist of preparedness activities for state and local health officials. Such preparation efforts also improve the health care system’s capacity to respond to other infectious disease outbreaks, including those precipitated by bioterrorism. However, implementing these plans may prove difficult due to limitations in both hospital and workforce capacity. A large-scale SARS outbreak could create overcrowding, as well as shortages in medical equipment (including N-95 respirators) and in health care personnel, who are at higher risk for infection due to their more frequent exposure to a contaminated environment. At the federal level, CDC has begun contingency planning for a SARS outbreak, having convened a task force of infection control experts who are responsible for developing SARS-specific guidelines and recommendations, which address various infection control measures. The task force plans to publish its guidelines and recommendations by September 2003. CDC is collaborating with several professional associations, such as the Council of State and Territorial Epidemiologists, ASTHO, and NACCHO, to develop these response plans that vary according to the prevalence of the disease and the type of setting (i.e., health care or community) in which control measures need to be implemented. At the state and local levels, health departments are also in the process of developing contingency response plans for SARS. To facilitate this, ASTHO and NACCHO, in collaboration with CDC, published a checklist for state and local health officials to use in the event of a SARS resurgence. 
The SARS preparations have been modeled after a checklist designed for pandemic influenza. The checklist encompasses a broad spectrum of preparedness activities, such as legal issues related to isolation and quarantine, strategies for communicating information to health care providers, and suggestions for ensuring that other community partners, such as law enforcement and school officials, are prepared (see app. I for a copy of the checklist). At the local level, California and New York, which had the highest numbers of SARS cases in the United States, are also preparing for a large-scale SARS outbreak. For example, California health department officials said they were developing a plan for surge capacity by considering staff rotations or details of health department specialists to maintain a high level of response during a potential SARS outbreak. Similarly, officials with the New York City health department said they had created a formal procedure manual, which outlines the roles of reallocated staff from various teams in the department, to help contain a large-scale SARS outbreak. While hospital officials we spoke with stated that they are taking steps to ensure that they have the necessary preparations to address a large-scale SARS outbreak, hospitals may still be limited in their capacity to respond. Because of the inability to precisely determine whether someone has SARS, many people who do not have the virus may be treated as if they do. In the event of a large-scale outbreak, this imprecision may result in severe overcrowding in health care settings—especially if a SARS resurgence occurs during a peak season for another respiratory disease like influenza. This could strain the available capacity of hospitals. For example, public health officials with whom we spoke said that in the event of a large-scale SARS outbreak, entire hospital wards (along with their staff) may need to be used as separate SARS isolation facilities.
Moreover, certain hospitals within a community might need to be designated as SARS hospitals. We recently reported that most hospitals lack the capacity to respond to large-scale infectious disease outbreaks. Most emergency departments have experienced some degree of crowding and therefore, in some cases, may not be able to handle a large influx of patients during a potential outbreak of SARS or another infectious disease. Few hospitals have adequate staff, medical resources, and equipment, such as N-95 respirators, needed to care for the potentially large numbers of patients that may seek treatment. We reported that in the seven cities we visited, hospital, state, and local officials indicated that hospitals needed additional equipment and capital improvements—including medical stockpiles, personal protective equipment, quarantine and isolation facilities, and air handling and filtering equipment—to enhance preparedness. According to our survey of over 2,000 hospitals, the availability of medical equipment varied greatly among hospitals, and few hospitals reported having the equipment and supplies needed to handle a large-scale infectious disease outbreak. Half the hospitals we surveyed had, for every 100 staffed beds, fewer than 6 ventilators, 3 or fewer personal protective equipment suits, and fewer than 4 isolation beds. Workforce capacity issues may also hinder implementation of infectious disease control measures. Health officials noted that there is a lack of qualified and trained personnel, including epidemiologists, who would be needed in the event of a SARS resurgence. This shortage could grow worse if, in the event of a severe outbreak, existing health care workers became infected as a result of their more frequent exposure to a contaminated environment or became exhausted working longer hours. Workforce shortages could be further exacerbated because of the need to conduct contact tracing. 
According to WHO officials, an individual infected with SARS came into contact with, on average, 30 to 40 people in Asian countries—all of whom had to be contacted and informed of their possible exposure. In contrast, New York City health department officials said that infected individuals came into contact with 4 people on average. In addition, the monitoring of individuals placed under isolation and quarantine may strain resources if widespread isolations and quarantines are needed. For example, follow-up with isolated or quarantined individuals requires significant resources. Officials of the New York City Department of Health and Mental Hygiene said that they made home visits to SARS cases when officials became concerned that these individuals were not following infection control measures or were not remaining in their homes. Similarly, Canadian public health officials said that they, and in some cases Canadian police, made home visits to check compliance with quarantine orders. These officials also described the difficulty in providing necessary resources (food, medicines, masks, and thermometers) to individuals under isolation or quarantine. In Canada, police and the Red Cross had to help deliver food to those under isolation or quarantine. The global spread of SARS was contained through an unprecedented level of international scientific collaboration and the use of well-established infection control measures that have been used effectively in the past to control diseases. Although questions remain about SARS, especially about the ways it can be transmitted, many lessons were learned that could be helpful to the United States in the event of a resurgence. Lessons to carry forward are the importance of early identification of infected individuals and their contacts, the effectiveness of safety precautions to control transmission and ensure the protection of health care workers, and the need to use, in some cases, isolation and quarantine. 
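The workload figures cited in this statement lend themselves to rough arithmetic. The sketch below combines the contacts-per-case averages (4 in New York City versus roughly 35 in the Asian outbreaks), the average of four monitoring calls each contact received over the 10-day period, and Toronto's quarantine staffing ratio of 60 staff per 1,000 quarantined people; the function names and the 100-case scenario are illustrative, not from the source:

```python
CALLS_PER_CONTACT = 4  # average monitoring calls per contact over 10 days

def tracing_calls(cases: int, contacts_per_case: float) -> float:
    """Total monitoring calls implied by an outbreak of a given size."""
    return cases * contacts_per_case * CALLS_PER_CONTACT

def quarantine_staff(people_quarantined: int, staff_per_thousand: int = 60) -> float:
    """Staff needed for twice-daily quarantine follow-up at Toronto's ratio."""
    return people_quarantined / 1_000 * staff_per_thousand

nyc_total = tracing_calls(100, 4)    # 100 cases at New York's contact average
asia_total = tracing_calls(100, 35)  # same caseload at the Asian contact average
```

Even at New York's low contact counts, a hundred cases imply more than a thousand follow-up calls, consistent with officials' description of contact tracing as labor and time intensive; at the Asian contact averages the call volume is nearly an order of magnitude higher.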
Swift and unfettered communication among health care workers, public health officials, and government agencies, as well as the public, provided the essential backbone to support ongoing efforts to contain the disease. Although SARS is currently believed to be contained, now is the time to prepare for the possibility of a future outbreak. Some preparations are already underway and encompass, in large part, approaches similar to those for pandemic influenza; they are also part of general bioterrorism preparedness. Worldwide disease surveillance would facilitate prompt identification of a resurgence of SARS, allowing rapid implementation of infectious disease control measures that would reduce both the spread of SARS and the risk of a large outbreak. Should a large-scale outbreak occur in the near term, limitations in the capacity of our nation's health system to undertake effective and rapid implementation of infectious disease control measures could prove problematic. A major SARS outbreak would necessitate rapid escalation of infectious disease control resources, including health care workers, emergency room and hospital capacity, and the requisite control and support equipment. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.

For more information regarding this testimony, please contact Marjorie Kanof at (202) 512-7101. Bonnie Anderson, Karen Doran, John Oh, Danielle Organek, and Krister Friday also made key contributions to this statement.

SARS Outbreak: Improvements to Public Health Capacity are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003.

Smallpox Vaccination: Implementation of National Program Faces Challenges. GAO-03-578. Washington, D.C.: April 30, 2003.

Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain.
GAO-03- 654T. Washington, D.C.: April 9, 2003. Bioterrorism: Preparedness Varied across State and Local Jurisdictions. GAO-03-373. Washington, D.C.: April 7, 2003. Hospital Emergency Departments: Crowded Conditions Vary among Hospitals and Communities. GAO-03-460. Washington, D.C.: March 14, 2003. Homeland Security: New Department Could Improve Coordination but Transferring Control of Certain Public Health Programs Raises Concerns. GAO-02-954T. Washington, D.C.: July 16, 2002. Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health Preparedness Programs. GAO-02- 149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01- 915. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
SARS is a highly contagious respiratory disease that infected more than 8,000 individuals in 29 countries principally throughout Asia, Europe, and North America and led to more than 800 deaths as of July 11, 2003. Due to the speed and volume of international travel and trade, emerging infectious diseases such as SARS are difficult to contain within geographic borders, placing numerous countries and regions at risk with a single outbreak. While SARS did not infect large numbers of individuals in the United States, the possibility that it may reemerge raises concerns about the ability of public health officials and health care workers to prevent the spread of the disease in the United States.

GAO was asked to assist the Subcommittee in identifying ways in which the United States can prepare for the possibility of another SARS outbreak. Specifically, GAO was asked to determine (1) infectious disease control measures practiced within health care and community settings that helped contain the spread of SARS and (2) the initiatives and challenges in preparing for a possible SARS resurgence.

Infectious disease experts emphasized that no new infectious disease control measures were introduced to contain SARS in the United States. Instead, strict compliance with and additional vigilance to enforce the use of current measures was sufficient. These measures--case identification and contact tracing, transmission control, and exposure management--are well-established infectious disease control measures that proved effective in both health care and community settings. The combinations of measures that were used depended on either the prevalence of the disease in the community or the number of SARS patients served in a health care facility. For SARS, case identification within health care settings included screening individuals for fever, cough, and recent travel to a country with active cases of SARS.
Contact tracing, the identification and tracking of individuals who had close contact with someone who was infected or suspected of being infected, was important for the identification and tracking of individuals at risk for SARS. Transmission control measures for SARS included contact precautions, especially hand washing after contact with someone who was ill, and protection against respiratory spread, including spread by large droplets and by smaller airborne particles. The use of isolation rooms with controlled airflow and the use of respiratory masks by health care workers were key elements of this approach. Exposure management practices--isolation and quarantine--occurred in both health care and home settings. Effective communication among health care professionals and the general public reinforced the need to adhere to infectious disease control measures.

While no one knows whether there will be a resurgence of SARS, federal, state, and local health care officials agree that it is necessary to prepare for the possibility. As part of these preparations, CDC, along with national associations representing state and local health officials, and others, is involved in developing both SARS-specific guidelines for using infectious disease control measures and contingency response plans. In addition, these associations have collaborated with CDC to develop a checklist of preparedness activities for state and local health officials. Such preparation efforts also improve the health care system's capacity to respond to other infectious disease outbreaks, including those precipitated by bioterrorism. However, implementing these plans during a large-scale outbreak may prove difficult due to limitations in both hospital and workforce capacity that could result in overcrowding, as well as potential shortages in health care workers and medical equipment--particularly respirators.
Creosote is derived by distilling tar; the type of creosote most commonly used for wood-treating is manufactured from coal tar. Polycyclic aromatic hydrocarbons—chemicals formed during the incomplete burning of coal, oil, gas, or other organic substances—generally make up 85 percent of the chemical composition of creosote. EPA classifies some of the polycyclic aromatic hydrocarbons in creosote, such as benzo(a)pyrene, as probable human carcinogens. Some polycyclic aromatic hydrocarbons also may have noncarcinogenic health effects, such as decreased liver or kidney weight.

From approximately the early 1910s to the mid-1950s, the Federal Creosote site was a wood-treatment facility. Untreated railroad ties were delivered to the site and, to preserve them, coal tar creosote was applied to the railroad ties at a treatment plant located on the western portion of the property (see fig. 1 for an illustration of the site). Residual creosote from the treatment process was discharged into two canals that led to two lagoons on the northern and southern parts of the site, respectively. After treatment, the railroad ties were moved to the central portion of the property, where excess creosote from the treated wood dripped onto the ground. The treatment plant ceased operations in the mid-1950s.

During the late 1950s and early 1960s, the area where the treatment plant was formerly located was developed into a 15-acre commercial and retail property known as the Rustic Mall. Through the mid-1960s, other areas of the property, including the former canal, lagoon, and drip areas, were developed into a 35-acre residential neighborhood known as the Claremont Development, which was made up of 137 single-family homes that housed several hundred residents.
Issues with creosote contamination at the site became apparent in April 1996, when the New Jersey Department of Environmental Protection (NJDEP) responded to an incident involving the discharge of an unknown thick, tarry substance from a sump located at one of the residences in the Claremont Development. Later, in January 1997, the Borough of Manville responded to complaints that a sinkhole had developed around a sewer pipe in the Claremont Development. Excavation of the soil around the sewer pipe identified a black, tar-like material in the soil. After an initial site investigation, EPA found contamination in both the surface and subsurface soils as well as in the groundwater beneath the site.

In 1999, EPA placed the site on the NPL and divided it into three smaller units, called operable units (OU). OU1 consisted of the source contamination (free-product creosote) in the lagoon and canal areas of the Claremont Development. OU2 included other soil contamination in the Claremont Development, such as residually contaminated soil at properties over and near the lagoon and canal areas and the drip area of the former wood-treatment facility. OU2 also included contamination at a nearby day-care facility. OU3 included the Rustic Mall soil contamination as well as groundwater contamination throughout the site.

EPA completed all major site cleanup work in November 2007, and the site was declared “construction complete” in March 2008. Ultimately, EPA performed cleanup activities on 93 of the 137 properties in the residential area as well as on the commercial portion of the site. EPA’s ongoing activities at the site include monitoring groundwater contamination, conducting 5-year reviews of contamination levels to ensure that the remedy remains protective of human health and the environment, and selling properties that EPA acquired during the remedial action.
According to EPA officials, the agency could remove the site from the NPL as early as 2011; however, this decision will depend on the results of contamination monitoring at the site.

Most Superfund sites progress through the cleanup process in roughly the same way, although EPA may take different approaches on the basis of site-specific conditions. After listing a site on the NPL, EPA initiates a process to assess the extent of the contamination, decides on the actions that will be taken to address that contamination, and implements those actions. Figure 2 outlines the process EPA typically follows, from listing a site on the NPL through deletion from the NPL.

In the site study phase of the cleanup, EPA or a responsible party conducts a two-part remedial investigation/feasibility study (RI/FS) process. The first part of this process—the remedial investigation—consists of data collection efforts to characterize site conditions, determine the nature of the waste, assess risks to human health and the environment, and conduct treatability testing as necessary to evaluate the potential performance and cost of the treatment technologies that are being considered. During the second part of the RI/FS process—the feasibility study—EPA identifies and evaluates various options to address the problems identified through the remedial investigation. EPA also develops cleanup goals, which include qualitative remedial action objectives that provide a general description of what the action will accomplish (e.g., preventing contamination from reaching groundwater) as well as preliminary quantitative remediation goals that describe the level of cleanup to be achieved. According to EPA guidance, it may be necessary to screen out certain options to reduce the number of technologies that will be analyzed in detail to minimize the resources dedicated to evaluating less promising options.
EPA screens technologies on the basis of the following three criteria:

Effectiveness: the potential effectiveness of technologies in meeting the cleanup goals, the potential impacts on human health and the environment during implementation, and how proven and reliable the technology is with respect to the contaminants and conditions at the site.

Implementability: the technical and administrative feasibility of the technology, including the evaluation of treatment requirements and the relative ease or difficulty in achieving operation and maintenance requirements.

Cost: the capital and operation and maintenance costs of a technology (i.e., each technology is evaluated to determine whether its costs are high, moderate, or low relative to other options within the same category).

After screening the technologies that it has identified, EPA combines selected technologies into remedial alternatives. EPA may develop alternatives to address a contaminated medium (e.g., groundwater), a specific area of the site (e.g., a waste lagoon or contaminated hot spot), or the entire site. EPA guidance states that a range of alternatives should be developed, varying primarily in the extent to which they rely on the long-term management of contamination and untreated wastes. In addition, containment options involving little or no treatment, as well as a no-action alternative, should be developed. EPA then evaluates alternatives using the nine evaluation criteria shown in table 1 and documents its selected alternative in a record of decision (ROD).

Next, either EPA or a responsible party may initiate the remedial action that was documented in the ROD. Like the RI/FS, implementation of the remedial action is divided into two phases. The first phase is the remedial design, which involves a series of engineering reports, documents, and specifications that detail the steps to be taken during the remedial action to meet the cleanup goals established for the site.
For EPA-led remedial actions, EPA may either select a private contractor to perform the remedial design or, under a 1984 interagency agreement with the Corps, assign responsibility for designing the remedial action to the Corps, which may select and oversee a private contractor to perform the design work. The second phase is the remedial action phase, where the selected remedy, as defined by the remedial design, is implemented. Similar to the design phase, for EPA-led remedial actions, EPA may either select a private contractor to perform the remedial action or assign the remedial action to the Corps, which would be responsible for contractor selection and oversight during the remedial construction.

When physical construction of all remedial actions is complete and other criteria are met, EPA deems the site to be “construction complete.” Most sites then enter into an operation and maintenance phase, when the responsible party or the state maintains the remedy while EPA conducts periodic reviews to ensure that the remedy continues to protect human health and the environment. For example, at a site with soil contamination, the remedial action could be to build a cap over the contamination, while the operation and maintenance phase would consist of monitoring and maintaining the cap. Eventually, when EPA determines, with state concurrence, that no further remedial activities at the site are appropriate, EPA may delete the site from the NPL.

The extent of the contamination in a residential area at the Federal Creosote site was the primary factor that influenced EPA’s risk assessment conclusions, remedy selection decisions, and site work priorities. EPA determined that risk levels were unacceptable given the site’s residential use. EPA then selected remedies for the site, taking into account space constraints and other challenges associated with a residential cleanup.
Finally, EPA placed a high priority on scheduling and funding site work because the contaminated area was residential, thereby reaching key cleanup milestones relatively quickly.

From the spring of 1997 to the summer of 2001, EPA conducted multiple rounds of sampling and risk assessment at the Federal Creosote site and concluded that human health risks exceeded acceptable levels. Specifically, EPA assessed the air, groundwater, surface soil, and subsurface soil as part of an initial site investigation and an RI/FS process. See appendix III for a timeline of EPA’s risk assessment activities.

EPA’s initial investigation of site contamination, which began in 1997, included such efforts as assessing whether contamination was affecting public drinking water supplies; investigating the nature of the bedrock and the aquifer underlying the site; collecting soil samples from 30 properties selected on the basis of their proximity to the lagoons, canals, and drip area of the former wood-treatment facility; and collecting approximately 1,350 surface soil samples (up to 3 inches below the ground surface) from 133 properties in and near the residential development.

From this initial investigation, EPA concluded that site contamination posed unacceptable human health risks. For example, while EPA found that contamination did not pose short-term health risks that could require an evacuation of residents, EPA found that the contamination was extensive and uncontrolled; had impacted soil, sediment, and groundwater in the area; and likely posed long-term health risks. For soil contamination in particular, EPA determined that, in some areas, the contamination was within 2 to 3 feet of the ground surface; in other areas, EPA found that the contamination was covered by little or no fill material. According to a site document, one resident had discovered a large amount of buried tar when installing a fence on his property.
As a result of its concerns that surface soil contamination could pose a risk to residents, EPA developed a surface soil risk assessment in January 1999. EPA concluded that soil contamination levels at 27 properties in the residential area posed long-term human health risks, including carcinogenic or noncarcinogenic risks (or both), that exceeded acceptable levels.

In addition to soil contamination, EPA’s initial investigation determined that creosote had contaminated groundwater in the soil as well as in fractures in the bedrock underlying the site, which was a potential source of drinking water. Furthermore, EPA’s aquifer investigation showed that groundwater from the site had the potential to influence the Borough of Manville’s municipal water supply wells, although Region 2 officials said the nature of the fractures made it difficult for EPA to determine whether site contamination would actually affect the wells.

According to Region 2 officials, the purpose of a remedial investigation is to collect enough data to determine whether there is a need to take a remedial action. These officials said that an RI/FS for OU1 was not necessary because EPA had obtained much more information from its initial investigation on the extent of contamination at properties over the lagoon and canal source areas than is typically available to support taking an action. Also, according to EPA, the data that were collected during this initial investigation were equivalent in scope to that of a remedial investigation. Therefore, because EPA was trying to address the source contamination in the residential area on an expedited basis, the agency chose to incorporate these data into an Engineering Evaluation/Cost Analysis because it allowed EPA to evaluate remedial alternatives in a more streamlined way, as compared with an RI/FS report.
However, for OU2 and OU3, EPA initiated an RI/FS process in 1998 to more fully characterize the extent of soil and groundwater contamination throughout the site. EPA’s OU2 soil evaluation determined that elevated levels of creosote contamination close to the surface in the residential area were generally found near the lagoons and canals, while the drip area generally had residual levels of contamination close to the surface. Underlying the site, EPA found that free-product creosote rested on a clay layer approximately 6 to 10 feet below the surface, although in some areas the layer was not continuous, and the creosote had migrated as deep as the bedrock, roughly 25 to 35 feet underground. On the basis of these findings, in April 2000, EPA developed a human health risk assessment for soil contamination in the residential area using a sample of six representative properties: two properties each represented the lagoon and canal areas, the drip area, and the remaining residential area, respectively. EPA found that soil contamination exceeded acceptable risk levels at the lagoon and canal and drip areas, but not at properties representing other areas of the Claremont Development.

Furthermore, EPA’s OU3 soil analysis revealed that contamination was generally in three main areas of the mall, with several other “hot spots” of contaminated material. EPA also determined that most of the soil contamination was within the first 2 feet below the ground surface; however, in certain areas, contamination was as deep as 35 feet below the surface. EPA noted that it did not collect soil samples from under the mall buildings, although, according to a site document, EPA thought it likely that contamination remained under at least a portion of one of the buildings. EPA assessed the human health risks from exposure to soil contamination in June 2001. At the time of EPA’s assessment, OU3 was a commercial area.
However, the Borough of Manville and the mall owner had indicated that the area could be redeveloped for a mixed residential/commercial use. Therefore, EPA evaluated risks for OU3 under both residential and commercial use scenarios, and found that risks exceeded acceptable levels for residential use at some areas of the mall and for commercial use at one area.

Finally, EPA’s OU3 RI/FS investigation determined that contaminated groundwater in the soil above the bedrock had not migrated far from the original source areas of the lagoons and canals. However, free-product creosote had penetrated as deep as 120 feet into the fractured bedrock, and groundwater contamination in the bedrock had moved through the fractures toward two nearby rivers. On the basis of these results, in July 2001, EPA evaluated the potential human health risks from groundwater contamination to on-site and off-site residents (i.e., residents who lived on or near the site) and commercial workers, and found that risks for on-site residents and workers exceeded acceptable levels for carcinogenic and noncarcinogenic contaminants.

The Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry (ATSDR) also evaluated the risks from site contamination and published a series of studies that expressed concern about site contamination levels. Between May 1997 and February 1999, ATSDR published five health consultations that responded to EPA requests to answer specific questions, such as whether consuming vegetables grown in site soils posed a health threat. For example, ATSDR’s first consultation concluded that subsurface soil contamination levels posed a threat to residents if the contamination was dug up, or if similar levels of contamination were discovered in surface soils.
Then, in September 2000, ATSDR published a public health assessment that evaluated site contamination and concluded that past and present exposures to surface soil (at that time) did not represent an apparent health hazard. However, the assessment also stated that this conclusion did not rule out the need for remedial action because subsurface contamination posed a long-term hazard if soil 2 feet below the ground in certain areas was disturbed.

ATSDR and EPA officials told us that ATSDR’s conclusion that surface soil contamination did not pose a public health hazard did not mean that EPA’s action to remediate the site was unwarranted. In particular, officials from both agencies cited differences in the agencies’ risk assessment views and processes as a reason why they could reach alternative conclusions about site risks. For example, ATSDR officials indicated that ATSDR’s assessment focused on conditions in the first 6 inches of soil to evaluate what contamination exposures residents may have been subject to in the past and at the time of the assessment. However, the officials said that EPA’s risk assessment would have been more focused on the hypothetical situation where subsurface soil contamination is brought to the surface in the future. Therefore, the officials said that, in fact, ATSDR would have had very serious concerns if the site had not been remediated because of the potential for high levels of contamination in the subsurface soil to be brought to the surface through activities such as tree planting or house remodeling. ATSDR also had concerns about potential exposures to groundwater contamination. As a result, the officials stated that ATSDR’s assessment recommended that EPA continue its plans to implement a remedial action to remove source material from the site.

On the basis of its conclusions about site risks, EPA set cleanup goals for different areas of the site that, when achieved, would reduce risks to acceptable levels for residential use.
For example, EPA established site-specific qualitative objectives for its remedial actions, such as preventing human exposure to contamination, cleaning up areas of source contamination to allow for unrestricted land use and prevent future impacts to groundwater quality, and minimizing disturbance to residents and occupants of the Rustic Mall during a remedial action. EPA also developed quantitative remediation goals to identify the level at which remedial actions would need to be implemented to protect human health. According to site documents, there were no federal or state cleanup standards for soil contamination at the time of the cleanup effort. Therefore, EPA established risk-based remediation goals that would reduce excess carcinogenic risks to a level of 1 in 1 million, and that were consistent with New Jersey guidance for residential direct contact with soil. For the groundwater contamination, EPA used both federal and state chemical-specific standards to set risk-based remediation goals.

According to site documents and Region 2 officials, risk levels required a remedial action regardless of the site’s future use. The officials said that EPA considered what level of waste could be left on-site while still allowing for unrestricted residential use of properties; however, they noted that, with unrestricted residential use, there is a very low threshold for the level of waste that can be left on-site. They said that even the residually contaminated soil was sufficiently contaminated that EPA dug between 10 and 14 feet deep to allow for unrestricted use of residents’ properties. Similarly, EPA determined that source material in the Rustic Mall needed to be remediated because of the potential future residential use of the site.
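A 1-in-1-million target of the kind described above is typically turned into a soil concentration by inverting EPA's general risk equation (chronic daily intake multiplied by a cancer slope factor) and solving for the concentration that produces the target risk. The sketch below illustrates that arithmetic only in general form: the exposure assumptions and the slope factor are generic, textbook-style illustrative values, not the site-specific inputs EPA used at Federal Creosote.

```python
# Illustrative risk-based soil remediation goal for a carcinogen via
# incidental soil ingestion, following the general form of EPA's
# RAGS-style risk equation. All parameter values are illustrative
# assumptions, not Federal Creosote site-specific inputs.

def soil_remediation_goal(target_risk, slope_factor):
    """Soil concentration (mg/kg) yielding `target_risk` excess lifetime
    cancer risk for a hypothetical adult resident ingesting soil."""
    body_weight = 70.0         # kg, assumed adult body weight
    averaging_time = 70 * 365  # days, lifetime averaging for carcinogens
    ingestion_rate = 100.0     # mg soil/day, assumed adult ingestion rate
    exposure_freq = 350        # days/year spent on site
    exposure_duration = 30     # years of residence
    kg_per_mg = 1e-6           # unit conversion

    # Chronic daily intake per unit soil concentration,
    # (mg/kg-day) per (mg/kg soil)
    intake_per_conc = (ingestion_rate * exposure_freq * exposure_duration *
                       kg_per_mg) / (body_weight * averaging_time)
    # Risk = concentration * intake_per_conc * slope_factor,
    # so concentration = target_risk / (intake_per_conc * slope_factor)
    return target_risk / (intake_per_conc * slope_factor)

# Target excess cancer risk of 1 in 1 million; a slope factor of
# 7.3 (mg/kg-day)^-1 is an illustrative value for a potent
# polycyclic aromatic hydrocarbon.
goal = soil_remediation_goal(target_risk=1e-6, slope_factor=7.3)
print(f"{goal:.2f} mg/kg")  # roughly 0.23 mg/kg under these assumptions
```

Under these assumptions the goal works out to a fraction of a milligram per kilogram of soil, which illustrates why, as the officials put it, unrestricted residential use leaves a very low threshold for the level of waste that can remain on-site.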
According to a site document, EPA determined that, under a current use scenario (at the time of its risk assessment in 2001), there were likely no unacceptable human health risks from contamination under the mall because contaminants were covered by buildings and pavement. However, the contamination could be exposed if these covers were removed during site redevelopment. Therefore, EPA identified the level of site cleanup required on the basis of the most conservative future use scenario.

To select remedies to address the soil and groundwater contamination at the Federal Creosote site, EPA identified potential remedial technologies from agency guidance as well as from other publications and databases that listed potentially available technologies. After identifying potential technologies, EPA screened out less viable technologies, combined selected technologies into remedial alternatives, evaluated the alternatives, and selected a preferred remedy for each OU. See appendix III for a timeline of EPA’s remedy selection efforts.

Region 2 officials told us that, to identify technologies for site remediation, EPA identifies a range of technologies on a site-specific basis. According to agency guidance, EPA prefers three technologies for treating the type of soil contamination found at the Federal Creosote site:

bioremediation—using microbes to degrade contaminants and convert them to carbon dioxide, water, microbial cell matter, and other products;

low temperature thermal desorption (LTTD)—heating contaminated material to temperatures less than 1,000 degrees Fahrenheit to physically separate contaminants from soils; and

incineration—heating contaminated material to temperatures greater than 1,000 degrees Fahrenheit to destroy contaminants.

EPA also identified other technologies to cap, contain, excavate, extract, treat, or dispose of site soil or groundwater contamination, including a number of emerging or innovative technologies.
For the soil contamination, the range of technologies EPA considered varied among the OUs at the site. During its remedy selection process for OU1, EPA primarily evaluated the three technologies preferred by agency guidance for soil contamination at wood-treatment sites. According to Region 2 officials, EPA considered a limited range of technologies for OU1 because, originally, the agency was evaluating whether it would need to evacuate residents to protect them from site contamination. Consequently, EPA conducted a more streamlined remedy selection process for OU1 to speed decision making. Alternatively, for OU2 (and later for OU3), EPA evaluated a wider range of technologies, including several emerging technologies. In addition, Region 2 officials stated that differences in the contamination between the OUs impacted the range of technologies considered. Specifically, the officials said that the OU1 material was the more sludge-like, free-product creosote, whereas the OU2 contamination might not have been visible. The officials noted that, with less contaminated soils, more treatment options might become viable, since some options that might have difficulty treating more highly contaminated material might successfully treat less contaminated material. However, while EPA considered a wider range of technologies for OU2 and OU3, in general, EPA screened out the emerging technologies in favor of those that were identified as preferred in its guidelines.

Ultimately, EPA determined that off-site thermal treatment and disposal of the soil contamination would best achieve its cleanup goals and were consistent with residential use of the site. In implementing this remedy, EPA determined that it would need to purchase some houses—where contamination was inaccessible without demolishing the houses—and permanently relocate these residents, while residents in other houses would only need to be relocated temporarily.
For the groundwater contamination, Region 2 officials said that EPA tried to determine how to clean up the contaminated groundwater in the fractured bedrock but ultimately concluded that none of the options would be effective; moreover, many of the options would be expensive and take a long time to implement. As a result, EPA determined that attenuation of the groundwater contamination over time, long-term monitoring, and institutional controls to prevent the installation of wells at the site would be the best alternative to address contamination in the fractured bedrock. To select this remedy, EPA invoked a waiver for technical impracticability, which allowed it to select an alternative that would not comply with requirements to clean up the groundwater to levels that would meet site cleanup goals. Region 2 officials stated that one of the presumptions EPA makes in using a waiver for technical impracticability is that it has put forth its best effort to remove source contamination. Therefore, according to the officials, on the basis of agency guidance, EPA needed to clean up the source material that was contaminating the groundwater to justify a waiver for technical impracticability. Moreover, the officials said that by removing the source material, EPA may have helped prevent the contaminated groundwater area from getting larger. Also, the officials said that, in their judgment, EPA’s action would help the contamination in the bedrock attenuate more quickly, although they were unable to quantify this impact.

In selecting these remedies, EPA’s decisions were influenced by several challenges associated with a residential cleanup, including (1) space constraints that limited on-site implementation of actions, (2) a determination that some options would not achieve the site cleanup goals, and (3) concerns about some options’ community impacts.

Space constraints.
According to Region 2 officials, space constraints posed by the residential nature of the site limited EPA’s ability to remediate contamination on-site. For example, the officials said that soil contamination in the lagoons and canals was interspersed throughout the residential area. As a result of the lack of available open land and the residential nature of the site, a site document indicated that options for on-site treatment and disposal of excavated material were not considered for OU1. Also, while EPA considered on-site treatment technologies and alternatives for OU2 and OU3, Region 2 officials said EPA did not consider buying additional houses to create more open space. They said that once EPA determined that the majority of houses in the residential area could be saved, it tried to demolish as few homes as possible. The officials also noted that EPA could have placed a treatment facility in a corner of the Rustic Mall, but that the mall was still a functioning commercial area at the time EPA was selecting remedies. The mall was in the middle of the town, and, according to the officials, feedback from local citizens indicated that the community relied heavily on the mall. As a result, EPA did not formally consider taking over additional areas of the mall to create more open space as part of a remedial alternative. Region 2 officials acknowledged that, after EPA began the cleanup, the owner decided to demolish the mall. However, they stated that, when EPA made its remedy selection decisions, it did not have sufficient justification to purchase or demolish the mall. In particular, EPA Region 2 officials told us that the challenge of space constraints was a key factor in why EPA chose not to implement bioremediation or LTTD—two of EPA’s preferred remedies for treating creosote contamination—on-site.
For example, the officials noted that bioremediation of excavated material on-site would have required a lot of space to store the material while it was being treated with microbes that would help degrade the contamination. Similarly, the officials said that there was not sufficient space to stockpile material for treatment using LTTD. That is, to operate an LTTD unit efficiently, the officials said that EPA would have needed to feed material into the unit constantly. However, they said doing so was not possible at the site because, while EPA might excavate 100 tons of soil on some days, on other days, EPA was unable to excavate as much since it needed to work by hand around residents’ houses. Given EPA’s inconsistent rate of excavation, the agency would have needed to stockpile material to ensure a constant flow into an LTTD unit. However, according to Region 2 officials, there was not enough space to stockpile contaminated material awaiting treatment, and, as a result, the officials estimated that EPA could have operated an on-site LTTD unit only 25 percent of the time, which they said would not have been cost-effective. Specifically, the officials said that it would take around 60,000 square feet for all of the operations associated with an LTTD unit. They noted that a space roughly this size was available in the northeast corner of the Rustic Mall. However, because of constraints, such as fire code access requirements for a bowling alley that bordered this area, the officials estimated that the total available space was actually only about 43,000 square feet. Also, EPA would have needed additional space for other facilities related to the cleanup. In addition, while EPA determined that bioremediation and LTTD could be used to treat contamination off-site, EPA found that they would be difficult to implement because of a lack of permitted commercial facilities. 
As a result, EPA relied on incineration because incineration facilities were the most readily available for off-site treatment of material from the site.

Level of cleanup required. EPA had concerns about whether certain technologies would effectively treat contamination to required levels, given the residential nature of the site. For example, EPA determined it was unlikely that such technologies as bioremediation of contaminated material in place would achieve the agency’s soil remediation goals, because EPA was uncertain whether the bioremediation microbes could be distributed evenly in contaminated areas since some of the contamination was under residents’ homes. Region 2 officials also said it was unlikely that EPA could have achieved its cleanup goals using bioremediation because of the high levels of soil contamination at the site. They said that if contamination levels are high, the microbes introduced into the soil could be killed before they have a chance to degrade the contaminants. Moreover, because of the high contamination levels and treatment requirements at the site, the officials said they had concerns about the effectiveness of using LTTD. They stated that LTTD treats material using lower temperatures than incineration, and that it removes about 80 percent of the contamination each time material is passed through the unit. As a result, sometimes material must be treated multiple times before it meets residential standards. The officials indicated that this would have probably been the case with the Federal Creosote material because it was so highly contaminated. They said, given the nature of the contamination at the site, incineration was a more efficient method of treatment to achieve the agency’s remediation goals.
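The officials’ point about repeated passes can be made concrete with a simple geometric-decay model. This sketch assumes, purely for illustration, that each LTTD pass removes the roughly 80 percent of remaining contamination cited above; the contamination levels used are hypothetical, not figures from the site.

```python
def lttd_passes_needed(initial_level, target_level, removal_per_pass=0.80):
    """Count LTTD passes needed if each pass removes a fixed fraction
    of the remaining contamination (illustrative model only)."""
    passes = 0
    level = initial_level
    while level > target_level:
        level *= 1 - removal_per_pass  # 80% removed -> 20% remains
        passes += 1
    return passes

# Hypothetical: soil at 10,000 units must reach a residential standard of 100.
print(lttd_passes_needed(10_000, 100))  # -> 3 passes (10,000 -> 2,000 -> 400 -> 80)
```

Each additional pass multiplies handling time and cost, which is consistent with the officials’ judgment that single-pass incineration was more efficient for highly contaminated material.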
While the high treatment levels required because of the residential nature of the site impacted EPA’s choices about individual soil remediation technologies, they also influenced decisions about whether to dispose of treated and untreated material on-site, or at an off-site location. According to Region 2 officials, if EPA disposed of excavated material on-site, the agency would have had to ensure, through treatment and testing, that the soil met residential standards. Consequently, the officials concluded that if EPA disposed of excavated material on-site, it would have had to treat and test the material more extensively than it did for off-site disposal. The officials said that only about 35 percent of the material excavated from the site needed to be thermally treated before it could be disposed of off-site. The rest of the excavated material could be disposed of without treatment at a hazardous or nonhazardous waste landfill. However, they said, if EPA had disposed of material on-site, it would have had to test and possibly treat 100 percent of the material to ensure that it met residential standards. Due to the potential expense of additional treatment and sampling, EPA determined that off-site disposal would be more cost-effective. For the groundwater contamination, according to site documents, EPA found that none of its remedial alternatives, including those based on extracting or treating the contamination in place, would be able to achieve its cleanup goals effectively and reliably within a reasonable time frame. For example, EPA found that some of the groundwater contaminants could take decades to move through the groundwater, and, as a result, it would take an extremely long time to remediate these contaminants using an extraction technology.
Moreover, EPA estimated that the technology that was most likely to be able to achieve its remediation goals—extracting contaminants using steam—would cause significant disruption to the residential neighborhood and would be much more expensive than EPA’s other alternatives. On the basis of its experience at other sites, EPA determined that complete removal of the groundwater contamination in the bedrock at the site was not practicable. In addition, EPA found that several of the treatment technologies it considered would not be effective at treating the highly contaminated free-product creosote found in portions of the site.

Community impacts. The residential nature of the site and the importance of the Rustic Mall to the community also influenced EPA’s remedy selection, given the effects that different technologies and alternatives might have on the community. For example, according to EPA, some of the substances that could be used to immobilize soil contamination in the ground were potentially more toxic than the creosote contamination. Also, certain options that treated contamination in place or extracted it from the soil or groundwater would have emitted heat or gas that could have posed risks to residents and the community. Moreover, EPA determined that some options would have significantly disrupted the community because of the need to install equipment, wells, and piping throughout the residential and commercial areas. Also, because EPA was implementing a remedial action in a residential neighborhood at the site, it was concerned about the length of the cleanup and other timing impacts on the community. Region 2 officials said that EPA generally does not use certain alternatives unless the agency has the flexibility to accomplish remediation over a long time frame on the basis of the current land use (e.g., the site is abandoned).
Under these circumstances, EPA could use a remedy like bioremediation of contaminated material in place, which would cause long-term disruption if implemented in a residential neighborhood. Also, Region 2 officials said that, if EPA had used on-site LTTD to treat contaminated material, it could not have operated the unit in the most efficient way—24 hours a day—because the residents in houses within 200 feet of where the unit would have been located would have been negatively affected by its lights and noise during the night. However, the officials said, if EPA had only run the LTTD unit 8 hours a day, the cleanup effort would have taken much longer. The length of time involved was a particular concern in EPA’s evaluation of groundwater remediation alternatives. According to the Region 2 officials, the best alternative to extract contaminated groundwater from the bedrock would have taken 18 to 20 years to implement and would have covered the site with machinery. Finally, EPA factored future land use impacts into its remedy selection decisions. For example, EPA found that options that relied on containment or deed restrictions, but that left contamination under and around the residential community, were not viable alternatives. Region 2 officials said capping the contamination would not have supported use of the land as a residential area because residents would have had to sign agreements not to disturb the cap, which would have restricted their use of the properties. Also, because of these restrictions, the officials said it is likely that some owners would have refused to sign the necessary agreements, and EPA would have had to take an enforcement action. Similarly, EPA avoided certain remedies for the Rustic Mall because of the impacts that they could have on the community’s ability to redevelop the mall as well as on the operation of the mall.
A Borough of Manville official told us that the Rustic Mall was the “hub of the town” and was located directly behind buildings on the town’s Main Street. As a result, he said the community was very opposed to alternatives that would have left or treated contamination on-site. He said that, in the town’s view, the contamination under the mall needed to be cleaned up. Otherwise, it would have been difficult to get tenants into the mall in the future, and the town might have ended up with a blighted area in the center of the community. He also said the community was concerned that no one would want to come and shop at the mall if there was a treatment facility in the parking lot. EPA placed a high priority on scheduling and funding the Federal Creosote site work because the contamination was in a residential area. According to Region 2 officials, it is rare to find source contamination, such as the free-product creosote, under a residential area, and most sites with the level and extent of contamination found at the Federal Creosote site are abandoned. The officials said EPA places the highest priority on addressing the principal threats at residential sites first. As evidence of this prioritization, EPA initiated efforts to study, select a remedy for, and begin cleanup of the residential part of the site before undertaking similar efforts for the Rustic Mall. For example, Region 2 officials said that EPA decided relatively early in the cleanup process to break the site into three OUs to allow work to proceed as quickly as possible. EPA determined that it needed to get to work immediately on OU1, and that the groundwater contamination and commercial area could wait until after EPA had decided what to do with the residential area. 
The Region 2 officials said that breaking the site into different OUs was important because EPA knew that it needed to relocate some OU1 residents, and this process can be time-consuming—one official noted that residents who must permanently relocate have 1 year to do so. While this process took less time at the Federal Creosote site, EPA did not know that would be the case initially. Moreover, the Region 2 officials said that the first couple of years EPA spent studying the site caused a great deal of anxiety for residents, because they did not understand the risks of remaining in their homes and could not sell their homes if the homes would need to be demolished. The officials said the OU1 ROD informed residents that most of the homes in the neighborhood would not need to be demolished, and this helped reduce residents’ anxiety. EPA also took steps to shorten the time needed to select, design, and implement the remedial actions. For example, Region 2 officials said that, because of the residential nature of the site, the site investigation process was both unusually extensive and expedited in comparison to other sites. Region 2 officials said that EPA began sampling early because, when the site was discovered, the agency was concerned that contamination risks could be so significant that residents might need to be evacuated. As a result, they said that the agency gathered a large amount of information about site contamination before listing the site on the NPL. The officials said this data collection effort helped EPA move forward with site work quickly because, with a large amount of data to use to gauge its overall approach to the site, EPA was able to compress the removal evaluation, listing process, and RI/FS into a relatively short amount of time. In addition, EPA tried to streamline work by configuring its sampling efforts to satisfy postexcavation requirements to confirm that contaminated material no longer remained on-site. 
Specifically, site documents show that to meet New Jersey requirements, EPA took samples on 30-by-30 foot grids to confirm that contamination was no longer present along the sides and bottom of an excavated area. Rather than wait until the excavation was completed to take additional samples to confirm that contamination was not present, EPA incorporated these requirements into earlier sampling efforts. As a result, if samples were clean, EPA could immediately backfill an area, which reduced the overall length of the cleanup effort. Finally, in an effort to expedite the cleanup effort, EPA Region 2 officials said that more of the region’s resources were devoted to the site relative to other sites that the region needed to address at that time. As a result of these efforts to prioritize and expedite site cleanup work, the Federal Creosote site reached key cleanup milestones in less time than some other site cleanups. Region 2 officials said that they completed the three RODs for the site in about 3 years, which they said is a very quick time frame to complete such analyses. They noted that issuing a ROD is an intensive process that at another site, for example, took over a decade. Also, the Federal Creosote site reached EPA’s construction complete stage more quickly than other megasites—that is, sites at which actual or expected total cleanup costs, including removal and remedial action costs, are expected to amount to $50 million or more. In July 2009, we reported that, based on EPA data through fiscal year 2007, the median length of time it took for megasites to reach construction complete after NPL listing was 14.8 years. However, according to EPA data, the Federal Creosote site reached construction complete in just over 9 years. 
Total site costs exceeded construction estimates at the Federal Creosote site by roughly $233 million, primarily because (1) EPA’s early construction estimates were not designed to include all site-related expenses and (2) additional quantities of contaminated material were discovered during the cleanup effort. Other factors, such as methodological variation for estimating site costs and contractor fraud, accounted for a smaller portion of the cost difference. According to our analysis, total site-related costs, including remedial construction and other response costs at the Federal Creosote site through the spring of 2009, were approximately $338 million, a roughly $233 million difference from the estimated remedial construction costs of $105 million. Total site costs were higher than construction estimates for several reasons. As shown in figure 3, of the $233 million difference, 39.6 percent (or about $92 million) is due to other response costs that were not included in EPA’s construction estimates; 47.5 percent (or about $111 million) is from an increase in remedial construction costs—mostly directly related to the discovery of additional contaminated material; and 12.9 percent (or about $30 million) is due to other factors—primarily differences in cost estimation methodology and, to a smaller extent, contractor fraud. EPA intentionally included only costs related to the construction and maintenance of the selected remedies rather than total sitewide costs in its early cost estimates, which follows its guidance, according to the agency. EPA prepares these preliminary estimates during the remedy selection process to compare projected construction costs across different remedial action alternatives.
Specifically, the National Contingency Plan directs EPA to consider the capital costs of construction and any long-term operation and maintenance costs as part of the remedial alternative screening process. According to EPA guidance, these estimates are not intended to include all site-related expenses, and certain expenses, such as early site investigation and EPA enforcement costs, are beyond the scope of these early estimates because these costs are not linked to a specific remedial alternative and, therefore, would not affect the relative comparison of alternatives. For example, while site investigation studies were conducted for each operable unit, these studies were completed prior to remedy selection to inform the selection process and, therefore, were not linked to any particular remedy. Similarly, the removal cleanup of surface soils in the residential area occurred prior to remedy selection and, therefore, was not related to the construction costs of any particular remedial alternative. Table 2 summarizes costs for activities that were not included in EPA’s remedial construction cost estimates—other response costs—at the Federal Creosote site. During excavation, contractors discovered greater-than-expected amounts of contaminated material requiring remediation across all OUs, which contributed most to the difference between estimated and actual construction costs. Based on our analysis of EPA documents, the initial ROD estimates for the site indicated that approximately 154,100 to 164,400 tons of material would need to be excavated for treatment or disposal; however, EPA ultimately found that roughly 456,600 tons of material needed to be excavated—an increase of at least 178 percent. As shown in table 3, according to our analysis, increased amounts excavated from the OU1 and OU3 areas contributed the most to the difference between the estimated and actual excavated amounts across the site as a whole. 
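The “at least 178 percent” figure can be reproduced from the tonnage estimates above; the calculation below simply compares the roughly 456,600 tons ultimately excavated against both ends of the ROD estimate range.

```python
def pct_increase(estimated_tons, actual_tons):
    """Percent increase of the actual quantity over the estimate."""
    return (actual_tons - estimated_tons) / estimated_tons * 100

ACTUAL = 456_600  # tons ultimately excavated
low = pct_increase(164_400, ACTUAL)   # vs. the high end of the ROD estimates
high = pct_increase(154_100, ACTUAL)  # vs. the low end of the ROD estimates
print(f"increase: {low:.0f}% to {high:.0f}%")  # -> increase: 178% to 196%
```

Comparing against the high end of the estimate range yields the smallest increase, which is why the report says the increase was “at least” 178 percent.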
According to EPA officials, it is common for EPA to remove more soil than originally estimated at Superfund sites because of the uncertainty inherent in using soil samples to estimate the extent of underground contamination. For example, EPA guidance indicates that the scope of a remedial action is expected to be continuously refined as the project progresses into the design stage and as additional site characterization data and information become available. However, both Corps and EPA officials stated that the Federal Creosote site posed a particular challenge for estimating soil quantities prior to excavation because of the way in which the waste moved at the site and, in some cases, because of access restrictions during sampling. According to EPA’s Remedial Project Manager (RPM) for the site, soil contaminants generally either stay in place or migrate straight down; however, while some of the creosote waste at the site stayed in place, some of the waste migrated both horizontally and vertically. The RPM said that this migration made it difficult to predict the waste’s location through sampling. For example, during excavation, contractors found seams of contaminated material, some of which led to additional pockets of creosote waste, while others did not. Given the diameter of the sampling boreholes (which were generally 2 to 4 inches wide) and the width of the seams of creosote waste (which in some cases were only 6 inches wide), the sampling process could not detect all of the creosote seams at the site, despite what EPA officials considered to be the extensive sampling during the early site investigations that formed the basis for the initial cost estimates. Additionally, sampling during the site investigations for the residential area as well as the Rustic Mall was limited by the location of buildings and access restrictions, according to EPA’s RPM. 
For example, site documents indicate that no samples could be taken from under the mall during the OU3 soil investigation because the buildings were being used. It was not until the mall owners decided to demolish the existing structures as part of a town revitalization plan that mall tenants left and EPA was able to take samples in the areas covered by the buildings. These areas were found to contain additional areas of creosote waste, as shown in figure 4. Although the mobility of the waste in the subsurface soil and sampling limitations hindered EPA’s ability to determine the total quantity of material requiring excavation during the pre-ROD site investigation when the initial cost estimates were prepared, soil sampling during this stage was generally successful at identifying which residential properties contained contamination, according to our analysis of site documents. For example, pre-ROD soil sampling allowed EPA to correctly identify 83 of the 93 residential properties that would eventually require remediation, as shown in figure 5. According to EPA guidance, because of the inherent uncertainty in estimating the extent of site contamination from early investigation data, cost estimates prepared during the RI/FS stage are based on a conceptual rather than a detailed idea of the remedial action under consideration. The guidance states that these estimates, therefore, are expected to provide sufficient information for EPA to compare alternatives on an “order of magnitude” basis, rather than to provide an exact estimate of a particular remedy’s costs. For example, the guidance also states that preliminary cost estimates prepared to compare remedial alternatives during the detailed analysis phase of the RI/FS process are expected to range from 30 percent below to 50 percent above actual costs. However, at the Federal Creosote site, actual construction costs were more than twice what EPA estimated. 
Specifically, we found that sitewide remedial construction costs increased by $141 million over EPA’s estimated amounts. According to site documents, increases in the quantity of material requiring excavation, transportation, treatment, or disposal resulted in higher construction costs across all OUs. Our analysis of site cost data indicated that construction costs potentially associated with the additional quantity of contaminated material accounted for most of this increase ($111 million, or about 78.7 percent). In particular, soil excavation, transportation, treatment, and disposal costs constituted approximately 56.1 percent ($62 million) of the increased construction costs potentially related to additional quantities of material, and 26.7 percent of the overall $233 million difference between estimated construction and total site costs, as shown in figure 6. According to EPA’s RPM, both the need to excavate greater amounts of material and the reclassification of excavated material from nonhazardous waste to hazardous waste affected excavation, transportation, treatment, and disposal costs. For example, the discovery of additional pockets of creosote waste increased the overall amount of material requiring excavation and treatment or disposal because, in addition to removing the waste itself, any soil overlying the contamination needed to be removed and disposed of to access the creosote waste. Additionally, if a pocket of creosote waste was unexpectedly discovered in an area of soil that had already been designated for excavation and disposal in a landfill without treatment because prior sampling indicated it was less contaminated, the overall amount of soil to be excavated would not be affected, but costs would increase because treatment is more expensive than landfill disposal. 
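Putting these figures together shows how far outside the guidance’s expected accuracy band the estimate fell. The sketch assumes the $105 million sitewide construction estimate and the $141 million increase reported above, and applies the “30 percent below to 50 percent above actual costs” range from EPA’s guidance.

```python
estimated = 105  # $ millions, EPA's estimated remedial construction costs
increase = 141   # $ millions, sitewide construction cost growth
actual = estimated + increase  # $246 million

ratio = actual / estimated
print(f"actual is {ratio:.2f}x the estimate")  # -> actual is 2.34x the estimate

# Guidance expects the estimate to land between 30% below and 50% above actual.
within_band = 0.70 * actual <= estimated <= 1.50 * actual
print("estimate within expected band:", within_band)  # -> False
```

At 2.34 times the estimate, actual construction costs fell well below the bottom of the expected band (which, for a $246 million actual cost, starts at about $172 million).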
In addition, EPA and Corps officials said that the need to remediate greater quantities of material contributed to increases in other sitewide construction costs, such as general construction requirements and site restoration costs. Our analysis showed that such costs accounted for another 20.9 percent of the difference between estimated construction costs and total site costs—although the exact extent to which additional amounts of material contributed to the difference in costs is not clear. EPA’s RPM stated that the effect of increased quantities varied, depending on the OU. However, EPA and Corps officials said that in general, more extensive excavation would increase design engineering, inspection, and other costs as well as costs for general construction requirements and for site restoration, as shown in table 4. For example, the decision to remediate additional contaminated material under the Rustic Mall buildings led to increased design engineering costs because the original excavation plans were created under the assumption that the mall would remain standing, and further rounds of design sampling were needed to identify the extent and location of contamination once the buildings were demolished. Additionally, our analysis of site documents indicated that the increased time required to excavate additional material could have led to greater project costs for general construction requirements, such as temporary facility rental, site security, and health and safety costs. Similarly, site restoration costs, such as costs for backfill soil, could have increased because more backfill would be required to restore the site after excavation. According to the RPM, EPA and the Corps instituted certain controls at the site to minimize costs. In particular, the RPM stated that the Corps took steps to ensure that material was not unnecessarily excavated and sent for treatment and disposal. 
For example, if contractors found an unexpected pocket of creosote waste during excavation, they were required to notify the Corps official on-site, who would decide whether additional excavation was required depending upon visual inspection and additional testing, as needed. The contractor was not allowed to excavate beyond the original excavation limits without Corps approval. According to the RPM, the Corps’ approach of reevaluating the original excavation depth on the basis of additional sampling results and a visual inspection of the soil led to cost savings because in some areas less material needed to be excavated than originally planned. Furthermore, EPA and Corps officials stated that this process minimized unnecessary treatment and disposal costs that might be incurred if “clean” soil was sent for treatment or hazardous waste disposal. Additionally, EPA’s decision in November 2002 to allow treated soil to be disposed of in a nonhazardous waste facility if it met the facility’s criteria for contamination levels helped reduce unit costs for treatment and disposal because disposing of soil at a hazardous waste facility is more expensive. For example, in a bid for a contract to treat and dispose of soil following EPA’s decision, the selected subcontractor submitted a unit price for treatment and disposal at a nonhazardous waste facility that was $80 (or 16 percent) less than its unit price for treatment and disposal at a hazardous waste facility—which for that particular contract saved $800,000. Furthermore, on the basis of information gathered from site documents and from statements made by EPA and Corps officials, EPA and the Corps took other steps intended to minimize costs. For example, a Corps official said that reducing the duration of the project could help minimize certain site costs.
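The unit-price figures in the bid example above imply both the hazardous-waste unit price and the rough quantity covered by that contract. This back-of-the-envelope sketch treats the unit as a ton of soil, which is an assumption; the report does not state the unit explicitly.

```python
savings_per_unit = 80        # $ cheaper per unit at the nonhazardous facility
pct_cheaper = 0.16           # the $80 discount was 16% of the hazardous price
contract_savings = 800_000   # $ saved on that particular contract

hazardous_price = savings_per_unit / pct_cheaper        # implied: $500 per unit
implied_quantity = contract_savings / savings_per_unit  # implied: 10,000 units
print(f"${hazardous_price:.0f}/unit, about {implied_quantity:,.0f} units")
```

Both derived numbers are implications of the stated figures, not values reported in site documents.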
Specifically, according to our analysis of site documents, to reduce the amount of time spent waiting for sampling results prior to backfilling an excavated area, EPA and the Corps incorporated state postexcavation sampling requirements into their design sampling plans for earlier investigations. Accordingly, unless additional excavation was required to meet the cleanup goals, these samples could be used to confirm that the boundaries of the excavation areas had been tested for contamination. Additionally, our analysis of site documents showed that the Corps tested various odor control measures before beginning excavation at certain areas of the site, which allowed it to use less expensive odor control alternatives than originally planned and saved approximately $1.1 million in implementation costs. These measures also helped to speed up the construction work. Finally, according to the RPM, the Corps was able to minimize costs by managing the work to avoid costly contractor demobilization and remobilization expenses. For example, the Corps dissuaded the contractors from removing idle equipment and worked with the RPM to resolve administrative or funding issues or questions about the work as they arose to prevent an expensive work stoppage. Other factors, including different cost-estimating methodologies and contractor fraud, explain a smaller portion of the difference between estimated construction and total site costs at the Federal Creosote site. In developing its estimates, EPA followed agency guidance, which states that as a simplifying assumption, most early cost estimates assume that all construction costs will be incurred in a single year. According to EPA, since the estimated implementation periods for EPA’s remedial actions were relatively short periods of time, EPA did not discount future construction costs in its estimates, and, therefore, these estimates were higher than they would have been otherwise. 
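The discounting at issue is standard present-value arithmetic: costs incurred in later years count for less today, so a multi-year estimate that is not discounted overstates its present value. The sketch below is illustrative only; the 7 percent rate and the even five-year spending schedule are assumptions, not figures from the report.

```python
def present_value(costs_by_year, rate):
    """Discount a stream of costs; costs_by_year[0] is spent now,
    costs_by_year[1] one year from now, and so on."""
    return sum(cost / (1 + rate) ** year
               for year, cost in enumerate(costs_by_year))

# Hypothetical: a $105M construction estimate spread evenly over 5 years.
schedule = [21.0] * 5               # $ millions per year (assumed)
pv = present_value(schedule, 0.07)  # 7% discount rate (assumed)
print(f"PV ${pv:.1f}M vs ${sum(schedule):.1f}M undiscounted")  # -> PV $92.1M vs $105.0M undiscounted
```

Because the discounted figure is lower than the nominal one, restating EPA’s undiscounted estimates in present-value terms widens the gap between estimated and actual costs, which is how discounting accounts for part of the difference described here.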
In accordance with our best practices regarding the use of discounting, we adjusted the initial cost estimates to reflect that costs were projected to accrue over several years and that, therefore, future costs should be discounted. However, by discounting future construction costs prior to adjusting for inflation, our discounted values were lower than EPA’s original estimates in site documents. According to our analysis, discounting estimated costs accounted for approximately 12 percent of the $233 million difference between estimated construction and total site costs (see fig. 7). Contractor fraud also contributed to the difference between estimated construction and total site costs, but to a small degree. However, while some parties have pled guilty to fraud, the full extent of the effect of fraud on site costs will not be known until all investigations are complete. Court documents alleged that employees of the prime contractor at the site, as well as some subcontractors, were engaged in various kickback and fraud schemes, which resulted in inflated prices for certain subcontractor services. For example, a subcontractor for soil treatment and disposal agreed to pay approximately $1.7 million in restitution to EPA for fraud in inflating its bid prices. In addition, court documents alleged that fraudulent price inflation also affected other site costs, including certain subcontracts for items such as wastewater treatment, backfill, landscaping services, and utilities. To date, our analysis of available court documents indicated that at least approximately $2.1 million in inflated payments may be directly attributable to fraud at the Federal Creosote site. On the basis of currently available information, this figure represents less than 1 percent of the difference between estimated construction and total site costs. However, since the fraud investigations are ongoing and additional charges may be filed, the full extent of contractor fraud is not currently known. 
See appendix I for more information about site-related fraud investigations. EPA managed the overall cleanup and communicated with residents through a dedicated on-site staff presence, among other actions. The Corps implemented the cleanup work by hiring and overseeing contractors; the Corps was less involved in selecting and overseeing subcontractors at the site. According to a 1984 interagency agreement between EPA and the Corps for the cleanup of Superfund sites, EPA maintains statutory responsibility for implementing the Superfund program. In addition to selecting the remedy at a site, EPA provides overall management of the cleanup, ensures that adequate funding is available, and manages relationships with other interested parties, such as residents. If EPA decides that Corps assistance is needed to conduct cleanup work, EPA establishes site-specific interagency agreements. These agreements outline the specific tasks and responsibilities of the Corps at the site and provide a proposed budget for the activities listed. Once the site-specific agreements are established, EPA’s primary responsibilities are to make sure that the work continues without interruption and that adequate funding is available, according to EPA officials. EPA officials also noted that the agency does not have the authority to direct Corps contractors at the site; rather, all instruction and direction to contractors goes through the Corps. To fulfill its project management and community outreach responsibilities, EPA dedicated a full-time RPM to the Federal Creosote site, according to Region 2 officials. Although RPMs generally have two or more sites for which they are responsible at any given time, Region 2 officials stated that the size and complexity of the site required a higher level of EPA involvement.
For example, the officials said that the relatively large size of the site and stringent cleanup goals meant that a large area was excavated, and the complexity of the cleanup process led to a greater number of questions from the Corps and its contractors that required EPA’s attention. According to the officials, the RPM was on-site at least two to three times per week; however, during some segments of the work, he was on-site almost every day. They noted that the design phase in particular required close coordination with the Corps because design activities for different areas of the site occurred simultaneously and were often concurrent with construction. Consequently, the RPM said he was on-site working with the Corps and its design contractor to design new phases of the work; revise existing designs; and answer any questions regarding ongoing construction activity, such as whether to excavate additional pockets of waste found during the construction phase. According to the RPM, although the Corps was required to ask EPA for approval only to expand excavation to properties that were not included in the RODs, in practice, Corps officials kept him informed whenever additional excavation was required, and, in many cases, he made the decision regarding whether to broaden or deepen the excavated area. To monitor project progress and funding, the RPM had weekly on-site meetings with the Corps and received weekly and monthly reports on progress and site expenditures, according to EPA officials. At the weekly meetings, the RPM would answer Corps questions regarding the work and be informed of any contracting or subcontracting issues that might delay or stop work at the site. Moreover, as part of EPA’s oversight of site progress, the RPM said he reviewed Corps documents regarding any changes in the scope of the work. 
Because EPA provided funding to the Corps on an incremental basis, the RPM also closely monitored the rate of Corps expenditures to ensure sufficient funding to continue the work, according to EPA officials. The RPM explained that he also reviewed Corps cost information for unusual charges and, with the exception of a few instances of labor charge discrepancies, most of the time the Corps reports did not contain anything surprising. In the few instances where the RPM found a discrepancy, he contacted Corps officials, and they were able to explain the reason for the discrepancy—for example, a problem with the Corps’ billing software. The RPM stated that, under the interagency agreement with the Corps, he did not review contractor invoices or expenditures because the Corps had both the responsibility and the expertise necessary to determine whether the contractor charges were appropriate, given the assigned work. Additionally, EPA officials stated that the residential nature of the site necessitated a substantial investment in community relations to manage residents’ concerns about the contaminated material under their homes and the Rustic Mall. As part of these efforts, EPA used such tools as flyers, newsletters, resident meetings, and media interviews to communicate with concerned citizens. According to the RPM, managing community relations required the second largest commitment of his time, after designing the work. He said that he spent a great deal of time working with residents to help them understand the situation during the early site investigation stage, when it was not clear who was going to need to move out of their homes and residents were concerned about their health and property. The RPM said that he also worked personally with residents during the design and implementation of the remedy to minimize the impact to the community and to inform it of any additional actions needed, such as excavating contamination across a property line or closing roads. 
According to site documents and a local official, EPA’s community relations efforts were successful at reducing residents’ anxieties. For example, in a summary of lessons learned from the cleanup effort, site documents indicate that EPA’s policy of promptly responding to community inquiries and the regular presence of EPA personnel at the site helped to establish and preserve a high level of public acceptance and trust with the community. Also, a Borough of Manville official noted that the continuity provided by having one RPM dedicated to the site for the duration of the project was particularly helpful in maintaining good communication because it allowed EPA officials to know almost all of the residents on a first-name basis and encouraged their participation in the cleanup process. For example, the RPM stated that he worked closely with residents to address their concerns and minimize impacts to the community during the excavation of contaminated material and the restoration of affected areas of the neighborhood. Similarly, according to the Borough of Manville official, EPA and the contractors effectively coordinated with town officials to ensure that the cleanup effort went smoothly. For example, to minimize disruption, EPA consulted with town officials about which roads would be best to use, considering the routes and weight limitations of trucks leaving the site. In the official’s view, EPA’s outreach efforts ensured that residents and the community as a whole had sufficient information to feel comfortable about the cleanup. Consequently, despite the size and scope of the cleanup effort, the official could recall very few complaints from residents. At the Federal Creosote site, the Corps selected and oversaw private contractors’ design and implementation of the remedial action; however, the Corps was less involved in the subcontracting process. 
Under the 1984 interagency agreement with EPA, the Corps selects and oversees private contractors for all design, construction, and other related tasks at Superfund sites, in accordance with Corps procedures and procurement regulations. According to Corps officials, the Corps selected a contractor to perform the design for the three OUs at the Federal Creosote site from a list of qualified vendors and then negotiated a price for the contracts. For construction, the Corps selected a prime contractor from a pool of eligible contractors under a cost-reimbursement, indefinite-delivery/indefinite-quantity (IDIQ) contract. According to EPA and Corps guidance, this system provides more flexible and responsive contracting capabilities for Superfund sites, which may require a quick response and often lack a sufficiently defined scope of work for price negotiation. The Corps’ prime contractor performed some of the work and subcontracted some tasks to other companies. For example, the prime contractor excavated contaminated material but awarded subcontracts for transportation, treatment, and disposal of the excavated material. Other subcontracted services included providing backfill soil and landscaping for site restoration, and treating wastewater. To subcontract, the prime contractor solicited bids from potential vendors and, for smaller subcontracts, provided the Corps with advance notification of the award. To award larger subcontracts, the prime contractor requested Corps approval. To carry out its oversight responsibilities, the Corps monitored changes in the scope of the work, contractor progress and costs, and work quality. For example, Corps officials stated the following: The Corps had to approve any changes in project scope, such as excavating greater quantities of material, or any increases in other construction services or materials beyond the amounts originally negotiated between the Corps and the prime contractor.
According to EPA officials, this chain of command helped prevent any unauthorized expansion of work at the site. To monitor project progress and contractor costs during construction, the Corps reviewed prime contractor cost summary reports for each phase of the work. These reports contained detailed information on contractor costs and work progress, and, according to Corps officials, they were updated, reviewed, and corrected if necessary on a daily, weekly, and monthly basis. For example, Corps officials explained that they reviewed the daily reports primarily for accuracy and unallowable costs. For weekly and monthly reports, the Corps also examined whether the contractor was incurring costs more quickly than expected, which could indicate that a cost was incorrectly attributed or that a change in project scope was necessary (i.e., because particular aspects of the work were more costly than anticipated, and, therefore, a scope revision was needed to complete planned activities). However, Corps officials commented that the contractor data were generally accurate, and that errors were infrequent. The officials also said that, during the most active periods of the work, they discussed the cost reports and project progress, including any potential changes in unit costs, during the weekly meetings with the contractor. The Corps also monitored work quality at the site. According to site documents, the Corps was required to implement a quality assurance plan as part of its oversight responsibilities and had a quality assurance representative at the site during construction. For example, in a July 2002 notice to the prime contractor, the Corps identified several workmanship deficiencies that the contractor had to address to retain its contract for that portion of the work. According to Corps guidance and officials, the Corps had a limited role in the subcontracting process at the Federal Creosote site. 
For example, the prime contractor was responsible for selecting and overseeing subcontractors. In particular, Corps guidance states that since subcontracts are agreements solely between the prime contractor and the subcontractor, the Corps does not have the authority to enforce the subcontract provisions. Rather, the guidance indicates that the Corps oversees the prime contractor’s management systems for awarding and administering subcontracts through periodic reviews of the contractor’s subcontracting processes and ongoing reviews of subcontract awards. According to Corps officials, the Corps’ main responsibility in the subcontracting process at the Federal Creosote site was to review subcontract decisions and approve subcontracts above a certain dollar threshold. As Corps officials explained, subcontracts between $25,000 and $100,000 did not need to be approved by the Corps; rather, the prime contractor sent the Corps an “advance notification” package, which documented that the contractor had competitively solicited the work and why the contractor selected a particular subcontractor over others. However, for subcontracts greater than $100,000, the prime contractor had to submit a “request for consent” package to the Corps, which contained similar documentation as an advance notification but required Corps approval prior to awarding a subcontract. According to federal acquisition regulations and policies, when evaluating request for consent packages, Corps contracting officers should consider whether there was sufficient price competition, adequate cost or price comparison, and a sound basis for selecting a particular subcontractor over others, among other factors. Early in the project, the Corps identified several issues with the prime contractor’s performance at the site, including the award of subcontracts. 
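The dollar-threshold tiers described above can be summarized in a short sketch. The thresholds and package names come from the report; the function itself is illustrative, not a representation of any actual Corps system, and the behavior for subcontracts below $25,000 is an assumption since the report does not describe a notification step for them.

```python
# Illustrative sketch of the subcontract review tiers described in the report.
# The default consent threshold is the one in effect early in the project;
# per the report, it was later raised from $100,000 to $500,000.

def review_requirement(subcontract_value, consent_threshold=100_000):
    """Return the Corps review step that a subcontract award triggered."""
    if subcontract_value > consent_threshold:
        # Prime contractor had to submit a "request for consent" package
        # and obtain Corps approval before awarding the subcontract.
        return "request for consent"
    if subcontract_value >= 25_000:
        # Prime contractor sent an "advance notification" package documenting
        # competitive solicitation; no Corps approval was needed.
        return "advance notification"
    # Assumption: the report describes no notification step below $25,000.
    return "no notification described"

print(review_requirement(50_000))                              # advance notification
print(review_requirement(250_000))                             # request for consent
print(review_requirement(250_000, consent_threshold=500_000))  # advance notification
```

The parameterized threshold reflects the 2003 change: a $250,000 subcontract that required consent early in the project would have needed only advance notification after the threshold was raised.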
According to a letter the Corps sent to the prime contractor, the Corps noted that after repeated unsuccessful attempts to address these issues, the Corps would initiate proceedings to terminate the contract for site work unless the contractor took corrective action. However, Corps officials said the contractor demonstrated sufficient improvement in its documentation practices. Then, in 2003, the Corps raised the request for consent threshold from $100,000 to $500,000 because of the high volume of these packages that the Corps was receiving. A Corps official noted that while the Corps reviews and consents to the subcontracting decisions of its contractors as appropriate, it avoids becoming too involved in the subcontracting process because of bid protest rules regarding agency involvement in that process. According to the official, under these rules, a subcontract bidder cannot protest a subcontract award unless it can show that the overseeing agency was overly involved in the subcontracting process. Concerning contractors at the Federal Creosote site, the Department of Justice and EPA’s Office of Inspector General have ongoing investigations, some of which have resulted in allegations of fraud committed by employees of the prime contractor and several subcontracting firms. For example, court documents alleged bid-rigging, kickbacks, and other fraudulent activity related to the award of several subcontracts for a variety of services and materials. According to Corps officials, the Corps did not suspect issues of fraud in the subcontracting process until 2004 when, in one instance, a subcontract bidder objected to the award of a soil transportation, treatment, and disposal subcontract to another firm whose bid was substantially higher. 
Upon further review of the documents, Corps officials found that the prime contractor had not conducted a proper evaluation of the bid proposals, and the Corps withdrew its consent to the subcontract—ultimately requesting that the prime contractor solicit bids under a different process. In the revised bidding process, the firm that had won the earlier subcontract reduced its price from $482.50 to $401.00 per ton of contaminated material—only 70 cents below the competing bid submitted by the firm that had protested the original subcontract. On this basis, the prime contractor again requested consent to subcontract with the firm to which it had awarded the earlier subcontract. According to a Corps official, the Corps was suspicious of illegal activity given how close the two bids were, and Corps officials discussed whether to take formal action against the prime contractor. However, Corps officials decided they did not have sufficient evidence of wrongdoing to support a serious action but did cooperate with others’ investigations of fraud at the site. For more information on site-related fraud, see appendix I. We provided a draft of this report to the Secretary of the Army and the Administrator of the Environmental Protection Agency for review and comment. The Secretary, on behalf of the Corps of Engineers, had no comments on the draft report. EPA generally agreed with our findings regarding the agency’s actions and costs to clean up the Federal Creosote site, and provided a number of technical comments, which we incorporated as appropriate. EPA’s written comments are presented in appendix IV. In its comments, EPA noted that the draft report accurately described the cleanup of the site and correctly compared the site’s estimated and final remedial construction costs. 
However, EPA stated that comparing estimated remedial construction costs to total site costs is not an “apples to apples” comparison because some costs, such as amounts spent on removal actions or EPA personnel salaries (referred to as “other response costs” in this report), are purposely excluded from EPA’s early estimates of remedial construction costs. We agree that to identify the extent to which site costs increased over agency estimates, one should only compare estimated and actual remedial construction costs, as we do in table 4 of this report. However, our objective was, more broadly, to identify what factors contributed to the difference between the estimated remedial construction costs ($105 million) and the actual total site costs ($338 million). We found that the difference between these two amounts was $141 million in remedial construction cost increases—which were largely due to increases in the amount of contaminated material requiring remediation—and $92 million in other response costs that were not included in EPA’s original estimates. We believe it was necessary to provide information on these other response costs to more fully answer our objective and to provide a more informative accounting of the total costs that EPA incurred in cleaning up the Federal Creosote site. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Army, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Court records show that several cases have been brought concerning the Federal Creosote site cleanup. First, the Department of Justice (Justice) and the state of New Jersey have filed claims to recover cleanup costs. Second, Justice has brought criminal charges in a series of cases against one employee of the prime contractor, three subcontractor companies, and eight associated individuals involved in the cleanup, alleging fraud, among other things. Third, the prime contractor has brought a civil suit against a former employee alleged to have committed fraud and other offenses during his employment as well as against associated subcontractors. The information in this appendix provides a brief summary of known actions related to the Federal Creosote site cleanup. United States v. Tronox, LLC: The Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) provides that parties incurring costs to respond to a release or threatened release of a hazardous substance may recover such costs from legally responsible parties, including persons who owned or operated a site, among others. In this regard, the Environmental Protection Agency (EPA) identified Tronox, LLC, the successor to the companies that owned and operated the Federal Creosote site, and, for 2 years, EPA and Tronox participated in alternative dispute resolution concerning EPA’s cost recovery claims. In August 2008, Justice, on behalf of EPA, filed a civil action in the United States District Court for the District of New Jersey against Tronox, seeking recovery of costs that the government incurred for the Federal Creosote site cleanup. The complaint asserted that the government had incurred at least $280 million in response costs and would incur additional costs. 
In October 2008, the New Jersey Department of Environmental Protection and the Administrator of the New Jersey Spill Compensation Fund filed suit in the same court against Tronox, seeking recovery of costs incurred for the site, as well as damages for injury to natural resources—under both CERCLA and the New Jersey Spill Compensation Act—and public nuisance and trespass claims. In December 2008, the federal and state cases were consolidated. Tronox has stated its intent to vigorously defend against these claims. In early 2009, Tronox filed for voluntary Chapter 11 bankruptcy in federal bankruptcy court and initiated an adversary proceeding in that court, seeking a declaratory judgment on the status of the EPA and New Jersey claims with respect to the bankruptcy. Subsequently, both courts entered a stipulation filed by both the government plaintiffs and Tronox to stay the cost recovery case as well as the adversary proceeding to allow the parties to resolve the claims. As of the date of this report, the stays remain in effect. United States v. Stoerr: Norman Stoerr, a former employee of the prime contractor at the Federal Creosote site, pled guilty to three counts related to his activities as a contracts administrator at the site. Court documents alleged that over a 1-year period, the employee conspired with others to rig bids for one subcontractor at the site, resulting in EPA being charged inflated prices. In addition, the documents alleged that over several years, the employee solicited and accepted kickbacks from certain subcontractors at the Federal Creosote site and another site, and allowed the kickbacks to be fraudulently included in subcontract prices that were charged to EPA. To date, Stoerr has not been sentenced. United States v. 
McDonald et al: In August 2009, the United States indicted Gordon McDonald—a former employee of the prime contractor at the Federal Creosote site—as well as representatives of two subcontractors who worked at the site, for various counts, including kickbacks and fraud. The indictment charged that the prime contractor’s employee, a project manager, solicited and accepted kickbacks from certain subcontractors in exchange for the award of site work, and that these kickbacks resulted in EPA being charged an inflated price for the subcontractors’ work. The indictment also charged that the project manager disclosed the bid prices of other vendors during the subcontracting process, which resulted in the government paying a higher price for services than it would have otherwise paid. One of the indicted employees (James Haas)—representing a subcontractor who provided backfill material to the site—has pled guilty to providing kickbacks and submitting a bid that was fraudulently inflated by at least $0.50 per ton of material. Haas agreed to pay more than $53,000 in restitution to EPA as part of his guilty plea, and has been sentenced to serve 33 months in jail and to pay a $30,000 criminal fine. McDonald’s case is proceeding, and charges against a third defendant are still pending. United States v. Bennett Environmental, Inc.: Bennett Environmental, Inc. (BEI), a subcontractor providing soil treatment and disposal services to the Federal Creosote site cleanup, entered a plea agreement admitting to one count of fraud conspiracy. Court documents alleged that over 2 years, the company paid kickbacks to an employee or employees of the prime contractor, in return for receiving favorable treatment in the award of subcontracts, and inflated its prices charged to EPA. BEI was sentenced to 5 years’ probation and ordered to pay $1.662 million in restitution to EPA, plus a $1 million fine. United States v.
Tejpar: Zul Tejpar, a former employee of BEI, entered a plea of guilty to one count of fraud conspiracy. Court documents alleged that Tejpar, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and fraudulently inflated the company’s bid price after an employee of the prime contractor revealed the other bid prices. To date, Tejpar is awaiting sentencing. United States v. Griffiths: Robert P. Griffiths entered a plea of guilty to three counts related to fraudulent activity at the Federal Creosote site when he was an officer of BEI. Griffiths, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site, fraudulently inflated the company’s invoices that the prime contractor charged to EPA, and fraudulently received the bid prices of other bidders prior to award of a subcontract. To date, Griffiths is awaiting sentencing. United States v. JMJ Environmental, Inc.: JMJ Environmental, Inc., a subcontractor providing wastewater treatment supplies and services, and John Drimak, Jr., its president, entered guilty pleas related to fraudulent activity at the Federal Creosote site and another site. At the Federal Creosote site, JMJ Environmental and Drimak, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site, fraudulently inflated the company’s prices that the prime contractor charged to EPA, and arranged for intentionally high, noncompetitive bids from other vendors. To date, JMJ Environmental and Drimak are awaiting sentencing. United States v. Tranchina: Christopher Tranchina, an employee of subcontractor Ray Angelini, Inc., which provided electrical services and supplies, entered a plea of guilty to fraud conspiracy for activities at the Federal Creosote site.
Tranchina, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and fraudulently inflated the company’s prices that the prime contractor charged to EPA. Tranchina was sentenced to imprisonment of 20 months and ordered to pay $154,597 in restitution to EPA. United States v. Landgraber: Frederick Landgraber, president of subcontractor Elite Landscaping, Inc., entered a plea of guilty to fraud conspiracy for activities at the Federal Creosote site. Landgraber, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and submitted fraudulent bids from fictitious vendors to give the appearance of a competitive process, resulting in EPA paying higher prices than if procurement regulations were followed. Landgraber was sentenced to imprisonment of 5 months and ordered to pay $35,000 in restitution to EPA and a $5,000 fine. United States v. Boski: National Industrial Supply, LLC, a pipe supply company, and coowner Victor Boski entered guilty pleas for fraud conspiracy at the Federal Creosote site and another site. At the Federal Creosote site, National Industrial Supply and Boski, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and fraudulently inflated the company’s prices that the prime contractor charged to EPA. The terms of the plea agreement require National Industrial Supply and Boski to have available $60,000 to satisfy any restitution or fine imposed by the court, among other items. To date, they are awaiting sentencing.
This appendix provides information on the scope of work and methodology used to examine (1) how EPA assessed the risks and selected remedies for the Federal Creosote site, and what priority EPA assigned to site cleanup; (2) what factors contributed to the difference between the estimated and actual remediation costs of the site; and (3) how responsibilities for implementing and overseeing the site work were divided between EPA and the U.S. Army Corps of Engineers (the Corps). It also discusses our methodology for summarizing criminal and civil litigation related to the Federal Creosote site. To examine how EPA assessed the risks and selected remedies for the Federal Creosote site, as well as what priority it assigned to the cleanup, we reviewed EPA’s Superfund site investigation and cleanup processes, including applicable statutes, regulations, and agency guidance. We also reviewed documentation from the site’s administrative record, which detailed the agency’s activities and decisions at the site. As part of this review, we analyzed public comments that were documented in site records of decision to identify key issues with the cleanup effort. To obtain additional information on these and other site cleanup issues, we interviewed EPA Region 2 officials involved with the site, including officials from the Emergency and Remedial Response Division, the Public Affairs Division, and the Office of Regional Counsel. Furthermore, we interviewed and reviewed documentation obtained from officials with the Agency for Toxic Substances and Disease Registry regarding its determination of site risks. We also consulted with New Jersey and Borough of Manville officials to obtain their views on the cleanup effort. Finally, we interviewed representatives of the potentially responsible party for the site to obtain the party’s views on EPA’s risk assessment, remedy selection, and site prioritization.
To determine what factors contributed to the differences between the estimated and actual costs of site cleanup, we obtained and analyzed data on estimated and actual site costs from several sources. For estimated site costs, we combined EPA’s estimates for selected remedies from site records of decision and remedial alternative evaluations. In developing these estimates, EPA applied a simplifying assumption that all construction costs would be incurred in a single year, and, therefore, did not discount future construction costs, even though work was projected to occur several years into the future as a result of design activities and resident relocations as well as EPA’s estimated construction time frames. However, our discount rate policy guidance recommends that we apply a discount factor to future costs. Consequently, to convert EPA’s estimated costs into fiscal year 2009 dollars, we (1) conducted present value analysis to discount future site costs to the dollar year of the original estimate (base year) for each remedy, using EPA’s recommended discount rate of 7 percent, and (2) converted the present value of each estimate into fiscal year 2009 dollars. To calculate the present value of the estimated costs, we identified the projected construction time frames for each remedy from site documents. Because the documents did not provide information on how construction costs would be distributed over the projected time frame, we calculated the midpoint of a range of values, assuming that all costs for particular activities comprising EPA’s selected remedies would either be incurred at the beginning of the projected time frame (the maximum value of these costs) or at the end of the projected time frame (the minimum value). To adjust the present values from the base year to fiscal year 2009 constant dollars, we divided the present values by the inflation index for the base year and weighted the calculation to convert the base year from calendar years to fiscal years.
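The two-step adjustment of EPA's estimates can be sketched as follows. The 7 percent discount rate and the midpoint-of-bounds method come from the report; the example cost, time frame, and inflation index are hypothetical values chosen purely for illustration, and the sketch omits the calendar-to-fiscal-year weighting described above.

```python
# Sketch of the estimate adjustment: discount projected costs to the base
# (estimate) year, then convert the base-year value to fiscal year 2009 dollars.
DISCOUNT_RATE = 0.07  # EPA's recommended discount rate, per the report

def present_value(cost, years_from_base):
    """Discount a future cost back to the base year of the estimate."""
    return cost / (1 + DISCOUNT_RATE) ** years_from_base

def midpoint_present_value(cost, start_year, end_year):
    """Midpoint of the two bounding assumptions: all costs incurred at the
    start of the projected time frame (maximum present value) or at the
    end (minimum present value)."""
    maximum = present_value(cost, start_year)
    minimum = present_value(cost, end_year)
    return (maximum + minimum) / 2

# Hypothetical remedy: $10 million projected to be spent between 2 and 5
# years after the base year of the estimate.
pv = midpoint_present_value(10_000_000, 2, 5)

# Convert the base-year present value to constant fiscal year 2009 dollars
# by dividing by a hypothetical inflation index for the base year
# (base-year price level expressed relative to fiscal year 2009).
base_year_inflation_index = 0.85
fy2009_dollars = pv / base_year_inflation_index
print(round(pv), round(fy2009_dollars))
```

Because discounting shrinks the estimate while the inflation conversion grows it, the two steps work in opposite directions; as the report notes, applying the discount first is what made the adjusted estimates lower than EPA's original figures.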
To identify actual sitewide costs, we compiled data from multiple sources, including EPA’s Superfund Cost Recovery Package Imaging and On-Line System (SCORPIOS) for data on site costs through April 30, 2009; the Corps of Engineers Financial Management System (CEFMS) for data on Corps and contractor costs through various dates in April and early May 2009; and contractor-generated project cost summary reports for data on contractor costs for each phase of the cleanup through February 15, 2009. We relied on multiple data sources for our analysis because none of the sources provided a sufficient level of specificity for us to comprehensively determine when and for what purpose costs were incurred. In particular, the SCORPIOS data provided specific dates of when EPA incurred costs, but for some costs, especially those related to site construction work, the data did not generally provide detailed information on why the costs were incurred. Therefore, to obtain more detailed information on the reason for incurring certain costs, we used the data from CEFMS and the contractor’s project cost summary reports. However, the CEFMS and contractor project cost summary report data did not generally provide specific information on when costs were incurred. Consequently, to determine actual site costs in fiscal year 2009 dollars, we used two approaches. For costs taken from the SCORPIOS data or when detailed information on the date of a particular cost was available, we applied the inflation index for the particular fiscal year in which EPA incurred the cost. For costs taken from the other data sources, we used the midpoint of the range of inflation-adjusted values for the construction start and end dates for individual work phases, as recorded in site documents. We worked with EPA Region 2 officials to categorize site costs, including those that were part of EPA’s original construction estimates as well as those that were not part of EPA’s estimates.
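The second approach above, used for costs whose incurrence dates were unknown, can be sketched in a few lines. Only the midpoint method comes from the report; the inflation indexes, fiscal years, and cost amount are hypothetical values for illustration.

```python
# Hypothetical inflation indexes: the price level of each fiscal year
# expressed relative to fiscal year 2009.
INFLATION_INDEX = {2002: 0.83, 2005: 0.90}

def fy2009_midpoint(cost, start_fy, end_fy):
    """Midpoint of the inflation-adjusted values for a work phase's
    construction start and end fiscal years, per the report's method
    for costs with unknown incurrence dates."""
    adjusted_at_start = cost / INFLATION_INDEX[start_fy]
    adjusted_at_end = cost / INFLATION_INDEX[end_fy]
    return (adjusted_at_start + adjusted_at_end) / 2

# A $1 million undated cost from a phase that ran from fiscal year 2002
# through fiscal year 2005, converted to fiscal year 2009 dollars.
print(round(fy2009_midpoint(1_000_000, 2002, 2005)))
```

The midpoint splits the difference between assuming the cost was incurred entirely at the start of the phase and entirely at the end, which is why the report characterizes the resulting figures as approximate.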
After identifying the costs that were not included in EPA’s original estimates, we took the difference between estimated and actual construction costs, according to categories that we discussed with EPA, to identify where actual costs changed the most from EPA’s estimates. Then, to identify the factors that contributed the most to the difference in these cost categories, we analyzed the types of costs in each category and interviewed EPA Region 2 and Corps officials responsible for the cleanup. In addition, we analyzed data from site documents on the estimated and actual amounts of contaminated material at various stages of the cleanup process to obtain further information on the extent to which increased amounts of contaminated material affected site costs. To examine the impact of alternative methodologies on the disparity between estimated and actual costs, we reviewed EPA cost-estimating guidance and calculated the effect of discounting future estimated costs within our analysis. To determine how fraud affected site costs, we reviewed civil and criminal litigation documents describing the monetary values exchanged in various schemes. To ensure the reliability of the actual cost data we used for this report, we reviewed the data obtained from the SCORPIOS and CEFMS databases as well as the contractor-generated cost summary reports that the Corps provided. For each of these data sources, we reviewed agency documents and interviewed EPA and Corps officials to obtain information on their data reliability controls. We also electronically reviewed the data and compared them across all sources as well as with other information on site costs as available. For example, we compared contractor cost data provided by the Corps with similar data from the contractor-generated cost summary reports. Similarly, we compared Corps cost data from CEFMS with analogous data from EPA’s SCORPIOS database.
Generally, we found that discrepancies among comparable data from different sources were most likely attributable to the potential delay between when a cost is incurred by a contractor and when it is invoiced and processed, first by the Corps and later by EPA. On the basis of our evaluation of these sources, we concluded that the data we collected and analyzed were sufficiently reliable for our purposes. However, because some costs incurred prior to early May 2009 may not have been processed through the Corps and EPA’s cost-tracking systems at the time of data collection, site cost data in this report are considered to be approximate. Moreover, because our methodology relied on calculating the midpoint of a range of costs for both the present value calculations and adjusting data for inflation, we consider the data we present in this report on estimated and actual costs and the difference between these costs also to be approximate. To describe how EPA and the Corps divided responsibilities for site work, we reviewed agency guidance regarding EPA’s responsibilities at Superfund sites. To obtain information on EPA’s oversight actions, we interviewed EPA and Corps officials responsible for site cleanup and contracting work. We also reviewed site meeting minutes, monthly progress reports, correspondence to the Corps, and relevant EPA Office of Inspector General reports. To further describe the Corps’ responsibilities at the Federal Creosote site, we reviewed Corps guidance for the cleanup of hazardous waste projects, Corps contract management best practices, and the relevant procurement regulations. To obtain information on actions that the Corps took to implement its site responsibilities, we reviewed Corps correspondence to the contractor and contractor requests for approval of soil treatment and disposal subcontracts. We also interviewed Corps officials responsible for site cleanup and contracting work as well as EPA Region 2 officials. However, we did not assess the adequacy of the Corps’ efforts or its compliance with Corps guidance and federal procurement regulations.
To examine issues regarding civil and criminal litigation related to the Federal Creosote site, we collected case data from the Public Access to Court Electronic Records system. We then qualitatively analyzed documents obtained from this system to identify the issues involved and the status of each case as well as the outcomes, if any, of the cases. However, because criminal investigations are ongoing and confidential, we could not determine whether any additional criminal charges were under consideration, but relied solely on the publicly available information for charges that had been filed as of November 2009. In addition to the individual named above, Vincent P. Price, Assistant Director; Carmen Donohue; Maura Hardy; Christopher Murray; Ira Nichols-Barrer; and Lisa Van Arsdale made key contributions to this report. Elizabeth Beardsley, Nancy Crothers, Alexandra Dew, Richard Johnson, and Anne Stevens also made important contributions.
|
In the 1990s, creosote was discovered under a residential neighborhood in Manville, New Jersey. Creosote, a mixture of chemicals, is used to preserve wood products, such as railroad ties. Some of the chemicals in creosote may cause cancer, according to the Environmental Protection Agency (EPA). EPA found that creosote from a former wood-treatment facility (known as the Federal Creosote site) had contaminated soil and groundwater at the site. Under the Superfund program--the federal government's principal program to clean up hazardous waste--EPA assessed site risks, selected remedies, and worked with the U.S. Army Corps of Engineers to clean up the site. As of May 2009, construction of EPA's remedies for the site had been completed; however, total site costs were almost $340 million and remedial construction costs had exceeded original estimates. In this context, GAO was asked to examine (1) how EPA assessed risks and selected remedies for the site, and what priority EPA gave to site cleanup; (2) what factors contributed to the difference between the estimated and actual costs; and (3) how EPA and the Corps divided responsibilities for site work. GAO analyzed EPA and Corps documents and data on the cleanup effort and its costs, and interviewed officials from these agencies. This report contains no recommendations. EPA generally agreed with GAO's findings on the agency's cleanup costs and actions, while the U.S. Army Corps of Engineers had no comments. The extent of the contamination in a residential area at the Federal Creosote site was the primary factor influencing EPA's risk assessment conclusions, remedy selection decisions, and how EPA prioritized site work, according to site documents and agency officials. EPA assessed site contamination through multiple rounds of evaluation and concluded that soil and groundwater contamination levels were high enough that EPA needed to take action. 
Then, EPA evaluated remedies to achieve cleanup goals that it had established for the site and that were consistent with its residential use. EPA selected off-site treatment and disposal of the contaminated soil and long-term monitoring of the groundwater contamination as the remedies for the site. In selecting these remedies, EPA considered a range of alternatives but ultimately determined that certain options would be potentially infeasible or ineffective due to the residential setting. For example, EPA chose not to implement certain alternatives on-site because the agency found that there was insufficient space and they would be too disruptive to nearby residents. In addition, EPA chose not to implement certain alternatives because the agency found that they would be unlikely to achieve the cleanup goals for the site, especially considering the high level of treatment required to allow for unrestricted residential use of the area and the high levels of contamination found at the site. EPA made cleanup of the site a high priority because the contamination was in a residential area. For example, EPA took steps to shorten the cleanup period and prioritized the use of regional Superfund resources on the Federal Creosote site over other sites in the region. The $338 million in total site costs exceeded EPA's estimated remedial construction costs of $105 million by about $233 million, primarily because EPA's estimates focused only on construction costs, and EPA discovered additional contamination during the cleanup effort. EPA prepared preliminary cost estimates during the remedy selection process; however, EPA requires that these estimates include only the costs associated with implementing different remedies it was considering, not all site costs. 
Also, as a result of the movement of contamination in the ground and sampling limitations during EPA's site investigation, a greater-than-expected amount of contamination was discovered during the cleanup effort, which increased costs. Other factors, such as contractor fraud, affected total site costs to a lesser extent. EPA was responsible for managing the overall site cleanup and community relations, while the Corps was responsible for implementing the cleanup. EPA dedicated a full-time staff member to manage the site cleanup who, according to EPA, maintained a significant on-site presence to ensure that the project remained on schedule and was adequately funded and to work with residents. EPA also oversaw the work of the Corps and its costs. To conduct the actual cleanup work, the Corps hired contractors to design or implement cleanup activities who, in turn, hired subcontractors for some tasks. The Corps oversaw the activities and costs of its primary contractors but, according to Corps officials, was less involved in selecting and overseeing subcontractors.
|
GSA was established by the Federal Property and Administrative Services Act of 1949 to serve as a central procurement and property management agency for the federal government. GSA’s diverse activities and programs have governmentwide implications that, according to GSA, affect over $52 billion, which is more than one-fourth of the federal government’s total procurement dollars. Through various revolving funds, GSA buys goods and services from private vendors and resells them to agencies. GSA has four major components—the Public Buildings Service, FSS, FTS, and its Office of Governmentwide Policy (OGP)—to carry out its various programs and activities. FSS provides contract arrangements for commercial products and services worth over $17 billion per year through its four business lines: supply and procurement, vehicle acquisition and leasing, travel and transportation, and personal property management. As previously indicated, we did not focus on the travel and transportation and personal property management business lines. FTS provides reimbursable services for local and long- distance telecommunications. It also assists agencies with acquiring, managing, and using IT systems. FTS accomplishes this through two business lines: network services, for its telecommunications activities; and IT solutions, for its IT systems-related activities. In carrying out their duties, FSS and FTS are to follow the Federal Acquisition Regulation (FAR), which is the uniform set of policies and procedures executive agencies are required to follow in procuring goods and services. The FAR implements various statutory requirements intended to advance national social and economic goals, such as giving preferential treatment in awarding contracts to certain groups, such as the blind and severely handicapped, small and disadvantaged businesses, and the federal prison work program. 
Governmentwide procurement policy is overseen by the Office of Federal Procurement Policy (OFPP) within the Office of Management and Budget (OMB). OFPP is responsible for prescribing policy and coordinating the development of governmentwide procurement standards. OGP within GSA has a supporting role by creating networks of agency procurement representatives and by providing guidance and policy related to specific areas, such as vehicles, aircraft, and electronic commerce. Each year, the U.S. government spends approximately $200 billion in acquiring goods and services. FSS finances its supply and vehicle activities through the General Supply Fund, which is a revolving fund that is sustained by revenues received from customer agencies for goods and services. Through its supply and procurement business line, FSS offers federal agencies a choice of more than 4 million commercial products and a range of technology-oriented, financial, environmental, management, and administrative services. FSS’ three methods of supply are (1) the stock program, (2) special order sales, and (3) federal supply schedules. In the stock program, FSS stores approximately 19,000 common-use items for resale to agencies in 4 major distribution centers, 3 smaller centers, and 19 government stores located throughout the country and overseas. This program had sales of $817 million in fiscal year 1998. The special order program, which had sales of $477 million in fiscal year 1998, provides products for special needs or when stocking is not desirable, such as office furniture and appliances. The federal supply schedules program is similar to a commercial catalog business and provides agencies with access to over 6,800 contracts to obtain various goods and services. In addition to covering a vast range of commercial items, the schedules cover IT products and services.
FSS prenegotiates terms, conditions, and ceilings on price with vendors; agencies deal directly with the vendors to negotiate final prices and establish deliveries. Supply schedule sales were about $8 billion in fiscal year 1998. The vehicle acquisition and leasing business line in FSS provides agencies with one-stop shopping for purchasing vehicles or leasing them from the FSS-managed interagency fleet. FSS is the federal government’s mandatory source for the purchase of new, nontactical vehicles. Although leasing vehicles through the interagency fleet is not mandatory, agencies that choose this option get scheduled replacement, full-service management, and a fleet services card for fuel and repairs, for a fixed monthly fee, as well as a cost per mile charged by vehicle type. In fiscal year 1998, the vehicle acquisition and leasing business line purchased about 56,800 vehicles worth about $1 billion; one-half of the vehicles were for the interagency fleet, with the rest reflecting vehicle purchases for agencies. The interagency fleet comprised over 160,000 automobiles, passenger vans, trucks, buses, ambulances, and special-purpose equipment in fiscal year 1998. FSS relies on the private sector for vehicle delivery, fuel, maintenance and repair, and vehicle auctions. FTS finances its telecommunications and IT activities through the Information Technology Fund, which is a revolving fund sustained by revenues received from customer agencies for goods and services. In fiscal year 1998, FTS had revenues of $3.4 billion. The network services business line in FTS provides customer agencies with telecommunication services, including global voice, data, and video services, supporting both the local and long-distance needs of the federal government. According to FTS officials, the network services business line had revenues of about $1 billion in fiscal year 1998. 
Until the end of 1998, FTS long-distance services—under its FTS2000 arrangements with AT&T and Sprint—were a mandatory source for federal agencies. Under the FTS2001 arrangements with MCI and Sprint that were recently awarded, agencies are able to select their own service provider. According to FTS officials, these are the largest non-Defense government contracts, valued at between $5 and $8 billion over 8 years. FTS local telecommunications services also used to be mandatory; however, FTS now offers a range of nonmandatory services in this area, where revenue totaled $266 million in 1998. The IT solutions business line in FTS provides agencies with a range of assistance related to acquiring, managing, and using IT. In fiscal year 1998, the IT solutions business line had revenues of about $2.4 billion. FTS prides itself in this area on being an objective and trusted third party that can provide independent assistance to agencies. For a fee, FTS acts as a consulting agent for agencies in the acquisition of large IT systems and related services, systems integration, software definition and design, and office systems development. It also supports federal systems through risk analysis and information security support. Its Federal Acquisition Services for Technology (FAST) program is intended to provide quick procurement assistance for IT products and services. The FAST program had revenues of $973 million in fiscal year 1998. According to FTS officials, FTS services differ from the IT products and services offered by FSS under the supply schedules in that FTS is involved as a third party. Agencies deal directly with vendors under the FSS schedules. An FTS official added that FTS views its role as that of a value-added reseller of telecommunications and IT. In addition, this official said that FTS recognizes the significance of the evolving integration of telecommunications and IT in meeting customer agency needs, now and in the future. 
The federal government has undergone reform and downsizing in response to efforts like the National Performance Review and congressional initiatives to promote efficiency and economy in contracting, such as the Federal Acquisition Streamlining Act of 1994. More recently, the Federal Activities Inventory Reform Act of 1998 (FAIR) required executive agencies to identify functions they perform that are not inherently governmental and could be performed by the private sector. This environment of reform has affected FSS and FTS. GSA, as a whole, has gone from 39,000 employees in 1971 to fewer than 14,000 employees in 1999. It also realigned itself organizationally to mirror the private sector and incorporated commercial practices to improve the level of service provided to agencies and to enhance its relationships with the private sector. These changes were evident in FSS and FTS with the establishment of the aforementioned business lines. The changes also manifested themselves in the shift from being a mandatory to nonmandatory source for agencies in such areas as supply procurement, vehicle leasing through the interagency fleet, telecommunications services, and IT acquisition. Also, the Government Performance and Results Act of 1993 increased FSS’ and FTS’ focus on performance measurement as a vital component of operating in a more business-oriented environment. Despite the changes that occurred, FSS and FTS believe that several barriers still exist that impede their ability to compete in this new environment and operate in a businesslike manner. Barriers cited by FSS were the inability to recruit and train top-level staff because of various federal personnel requirements, prohibitions on its ability to enter into cooperative purchasing arrangements, the extensive bid protest processes available to federal contractors, and its inability to deal effectively with poor-performing vendors. 
FTS also cited personnel-related barriers but had more concerns about financial-related barriers, such as the inability to consider accounts receivable the same as cash in managing the Information Technology Fund. An FTS official said this limits FTS’ ability to commit to new business opportunities because payments to FTS from some agencies can take up to 90 days. Another barrier FTS cited was that federal rules related to disposal of property can make agencies less efficient because they cannot exchange the equipment they own for like services. FTS also cited being prohibited from using the standard of “adequate” competition as an alternative to “full and open” competition, which is required by law, in certain multiple award contracting situations as another barrier to operating effectively. In the past, Congress has amended laws to allow agencies to overcome various barriers when they were shown to impede effective performance. For example, government corporations, including the Tennessee Valley Authority (TVA), and dozens of others, serve public functions of a business nature and were given some flexibility related to the applicability of federal statutes to overcome barriers caused by the laws and implementing regulations. Congress authorized TVA, a government corporation, as well as federal agencies such as the Department of Veterans Affairs (VA) and the Federal Aviation Administration (FAA), to adopt alternative personnel systems. Congress also gave FAA authority to implement a streamlined procurement system so FAA could more easily deploy new technologies. Agencies have also outsourced a wide range of functions that typically were done in-house. For example, the Office of Personnel Management (OPM) now contracts for investigative services, which were formerly done in-house until OPM privatized its investigative unit. 
The United States is not alone in its efforts to make its agencies more businesslike and to address barriers to efficient and streamlined government. Governments around the globe have reassessed the role of government and have made organizational and operational changes to improve the level of service to citizens. Changes that have taken place have included greater reliance on the private sector through such methods as outsourcing, empowering civil servants to make business decisions, adopting a more results-oriented focus, and developing and monitoring data on performance. To meet our objective, we obtained information on FSS’ and FTS’ procurement activities and federal procurement in general. We primarily relied on interviews with, and documents obtained from, officials from FSS, FTS, GSA’s OGP, and OFPP within OMB. We conducted research, primarily using the Internet, to select countries for the review. We identified countries where the government had made a commitment to procurement reform and where preliminary work showed reforms were made in activities similar to those carried out by FSS and FTS. On the basis of this work, we selected Canada, the UK, Australia, and New Zealand. We confirmed our selections primarily through discussions with our counterpart organizations—the Auditor General offices—in each of the countries. To collect information on the organizations, programs, and policies in these countries, we visited the countries, interviewed key officials about their operations, and obtained a wide range of material. After collecting the information, we compared these countries’ operations to the way FSS and FTS assist agencies with the procurement of supplies, vehicles, telecommunications, and IT. We performed our work between July 1998 and May 1999 in accordance with generally accepted government auditing standards. 
We requested comments on a draft of this report from the Director of OMB, Administrator of GSA, and responsible officials in the countries we visited. On June 4, 1999, OFPP’s Associate Administrator for Procurement Law and Legislation told us that OMB had no comments. In response to our request for comments from the Administrator of GSA, FSS and FTS officials provided comments. Our FSS liaison orally provided the comments of various FSS components on June 11, 1999, and FTS’ Chief of Staff provided oral comments on June 16, 1999. Various officials from the four countries provided comments via e-mail, facsimile, or letter during June 1999. These comments are discussed near the end of this letter. Appendix II contains a more detailed description of our objective, scope, and methodology and identifies the organizations discussed in this report and their Internet addresses. Canada, with a population of about 31 million, is a federation of 10 provinces and 3 territories and has a central government that operates as a parliamentary democracy. Canada’s central procurement department— Public Works and Government Services Canada (PWGSC)—had two organizations—the Supply Operations Service (SOS) and the Government Telecommunications and Informatics Services branch (GTIS)—with activities similar to those carried out by FSS and FTS. Procurement in the Canadian government centered on purchase authority thresholds, which were delegated by the Treasury Board. That is, agencies had authority to automatically buy goods and services up to certain amounts. For goods purchases above a $5,000 Canadian threshold ($3,425 U.S., assuming that $1 U.S. = $1.46 Canadian), agencies generally were required to use PWGSC as a central purchasing agency. SOS arranged governmentwide contracts for supplies, including IT products and services, similar to FSS’ function. However, it no longer operated a stock program with distribution centers or government stores. 
As with FSS in the United States, agencies were required to use SOS for vehicle acquisitions, although unlike FSS, it did not manage a central vehicle fleet. SOS differed from FSS in the IT area in that it had a unit that provided IT systems acquisition services, similar to FTS’ role. GTIS also provided some services in the IT area, where it assisted mostly smaller agencies in defining their needs, but it was primarily involved in the procurement of telecommunications services, like FTS. In the IT area, the government of Canada was starting to use benefits-driven procurement (BDP), under which the government asks the private sector to deliver certain agreed-upon results, instead of a more traditional approach under which the private sector is asked to follow a government blueprint with detailed specifications. Greater use of BDP was part of a broad vision for reform being developed by the Treasury Board Secretariat (TBS). Appendix I identifies the key organizations in Canada and summarizes their activities. The central government of Canada meets its procurement needs through a combination of central purchasing and delegated authority to agencies. PWGSC is the central purchasing agent for the government of Canada. PWGSC’s activities covered both civilian and defense purchasing for approximately 100 departments and agencies of the central government and other jurisdictions. Employing about 11,800 people, PWGSC, among other things, managed approximately 63,000 contracts and was responsible for purchasing some 17,000 categories of goods, services, and construction, with a total annual value in excess of $8 billion Canadian (about $5.5 billion U.S.). This amount is more than one-half of the total amount of all federal government contracting in Canada. In addition, PWGSC had several other governmentwide responsibilities, including those related to real property, personnel, consulting and audit, public information, and translation services.
It also banked and dispersed government funds and maintained the government’s accounts. The Treasury Board sets contracting authority levels for departments in the Canadian government. As a central procurement agency, PWGSC had much higher authority than other departments. In addition, the Public Works and Government Services Act of 1996 gave PWGSC exclusive responsibility to purchase goods on behalf of the Canadian government and also for delegating purchase authority for goods to other departments. PWGSC’s standard delegation of authority for goods to other departments was $5,000 Canadian ($3,425 U.S.) and according to an official with TBS, some departments had authority of $25,000 Canadian ($17,123 U.S.). Each department could procure services within its own authority, although the departments could ask PWGSC to do the procurement for them. Departments had authority to purchase services up to $2 million Canadian (about $1.4 million U.S.) if they used the government’s electronic tendering service. Purchases above contracting authorities set by the Treasury Board required approval by the Treasury Board. Government policy in Canada requires that contracting be conducted in a manner that will, among other things, ensure competition and the preeminence of operational requirements. According to TBS officials, government policy also seeks to advance certain national objectives, including regional development and award of some contracts to aboriginal populations. Canada did not, however, appear to use its procurement system to advance social objectives to the extent this is done in the United States. SOS—a major component of PWGSC—arranged governmentwide agreements with suppliers through its standing offers and supply arrangements. Standing offers provide goods and services to departments at prearranged prices, under set terms and conditions, without specifying delivery schedules or quantities required up front. 
Standing offers are employed when one or more purchasers repetitively order the same good or service. Common products offered under the standing offers are food, fuel, plumbing supplies, tires, stationery, and office equipment. Services include repair and overhaul of equipment and temporary help services. Supply arrangements are nonbinding agreements between SOS and suppliers to provide a range of goods or services on an as-required basis. With supply arrangements, departments solicit bids from a pool of prescreened vendors based on their specific scope of work; in this way supply arrangements differ from standing offers, under which departments accept a portion of a requirement already defined and priced. Many supply arrangements include ceilings on prices, which allow departments to negotiate the price downward on the basis of the actual requirement or scope of work. Although we did not do a comprehensive comparison of the supply activities of SOS and FSS, they were similar in that they aim to simplify the buying process for the government purchaser by prenegotiating terms, conditions, and sometimes prices with suppliers. We also noted that like FSS, SOS had on-line catalogues that purchasers could use to find products and services. There was, however, a difference in SOS and FSS supply operations in that SOS no longer operates a stock program with distribution centers or government stores as FSS does. According to SOS officials, the government had operated distribution centers at one time, but they were considered inefficient and the government stopped operating them several years ago. SOS’ current supply activities rely primarily on direct delivery from the vendor. According to these officials, the government also used to operate government stores that at one time were found in most of the major federal buildings. 
However, for ideological reasons, the government decided that it should not be in competition with the private sector and privatized the stores several years ago. Another difference we noted was that SOS can be a mandatory source of supply if the purchase amount exceeds the buyer’s threshold. FSS, in contrast, is a nonmandatory source of supply, regardless of the purchase amount. In the vehicle area, SOS’ activities were similar to FSS’ activities in that departments were required to use SOS for nontactical vehicle acquisition. According to TBS officials, the Canadian government purchases over 2,250 vehicles each year. The most common method of supply for vehicles is standing offers, whereby manufacturers provide prices for different models with different option combinations. For urgent requirements, departments could access SOS’ inventory of vehicles that were already purchased by SOS through standing offers and were being held by manufacturers until needed. The least common method, which required special approval by the Treasury Board because it was the most expensive, was direct purchase from dealer stock. Departments could also lease vehicles from the private sector through SOS. According to SOS officials, departments generally managed their own fleets and had arrangements with dealerships and private garages for vehicle servicing. Under this framework, SOS differed from FSS in that it did not manage a central fleet like FSS’ interagency fleet. As with most other goods and services in which PWGSC was involved, SOS acted as the contract authority on behalf of the buyer and was not involved in delivery of the goods and services. Like FSS’ schedules, SOS contracts also covered IT goods and services. However, SOS had a role in the IT area that went beyond what FSS offers through its IT schedules and more closely resembled what FTS offers in assisting agencies with the acquisition of IT systems and related services. 
SOS had a branch called the Science, Informatics, and Professional Services Sector (SIPSS) that managed the IT goods and services contracts mentioned earlier. These contracts included consulting services for IT systems design, research and development, and training as well as goods, such as IT systems infrastructure, electronic data processing systems, hardware, and software. In addition to managing these contracts, which was similar to what FSS does through its IT schedules, SIPSS provided direct assistance to departments with major IT systems acquisitions, similar to what FTS does. A difference between SIPSS and FSS/FTS activities in this area, however, was that SIPSS was often a mandatory source for departments because of the purchasing thresholds. FSS and FTS, on the other hand, are always nonmandatory sources in the IT area. Like FTS, GTIS managed governmentwide telecommunications contracts and sought to aggregate government requirements to save costs. In fiscal year 1997/1998, GTIS spent about $275 million Canadian (about $188 million U.S.) on telecommunications services. According to GTIS officials, the telecommunications industry in Canada has undergone a great deal of change since the mid-1990s. In 1995, the Canadian Radio-Television and Telecommunications Commission deregulated large segments of the telecommunications industry. According to these officials, prior to this time, the Stentor alliance of regional carriers was the dominant service provider; Bell Canada was the largest provider of services in the provinces of Ontario and Quebec. The deregulation resulted in a more competitive environment and required GTIS to develop a competitive telecommunications supply arrangement. In general, most departments procured local and long distance service through GTIS, although its services were not mandatory. GTIS officials said it was more convenient and less expensive for departments to use GTIS. 
At the time of our review, however, GTIS officials were evaluating the ongoing impact of deregulation on their optional status and governmentwide bargaining position as departments began procuring services directly from the private sector. GTIS also provided services related to IT systems acquisition that were similar to services offered by FTS. Small agencies or agencies that did not have IT expertise could get assistance from GTIS in defining their needs and procurement objectives. GTIS then interfaced with SIPSS on behalf of these agencies and could bundle their requirements to get a better price. GTIS also was involved in several governmentwide IT initiatives, which included fostering electronic data interchange and electronic transactions within government. We noted during our review that in the IT procurement area, the government of Canada was starting to use an approach called benefits-driven procurement (BDP). BDP stresses the results and benefits that the government and suppliers mutually seek to gain from each acquisition. Although we did not do a detailed comparison, BDP has concepts similar to performance-based service contracting (PBSC) in the United States in that contractors are given more freedom to determine how to meet the government’s performance objectives. Arising from recognition by the Canadian government that one of the major reasons IT projects fail is that the procurement process is too inflexible, BDP is an alternative to traditional approaches. According to Canadian procurement officials, under traditional procurement approaches, departments could spend months, even years, developing a detailed requirement that, when completed, was often outdated and did not reflect changes that had taken place in the organization. Instead, the BDP approach is to ask the private sector to deliver certain agreed-upon results rather than follow a blueprint with detailed specifications.
The private sector is also invited to submit ideas on what sort of project should be undertaken before a formal request for proposals is issued. Another key feature of BDP is up-front planning to remove or mitigate potential problems in the procurement process. Both the front-end planning and the management of the entire acquisition are based on four elements: (1) a solid business case, (2) risk analysis, (3) clear delineation of accountabilities, and (4) a compensation structure tied closely to the contractor’s performance. Appendix III provides a more detailed description of these elements and the BDP approach. At the time of our review, TBS was in the midst of developing a broad agenda for procurement reform. TBS officials said that the main problem with Canada’s procurement system was that it was still too focused on rules and process and not streamlined and results-oriented. The officials said that although key departments had made a good start at modifying and streamlining their processes and focusing on their core missions, more could be done. TBS was planning to take a leadership role in reforming procurement processes and was aiming to create a system in which central policy focused on principles instead of on developing prescriptive rules. In addition, TBS officials said they would support applying BDP principles to other types of acquisitions and would attempt to coordinate the other reforms under way in PWGSC and other departments. TBS and PWGSC officials said that the government was in the early stages of applying performance measurement principles in assessing the reforms that have taken place and therefore did not have much data available to gauge results. A top procurement official with the Department of National Defence (DND) whom we interviewed agreed that there had been some positive gains as a result of recent procurement reforms. This official cited examples where DND had privatized support functions so it could focus more on its core mission. 
These included maintenance of vehicles and some weapons systems and pilot training. DND had also used the BDP process for a new information system for its supply network. Also, this official said that DND and PWGSC had a good working relationship. He added that skilled procurement staffs were crucial as departments focused more on their core missions and increasingly relied on the private sector for activities that were traditionally done in-house. The UK, with a population of about 59 million, encompasses England, Wales, Scotland, Northern Ireland, and several dependent areas, and has a central parliamentary government that operates under a constitutional monarchy. The central government had two organizations—The Buying Agency (TBA) and the Central Computer and Telecommunications Agency (CCTA)—with activities similar to those of FSS and FTS. However, these agencies had more flexibility in how they managed their financial and personnel affairs than if they were traditional government departments. Known as executive or “next steps” agencies because they represented the next steps in reforming government management, they were structured like private businesses and were one part of a broad government reform effort being led by Her Majesty’s Treasury (HM Treasury), which sets procurement policy. Like FSS, TBA had contract arrangements for supplies that departments and agencies could use on a nonmandatory basis. However, TBA did not operate a stock program with distribution centers or government stores as FSS does. Also, the UK did not have a central vehicle fleet like FSS; however, TBA could assist agencies in obtaining fleet management services or with vehicle acquisition. Agencies also could use vehicle acquisition arrangements held by the Ministry of Defence (MOD) or go directly to the private sector. Like FTS, CCTA arranged telecommunications contracts for governmentwide use and provided services in IT systems acquisition on a nonmandatory basis. 
Across the government, HM Treasury was leading a public-private partnering initiative known as the private finance initiative (PFI) and had other efforts under way to encourage knowledge sharing and performance measurement in procurement. Appendix I identifies the key organizations in the United Kingdom and summarizes their activities. In recent years, the central government of the United Kingdom has undergone a continued program of government reform, where, according to UK government officials, the emphasis has been on cost consciousness, value for money, downsizing, and greater concentration on the core businesses of government. With these reforms, the government has decentralized procurement authority to its agencies and ministries, which spend over £20 billion each year for goods and services (about $32.3 billion, assuming that $1 U.S. = £0.62). According to officials with HM Treasury, most procurement prior to the reforms went through several central procurement departments, which supplied everything from pencils to large computer systems. Now, agencies and departments are, for the most part, responsible for their own procurement, although they are expected to adhere to standards that are part of HM Treasury’s broad strategy for procurement. These standards include achieving value for money; emphasizing fair competition; incorporating best practices; and carefully assessing and managing business cases, risks, and contracts. HM Treasury officials told us that their procurement system was generally not used to advance any social objectives. However, UK officials pointed out that in the procurement area, the UK cannot act unilaterally and is required to implement laws compatible with directives promulgated by the European Community, such as ensuring that relevant contracts are awarded objectively.
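The dollar equivalents quoted for sterling amounts in this section follow from the stated rate of $1 U.S. = £0.62. A minimal sketch of that arithmetic (the constant and function names are ours, not from the report):

```python
# Convert pounds sterling to U.S. dollars at the report's assumed
# rate of $1 U.S. = £0.62 (so £1 is worth $1 / 0.62, about $1.61).
POUNDS_PER_DOLLAR = 0.62

def pounds_to_dollars(pounds: float) -> float:
    """Approximate U.S.-dollar value of a sterling amount."""
    return pounds / POUNDS_PER_DOLLAR

# The £20 billion in annual goods-and-services spending cited above:
print(f"${pounds_to_dollars(20e9) / 1e9:.1f} billion")  # $32.3 billion
```

The same division reproduces the other sterling conversions quoted later in this section, such as TBA's £272 million in 1997 sales (about $439 million).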
As part of the trend toward decentralization and getting government to run more like business, the government separated its service delivery and policy formulation functions. In February 1988, the government launched the “Next Steps” initiative, referring to the next steps in improving government management. Under the initiative, the government identified areas of departmental work that could be grouped together into operational units under single officials who would be accountable directly to their ministers for delivering specific objectives, services, and results. The government looked critically at its service delivery functions and determined whether each should be retained, reengineered, privatized, contracted out, or abolished. As a result of this process, several next steps agencies were established. Next steps agencies operate within a framework with targets set by ministers for the task to be done, the results to be achieved, and the resources to be provided. The day-to-day responsibility for running the organization is delegated by ministers to a chief executive, who is to have the management tools and freedoms needed to do the job. Each next steps agency has a public framework document, so that everyone can know the framework within which the agency operates. It includes the aims and objectives of the agency, its financial and accounting processes, and its approaches to pay and personnel issues. The frameworks for each agency vary; however, they generally are intended to provide the chief executive with much greater flexibility than if the units were operating within a traditional government department. As of October 1997, there were about 120 next steps agencies with staff numbering about 362,000, or about 77 percent of the civil service. TBA was established in 1991 as a next steps agency and is part of the Cabinet Office, which is the UK’s central department for policy formulation, government management, and the civil service. 
TBA was similar to FSS in that it offered departments and agencies nonmandatory supply arrangements for common-use goods and services. TBA sought to provide a center of procurement excellence within the public sector and to help customers secure better value for money than they could otherwise achieve. TBA’s framework document included objectives to provide procurement services so that agencies could receive better value for money than they would otherwise and to bring about improvements in cost effectiveness and the quality that agencies receive from suppliers. TBA offered a range of procurement services that were similar to FSS’ supply activities. These included pretendered “direct call-off” contracts covering over 50,000 products and services; the “Pathfinder” service for larger or more complex procurements; and direct sales and spot buying, where TBA coordinated volume purchases or assisted with complex items or items that were difficult to source. TBA had a catalogue of goods and services for its direct call-off contracts. TBA was to be self-sufficient financially and derived its income from commissions paid by departments and agencies related to the direct call-off arrangements and direct charges for services. In 1997, TBA had sales of £272 million (about $439 million). A difference between TBA and FSS was that TBA appeared to have more managerial and financial flexibilities because of its status as a next steps agency. Although we did not do a comprehensive analysis of TBA’s status as a next steps agency and FSS’ status within the U.S. government, next steps agencies generally have greater flexibility with regard to how they manage their finances and human resources than traditional government departments in the UK that operate within a government structure. In the case of TBA, its framework document specified that the chief executive had the authority to seek flexibility in the personnel area, subject to approval by HM Treasury.
According to TBA’s Procurement Director, the specific personnel flexibilities that TBA had included the ability to set its own staffing levels by taking on or releasing staff as the business need arose. TBA also could set its own pay scale. According to this official, if TBA needed to increase the pay of procurement specialists to compete with a tight labor market, it could obtain approval to do so rather quickly. Like FSS, TBA operated on what is called a “trading fund basis” in the area of financial management, which meant it was self-supporting and received no revenue from the central government. Unlike FSS, however, TBA could retain its revenue after covering operating costs and other financial obligations. In contrast, FSS generally had to return excess revenue to the U.S. Treasury after recovering its costs. TBA also had the authority to commit to capital expenditures or asset disposals up to £250,000 ($403,226). Another difference between FSS and TBA was that TBA did not operate distribution centers or government stores. Also, unlike FSS, TBA provided its services to local government. In addition to TBA, we noted that for some types of office supplies, departments and agencies could use a former government agency that was privatized. In 1996, the government privatized Her Majesty’s Stationery Office (HMSO), now referred to as The Stationery Office (TSO). In addition to being the official publisher of government documents, similar to the Government Printing Office in the United States, HMSO provided letterhead stationery and other office supplies to departments and agencies. Today, TSO is a nonmandatory source in the private sector that departments can also use to meet some of their office supply needs, a function similar to that of FSS. In the UK, departments and agencies purchased and maintained their own vehicles and could go directly to the private sector.
According to a TBA official we interviewed, TBA could assist agencies with vehicle purchases, if requested. However, these services were not required as they are with FSS. TBA also did not operate, nor does the government have, a central fleet that is similar to the FSS interagency fleet. It is important to note that according to HM Treasury and MOD officials, MOD had the majority of the nontactical vehicles and had purchasing arrangements with vehicle suppliers. These officials said that other departments and agencies often “piggyback” these contracts to take advantage of the favorable prices MOD gets. We also noted that the government had a small fleet of 160 cars within the Cabinet Office known as the Government Car and Despatch Agency; however, these cars were to be used for courier services and to transport top officials only. In the vehicle area, MOD was in the midst of developing an arrangement that was like a public-private partnership for its entire “white fleet” of vehicles used for nontactical, administrative, and support functions. This project was being done as part of a major UK procurement reform effort, known as the private finance initiative (PFI). PFI is designed to meet major capital investment needs by having the private sector finance capital assets and having the government or users pay for the service. HM Treasury had established a special task force to improve the PFI procurement process and to assist departments and agencies with implementing PFI projects. The PFI project for the entire white fleet was under development at the time of our review, and test projects for two portions of the white fleet were among 115 PFI projects that were in progress. According to MOD officials, preliminary data on these test projects showed reductions in cost of 15 and 27 percent for these two portions compared to in-house alternatives.
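The cost reductions MOD reported for the white-fleet test projects are straightforward percentage comparisons against the in-house alternative. A sketch of that calculation (the baseline cost figures below are hypothetical; only the 15 and 27 percent outcomes come from MOD):

```python
# Percent cost reduction of a PFI option relative to the in-house
# alternative. The cost figures used below are illustrative only.
def pct_reduction(in_house_cost: float, pfi_cost: float) -> float:
    """Savings from the PFI option as a percentage of in-house cost."""
    return (in_house_cost - pfi_cost) / in_house_cost * 100

# PFI costs of 85 and 73 against an in-house baseline of 100 (any
# currency unit) correspond to MOD's reported 15 and 27 percent.
print(round(pct_reduction(100, 85)))  # 15
print(round(pct_reduction(100, 73)))  # 27
```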
Under the planned PFI arrangement for the entire white fleet, the private sector was to invest in, manage, and operate the vehicles necessary to deliver an agreed-upon level of service to MOD under a long-term contract. MOD officials said that they were pleased the government had given them tools such as PFI and had moved to a decentralized purchasing environment. Appendix IV provides an overview of the PFI initiative and a more detailed description of MOD’s white fleet PFI efforts. CCTA, which is also part of the Cabinet Office and became a next steps agency in 1996, was similar to FTS in that it assisted departments and agencies in acquiring telecommunications services and IT systems and related services on a nonmandatory, cost recovery basis. CCTA’s main objective in its framework document was “to develop, maintain, and make available, expertise about IT which public sector organizations will draw on in order to operate more effectively and efficiently.” According to a CCTA official, its new mission statement emphasized “championing electronic government.” CCTA managed contracts to operate about 80 percent of the government’s telephone lines, involving almost 45,000 extensions. Similar to the FTS telecommunications contracts, CCTA was to charge government users a flat fee for each line. According to the chief executive of CCTA, departments and agencies, especially the smaller ones, liked the simplicity of dealing with CCTA. Some larger agencies, such as MOD, had chosen to procure their own telecommunications services. The telecommunications industry in the UK is dominated by British Telecom, which is a major supplier to CCTA. We noted that unlike FTS, CCTA could provide its services to local government. In the IT area, CCTA was similar to FTS in that for a fee, it advised departments and agencies on, and identified vendors that could assist with, IT management, systems analysis and design, and procurement.
CCTA’s work also involved full Internet service, including Internet site provision, development, maintenance, and consulting; and advice on electronic commerce. It also had written many publications to help departments and agencies on such topics as IT systems strategy, benchmarking, and business process reengineering. According to an FTS official, GSA’s Office of Governmentwide Policy has activities similar to these. CCTA also had a catalogue of IT products and services, like FSS. We noted that TBA also had contract arrangements for IT products and services; however, a CCTA official told us that CCTA and TBA see the goods and services they offer in the IT area as complementary, with little overlap. Despite the similarities between CCTA and FTS in the telecommunications and IT areas—and FSS in the case of the catalogue of IT products and services—there was a difference related to CCTA’s status as a next steps agency. That is, like TBA, CCTA appeared to have greater managerial and financial flexibility because of its status as a next steps agency. For example, like TBA, responsibility for personnel management, including developing its own pay and grading system, was delegated to CCTA’s chief executive. The chief executive was given the freedom to manage CCTA on a quasi-commercial basis within the framework of government accounting rules. According to a CCTA official, CCTA was, for the most part, left alone to run its own affairs so long as its operations ran smoothly and in accordance with its business plan. Because it was a government organization, however, there were some requirements CCTA had to meet. For example, the business case for its pay and grading system had to be approved by HM Treasury. According to HM Treasury officials, the government of the UK views performance measurement as crucial to any core business activity, including procurement. 
However, HM Treasury and the Cabinet Office recognized that, in the past, developing performance measures for procurement was difficult. Difficulties arose over defining universally applicable measures and questions were raised about whether the effort was worth it. In July 1998, HM Treasury and the Cabinet Office jointly reported that changes in the procurement environment, such as the shift to purchasing services instead of investing in capital assets, had opened the door of opportunity for refining and improving procurement performance measurement. This report, entitled Efficiency in Civil Government Procurement, noted that although most departments and agencies measured procurement performance, their practices varied. The majority were using measures that were not very sophisticated, although some progress had been made in the prior 12 to 18 months. As a result, HM Treasury and the Cabinet Office were planning to develop a performance measurement system for procurement that would allow benchmarking across government and would increase the sophistication of the measures used by modeling the government’s efforts after the private sector. The report contained several other recommendations aimed at setting a new agenda for improving the efficiency of government procurement. Also, at the time of our review, HM Treasury was starting an effort to determine how, in a decentralized environment, departments and agencies could share knowledge, capitalize on lessons learned, and ensure that efficiencies gained in one area are utilized in other areas. Although HM Treasury’s report did not address the adequacy of specific performance measures, we noted that TBA and CCTA had some key performance measures that they used to compare performance from year to year. TBA had performance measures that included total sales volume, customer satisfaction, and cost per £1 of savings achieved.
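TBA's headline efficiency measure, operating cost per £1 of savings delivered, is a simple ratio expressed in pence. A sketch of the arithmetic (the £438,000 cost and £10 million savings split below is a hypothetical illustration chosen to reproduce the 4.38 pence result TBA reported for 1997):

```python
# TBA's measure: pence of operating cost per £1 of savings delivered
# (100 pence = £1). Lower is better. The £438,000 / £10 million split
# is illustrative; only the 4.38p result and 4.40p target are TBA's.
def cost_per_pound_saved(operating_cost_pounds: float,
                         savings_pounds: float) -> float:
    """Pence of operating cost incurred per £1 of savings."""
    return operating_cost_pounds / savings_pounds * 100

reported = round(cost_per_pound_saved(438_000, 10_000_000), 2)
target = 4.40
print(reported)           # 4.38
print(reported < target)  # True: below (i.e., better than) the target
```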
For example, TBA reported it cost 4.38 pence for every £1 saved (100 pence is equal to £1), bettering its 1997 target of 4.40 pence. CCTA had performance measures that included the reduction in cost of support services per £1 of salary of project staff and percentage of assignments or services delivered to customers’ satisfaction. CCTA reported a 97 percent customer satisfaction rating for 1998, although it noted that more feedback from customers was needed to make the results statistically significant. Australia, with a population of about 18 million, has a federal-state system with a central government that operates as a parliamentary democracy. The central government, which has devolved purchasing responsibilities for goods and services to its agencies, did not have an organization with activities like those of FSS, but did have an agency that performed some activities similar to those of FTS. In purchasing goods and services, agencies in Australia were encouraged to follow broad principles—such as achieving value for money—that were set by the Department of Finance and Administration (DOFA). DOFA also administered a vendor certification program for certain goods and services. However, unlike FSS, it did not enter into governmentwide supply contracts with vendors, administer supply schedules, or run a stock program with distribution centers or government stores. Also unlike FSS, Australia did not own and operate a central vehicle fleet, because it had been privatized. In the telecommunications and IT areas, however, Australia did have an agency with activities similar to those of FTS. In telecommunications, officials with the Office for Government Online (OGO) said that OGO had agreements with service providers on certain terms and conditions; however, unlike with FTS in the United States, agencies were required to use these providers. Like FTS, OGO also assisted agencies, on a nonmandatory basis, with IT projects; however, this role was relatively minor.
In fact, agencies were moving away from operating and maintaining their own IT infrastructures. The government had undertaken a major initiative to phase out IT systems acquisition and ownership—except for some systems related to national security—and instead have agencies purchase IT services from the private sector. This outsourcing effort was being done through a multiyear, phased process being administered by the Office of Asset Sales and IT Outsourcing (OASITO). To assess the outcomes of these and other procurement reforms, a committee of the Australian parliament had begun a review of government purchasing policies and practices. Appendix I identifies the key organizations in Australia and summarizes their activities. The government of Australia has devolved purchasing responsibilities to its agencies, which spent about $9 billion Australian for goods and services in fiscal year 1997-1998 (about $6 billion U.S., assuming that $1 U.S. = $1.51 Australian). With enactment of the Financial Management and Accountability Act of 1997, the government gave agencies the responsibility to handle their affairs and to ensure that the government’s procurement policies were observed. DOFA’s Competitive Tendering and Contracting branch had a key role in Australia’s procurement reform agenda. This branch, among other things, provided assistance to agencies in implementing reforms, surveyed and reported on agencies’ implementation efforts, and developed and maintained the government’s purchasing policy framework. The government’s procurement policies were set by DOFA in its March 1998 guidance entitled Commonwealth Procurement Guidelines: Core Policies and Principles. The guidelines stated that the fundamental objective of procurement in the Australian government was to provide the means to efficiently and effectively deliver the government’s programs.
This objective, according to the guidance, was supported through several core principles: value for money, open and effective competition, ethics and fair dealing, accountability and reporting, national competitiveness and industry development, and support for other government policies. The guidance also encouraged agencies to provide opportunities for Australian and New Zealand industry. However, DOFA officials said that their procurement system was not used to advance other social objectives. When developing instructions for procurement within their agencies, agency executives were expected to take these core policies and principles into account. According to DOFA officials, the government decided that its agencies should be involved only in core, mission-related activities and should not be performing functions that could be performed by the private sector. DOFA officials said that most agencies were pleased with the devolution that had occurred. An official from a large agency we interviewed, the Department of Family and Community Services, said that they liked having more control over purchasing decisions and were very satisfied with the reforms. As a result of the devolution of purchasing responsibilities to agencies, Australia did not have an organization like FSS to assist agencies with the procurement of supplies. In the supply area, DOFA administered a vendor certification process for IT, office machines, office furniture, and auction services known as the Endorsed Supplier Arrangement (ESA). The ESA was to rely on a good faith, self-assessment approach where vendors submitted information about key factors, such as delivery performance and financial viability. According to DOFA officials, DOFA was to assess vendors’ applications in terms of financial capability and compliance with industry standards. These officials told us that DOFA also did random and targeted vendor reviews. 
A key difference between the ESA and FSS’ schedule programs was that DOFA did not establish governmentwide supply contracts with the vendors, as does FSS. Another difference was that unlike FSS’ schedule programs, agencies were required to buy IT from ESA vendors. According to DOFA officials, DOFA and one of its predecessor departments, the Department of Administrative Services (DAS), used to administer “common use arrangements” that were replaced with the ESA. The common-use arrangements more closely resembled FSS supply activities in that they were governmentwide contractual agreements administered centrally. The non-IT arrangements were ended in June 1998, and IT and major office machine arrangements were ended in September 1998. The officials said that the primary reason for eliminating these arrangements was the government’s ideological decision to devolve financial accountability to agencies. In addition, they said that the arrangements generally were not achieving a level of savings that would justify continuing them. Also, the officials added that the government did not administer a stock program, like FSS does, with supply distribution centers or government stores. In the vehicle area, DAS used to manage the government’s vehicle fleet, known as DASFLEET, up until its privatization in 1997. DASFLEET was established in the 1920s and was expanded to become the sole supplier of passenger and commercial vehicles for the Australian government. DASFLEET operated three main business areas: long-term vehicle leasing, short-term vehicle rental, and fleet management and maintenance services. Prior to its sale, agencies were free to use private sector operators for their short-term rentals and fleet management and maintenance requirements. In practice, however, these customers used DASFLEET for much of these needs. In early 1997, DASFLEET’s total fleet was valued at $376 million Australian (about $249 million U.S.) and comprised over 17,000 vehicles. 
DASFLEET owned these vehicles, except for about 700 that were privately financed or managed by DASFLEET for other parties. DASFLEET’s workforce totaled 376 people, and its yearly profits were about $23 million Australian (about $15.2 million U.S.). In 1996, the Department of Finance reviewed DASFLEET’s finances and operations and determined that the government should either refinance the fleet or privatize the business. The privatization option would include a tied contract commitment by the government whereby agencies would be required, for 5 years, to use the new entity for their long-term leasing needs. The short-term vehicle rental business would not be included in the tie. The government ultimately determined that the privatization option provided the best option and assigned responsibility for the sale to the Office of Asset Sales (OAS). In September 1997, DASFLEET was sold to Macquarie Fleet Leasing Pty. Limited, a wholly owned subsidiary of Macquarie Bank. The sale produced proceeds of about $407 million Australian (about $270 million U.S.). At the time of our review, DOFA had responsibility for monitoring the tied contract. As with other goods and services, agencies in Australia were responsible for acquiring their own telecommunications services. However, officials told us that agencies were required to use service providers that had agreed to certain terms and conditions with the Office for Government Online (OGO), formerly known as the Office of Government Information Technology (OGIT). Each year, government agencies spend about $365 million Australian (about $242 million U.S.) to meet their telecommunications needs, including voice, data, and mobile services. Telstra is the major service provider, accounting for just over 75 percent of government expenditures on telecommunications services, although a number of other smaller companies also compete for the government’s business. 
OGO sought to aggregate the government’s buying power to achieve a better price for the government as a whole, like FTS does. According to OGO officials, OGO managed centrally administered “whole-of-government” telecommunications arrangements where service providers agreed to certain terms and conditions in “head agreements” negotiated by OGO. The officials said that agencies were to purchase services directly from the service providers under the umbrella of the head agreements and the latest prices negotiated in those agreements. The officials added that for agency-specific requirements, agencies could seek the assistance of OGO in negotiating favorable terms and conditions that became part of the whole-of-government arrangements and were available to other agencies, as appropriate. In this way, the officials said that the government used its aggregated purchasing power to achieve lower prices, competition, and economies of scale. According to the OGO officials, the government has saved in excess of $30 million Australian (about $20 million U.S.) in the last 3 years through the leverage of these arrangements. OGO was also a central agency for IT. OGO’s primary objectives in the IT area related to bringing a governmentwide perspective to IT management. The agency’s main focus was to promote efficient access to government information and services, help agencies avert problems related to the year 2000 crisis, and provide policy advice to the government related to online services. OGO officials said that, like FTS, OGO also acted as a third party in providing agencies with advice on the development and implementation of their IT projects, although this role was relatively minor. A major development in the IT area was that agencies were moving away from in-house implementation and management of IT systems. 
In April 1997, the government announced a major initiative to outsource all of its IT systems infrastructure, with the exception of some systems related to national security. The initiative was to be accomplished through a multiyear, phased process currently being administered by the Office of Asset Sales and IT Outsourcing (OASITO). Appendix V provides a more detailed description of Australia’s IT outsourcing initiative. In December 1998, the Australian parliament’s Joint Committee for Public Accounts and Audit announced that it would conduct an inquiry into Australian government purchasing policies and practices. The inquiry was to have two general purposes. First, the Committee was interested in whether government entities had achieved effective outcomes, such as value for money, through the new purchasing policies. Second, the Committee was interested in whether the Australian business community had achieved more equitable outcomes as a result of these policies. To determine how government entities had performed, the Committee planned to collect and analyze statistical and performance information showing trends in purchasing opportunities and outcomes and planned to hold a series of hearings. According to DOFA officials, the extent to which agencies maintained this type of information, including information on performance goals and measures, likely varied across government, with some agencies having better data than others. There has, according to these officials, been no central effort to collect and report this type of information. Separate from the parliamentary inquiry, DOFA officials said they had begun surveying agencies on the types of performance data they collected. New Zealand, with a population of about 3.6 million, has a local government structure with counties and districts and a central government that operates as a parliamentary democracy. 
The central government has decentralized purchasing authority for goods and services, and did not have any government organizations similar to FSS or FTS because it privatized its central procurement agency in 1992. With the exception of some central monitoring for major IT projects, agencies were given complete discretion over how they acquired goods and services while still being expected to follow some general principles, such as ensuring that domestic suppliers were treated fairly. In meeting their needs for goods and services, agencies could go directly to the private sector and had the option of using the private sector business that was created when the central procurement agency was privatized. This business, called GSB Supply Corporation Ltd. (Supplycorp), acted as a purchasing agent for the government by assisting agencies with their procurement needs and did business only with government organizations. Supplycorp was similar to FSS and FTS in that it negotiated contracts on behalf of the government for supply, fleet, telecommunications, and IT products and services, yet it operated completely outside the government sector. One exception to New Zealand’s highly decentralized approach to procurement was in the IT area, where the Treasury and the State Services Commission (SSC) were responsible for examining and monitoring IT projects. SSC was also responsible for assessing overall agency performance with a focus on measuring outputs rather than inputs. Because procurement was viewed as an input, performance data were not readily available to measure progress or gauge the results of the various procurement reforms. Appendix I identifies the key organizations in New Zealand and summarizes their activities. Over the last decade, reform in the central government of New Zealand has centered on shifting accountability for results to departments and relying more on the private sector to perform activities of a business nature. 
The New Zealand government spends about $3 billion New Zealand on goods and services each year (about $1.7 billion U.S., assuming that $1 U.S. = $1.78 New Zealand). With enactment of the State Sector Act of 1988 and Public Finance Act of 1989, departments were given complete discretion over how they managed their affairs, including how they acquired goods and services. The reforms also set up a relationship between each department and SSC, which is a central management agency that reviews and reports on agency performance. Departmental chief executives, who are civil servants who manage the day-to-day affairs of departments, enter into agreements with SSC to deliver results that are defined as an agreed-upon level of outputs. Generally speaking, outputs are measurable units of whatever the department produces, whether it is policy advice or direct services to the public. In return, departments had nearly complete freedom over how much of their budgets they spent on the different types of resources—the inputs they needed to produce the outputs—and where they would purchase them. Although departments had these freedoms, they were still expected to operate open, fair, and competitive procurement processes. Guidance by both the Treasury and the Ministry of Commerce outlined the government’s open purchasing policy and principles and recommended procedures that are considered to be consistent with sound business practices. The government’s general purchasing policy was based on the commercial principle of best value for money through open and effective competition and full and fair opportunity for New Zealand and Australian suppliers. According to SSC officials, the policy did not seek to advance any social objectives, which instead were usually funded directly. New Zealand and Australian suppliers could register with the New Zealand Industrial Supplies Office (NZISO), a unit within the Ministry of Commerce. 
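The U.S.-dollar figures shown in parentheses throughout this report are simple conversions at the stated exchange rates. As a sketch of that arithmetic, using the stated rate of $1 U.S. = $1.78 New Zealand (the helper function below is ours and purely illustrative, not part of the report):

```python
# Illustrative only: a small helper (not from the report) showing how the
# parenthetical U.S.-dollar figures are derived from the stated exchange
# rate assumption of $1 U.S. = $1.78 New Zealand (as of May 2, 1999).
def to_us_dollars(amount, rate):
    """Convert a foreign-currency amount to U.S. dollars at the given rate."""
    return amount / rate

nz_spending = 3.0e9  # about $3 billion New Zealand per year
us_equivalent = to_us_dollars(nz_spending, 1.78)
print(round(us_equivalent / 1e9, 1))  # prints 1.7, i.e., about $1.7 billion U.S.
```

The Australian and Canadian figures elsewhere in the report follow the same pattern at their respective May 1999 rates.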
NZISO provided information to purchasers on domestic suppliers and their capabilities. Prospective purchasers were urged, but not required, to contact NZISO, which did not get involved in actual purchasing negotiations or decisions. Within this policy framework for procurement, the government of New Zealand did not have any central procurement agencies. In meeting their supply, fleet, telecommunications, and IT needs, departmental purchasers did not have the same options that exist in the United States with FSS and FTS. That is, within the government, there were no central supply schedules, stock programs with distribution centers or government stores, vehicle acquisition and fleet management services, governmentwide telecommunications arrangements, or IT-related services that were available for governmentwide use, as there are in the United States through FSS and FTS. The government of New Zealand once had a central procurement agency, but it was reorganized as a state-owned enterprise (SOE) in 1989 and privatized in 1992. Prior to this time, the Government Stores Board (GSB) acted as a central purchasing agent within the government. Chaired by the Secretary of the Treasury, it consisted of representatives from other departments and was administered by a division of the Treasury. The function of GSB was to act as a central controlling, supervisory, and coordinating authority for the purchase, custody, distribution, use, interdepartmental transfer, and disposal of public stores. GSB’s main service was to award contracts for the supply of goods to government departments. It did not stock goods; rather, it acted as an agent for the government by arranging bulk purchase contracts under which departments were required to purchase specific goods from selected suppliers. GSB also issued binding instructions to departments to regulate their purchasing activities. 
In 1989, the government reorganized GSB from a government agency to an SOE, renamed it the Government Supply Brokerage Corporation (NZ) Ltd. (GSBC), and made its services nonmandatory. The Treasury assumed responsibility for the former GSB’s control functions but did not issue any purchasing instructions to departments, leaving these matters to each department to determine. As an SOE, GSBC gained the ability to act as a private sector firm, but the New Zealand government owned all the shares of the corporation. The government eventually sold its shares in 1992 and a new private business—GSB Supply Corporation Ltd. (Supplycorp)—was established. At the time of our review, Supplycorp performed activities similar to those carried out by FSS and FTS in that it assisted agencies with procurement, yet it was a private sector business that operated completely outside the government sector. In fact, Supplycorp only did business with government organizations, defined as those that receive at least 50 percent of their funding from a government source. Unlike FSS and FTS, Supplycorp also provided its services to local governments. Each year, Supplycorp has sales of about $350 million to $450 million New Zealand (about $197 million to $253 million U.S.). According to Auditor General staff we interviewed, more than 90 percent of government departments and local authorities continued to use Supplycorp after it was privatized, to meet at least some of their needs for goods and services. In the supply area, it managed over 800 contracts with 1,200 suppliers for a wide range of common-use commodities, including IT products and services. These included national purchase contracts, as well as local purchase contracts tailored to individual regions of the country. It did not, however, operate distribution centers or government stores like FSS does. 
In the vehicle area, Supplycorp services covered the purchase, disposal, and management of new and used motor vehicles, similar to FSS’ services. Supplycorp arranged for the purchase of about 3,750 vehicles each year. Unlike FSS, it did not manage a central fleet for the government. Like FTS, Supplycorp arranged bulk rate telecommunications contracts. It is important to note that prior to the formation of Supplycorp as a private sector business, departments did not use GSB for telecommunications services because the government owned the sole telecommunications provider in New Zealand, Telecom Corporation. Departments simply went to Telecom for telecommunications services. In 1990, the government privatized Telecom and departments began arranging their own telecommunications contracts. In the IT area, like FTS, Supplycorp had a technology team that offered advice and consultations to departments and also had contract arrangements for IT products and services. Supplycorp also prided itself on being positioned to meet the future technology needs of the government. Before GSB was privatized, its Computer Services Division (CSD) was a nonmandatory source that assisted departments with procurement of large IT systems. In 1994, the government privatized CSD separately from Supplycorp. CSD was fully absorbed by the buyer and no longer exists. One exception to New Zealand’s highly decentralized approach to procurement was in the IT area, where SSC and the Treasury recently set up a monitoring team to conduct joint reviews of departments’ major IT projects. According to SSC officials, the frequency of review was determined by the monitoring team and depended on the complexity of the project and the capability of the department. Departments were expected to submit external quality assurance reports to the IT monitoring team, which assessed project risks and mitigation strategies, and SSC and Treasury provided program officials with feedback. 
In addition to this monitoring, SSC and the Treasury conducted higher profile reviews of high-risk projects and reported directly to ministers through what was called the Ad Hoc Officials IT Committee. According to SSC officials, projects reviewed by this Committee cost over $5 million New Zealand (about $2.8 million U.S.), involved strategic and mission-critical application systems, and generally posed a high risk to the government. The State Sector Act of 1988 was designed to introduce the government of New Zealand to many of the positive features of the private sector. The key principle was that managers, if they were permitted to make all input decisions—pay, appointments, organizational structures, production systems, etc.—would respond by accepting personal responsibility for producing substantially higher quality outputs—the goods and services provided by the government. As mentioned before, SSC played a key role by entering into performance agreements with departmental executives and monitoring performance. According to SSC officials, however, information that would enable an assessment of New Zealand’s approach to procurement was generally not available. According to these officials, procurement processes and approaches were viewed as inputs and accordingly were not routinely measured or assessed. Officials we contacted who operated in this environment—from the Ministry of Health, Health Funding Authority, and Ministry of Defence—were very satisfied with being held accountable for outputs while having the freedom to control inputs, including how they procured goods and services. They said that the reforms have made their departments more efficient and effective. It is important to note that although New Zealand’s management approach did not focus on regulating procurement practices, there were other controls over abuse of purchasing freedoms. 
These included parliamentary inquiries, audits of procurement practices by the Auditor General, and obligations set in law for departments to respond to any requests for information. Information on the various approaches used by these four countries provides insight into how they performed activities similar to those of FSS and FTS. These countries had reassessed the role of their central procurement agencies and procured goods and services in a variety of ways. None of the countries had government organizations that completely mirrored FSS and FTS. For example, in the UK, the government organizations that performed activities similar to FSS and FTS were different in that they had more flexibility to manage personnel and financial matters than traditional government departments. New Zealand sold its central procurement agency to the private sector, and agencies now could use the private sector business that was created to help meet their procurement needs. Also, there were similarities and differences in the programs and policies these countries used in the procurement of supplies, vehicles, telecommunications, and IT. Amending laws and regulations under which agencies operate and reforming procurement processes are not new concepts in the United States. For example, Congress authorized TVA, a government corporation, and some federal agencies such as VA and FAA to adopt alternative personnel systems. In addition to modifying requirements to help agencies accomplish their missions, the U.S. government has also reformed its procurement practices. The Federal Acquisition Streamlining Act of 1994 was enacted in part to promote efficiency and economy in contracting. In recent years, agencies have outsourced, or contracted for, a wide range of functions that had been done in-house. For example, the investigative unit of OPM was privatized and OPM now contracts for investigative services. 
To identify future candidates for privatization or outsourcing, the FAIR Act of 1998 requires agencies to identify functions they perform that are not inherently governmental. These reforms and streamlining efforts in the United States, as well as those in the four countries, were designed to make government operate more efficiently, improve service delivery, and focus on government’s core mission. In considering the merits of the approaches used by the countries and their applicability to FSS and FTS, it is important to recognize that such factors as differences in political and economic environments, the role of social objectives in the procurement process, and the volume of contracting activity would have to be considered. Furthermore, although the officials we interviewed in the four countries were generally satisfied with the reforms and believed their governments were better off with them in place, performance data on the effectiveness of the various reforms were generally unavailable or were in the early stages of development. Nonetheless, considering the experiences of these countries in reforming similar activities can serve as a starting point for examining what, if any, alternatives there are to the way FSS and FTS are currently organized and operate. OFPP’s Associate Administrator for Procurement Law and Legislation told us that OMB had no comments. Our FSS liaison, FTS’ Chief of Staff, and several responsible officials from each of the four countries provided technical comments on a draft of this report to add clarity and context to how we describe their procurement approaches. We incorporated their comments into the final report where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Honorable David J. Barram, Administrator of GSA; the Honorable Jacob J. 
Lew, Director of OMB; the Honorable Deidre A. Lee, Administrator of OFPP; and the key officials in each of the countries we visited. We will also make copies available to others on request. Major contributors to this report were Gerald Stankosky, David E. Sausville, and David W. Bennett. We also greatly appreciate the assistance provided by the Auditor General staffs in each country as well as the willingness of the other officials to meet with us and provide information. If you or your staffs have any questions, please contact me on (202) 512-8387 or at ungarb.ggd@gao.gov.

United States
Organizations: Federal Supply Service (FSS) and Federal Technology Service (FTS). Policy organizations: Office of Management and Budget’s Office of Federal Procurement Policy (OFPP); GSA’s Office of Governmentwide Policy (OGP).
Supply: FSS has nonmandatory, prenegotiated contract arrangements under which agencies deal directly with vendors to acquire a range of supplies, including IT goods and services; FSS also stocks supplies for resale to agencies from its distribution centers and government stores.
Vehicles: FSS is the mandatory source for vehicle purchases and operates a nonmandatory, interagency fleet for short-term agency needs.
Telecommunications: FTS has nonmandatory, local and long distance telecommunications service arrangements with private carriers that agencies can use.
IT: FTS assists agencies, on a nonmandatory basis, with acquiring IT systems and related services; FSS has an IT products and services schedule; FTS services differ from those of FSS in that FTS acts as a third party; with the FSS schedule, agencies deal directly with the IT vendors.

Canada
Organizations: Public Works and Government Services Canada’s Supply Operations Service (SOS) and Government Telecommunications and Informatics Service (GTIS) branch.
Supply: Like FSS, SOS had prenegotiated contract arrangements under which agencies dealt directly with vendors for supplies, including IT products and services; however, unlike FSS, they were mandatory over certain dollar thresholds; also unlike FSS, SOS had no distribution centers or government stores.
Vehicles: Like FSS, SOS was the mandatory source for vehicle purchases but did not operate a central fleet.
Telecommunications: Like FTS, GTIS had nonmandatory, local and long distance telecommunications service arrangements with private carriers that agencies could use.
IT: SOS had a mandatory role in assisting agencies with IT systems acquisitions if they exceeded certain dollar thresholds; SOS also had prenegotiated arrangements for IT products and services, like FSS; GTIS assisted smaller agencies and those without IT expertise, on a nonmandatory basis, with identifying their IT needs.

United Kingdom
Organizations: The Buying Agency (TBA); Central Computer and Telecommunications Agency (CCTA). Policy organization: Her Majesty’s Treasury (HM Treasury).
Supply: Like FSS, TBA had nonmandatory, prenegotiated contract arrangements for supplies, under which agencies dealt directly with vendors, including some for IT products and services; unlike FSS, TBA did not have distribution centers or government stores.
Vehicles: Unlike in the United States with FSS, agencies could go directly to the private sector to purchase vehicles and there was no central fleet agencies could use; TBA could assist agencies with vehicle acquisition and agencies could use vehicle purchase arrangements held by the Ministry of Defence (MOD).
Telecommunications: Like FTS, CCTA had nonmandatory, local and long distance telecommunications service arrangements with private carriers that agencies could use.
IT: Like FTS, CCTA assisted agencies, on a nonmandatory basis, with acquiring IT systems and related services; like FSS and its sister agency TBA, CCTA also had a schedule of IT products and services.

Australia
Organizations: Office for Government Online (OGO); Office of Asset Sales and IT Outsourcing (OASITO). Policy organization: Department of Finance and Administration (DOFA).
Supply: Unlike in the United States with FSS, the government did not have any centrally administered supply contract arrangements and had no distribution centers or government stores; DOFA did, however, administer a vendor certification program that was mandatory for IT.
Vehicles: Unlike in the United States with FSS, agencies went directly to the private sector for vehicles and there was no central fleet; the government used to have a central fleet, but it was privatized in 1997; agencies were required to use the privatized fleet for a 5-year period to meet their vehicle needs.
Telecommunications: Like FTS, OGO sought to aggregate the government’s buying power through agreements it had with telecommunications service providers; however, unlike with FTS in the United States, agencies were required to use those providers.
IT: OGO had a minor role assisting agencies in the acquisition of IT systems; there was no major government initiative to assist agencies in acquiring IT systems because of a new initiative to phase out IT systems ownership and instead outsource, or contract for, IT services; this initiative was being done through a multiyear, phased process being administered by OASITO.

New Zealand
Organization: GSB Supply Corporation Ltd. (Supplycorp).
Supply: Unlike in the United States with FSS, the government did not have any centrally administered supply contract arrangements and had no distribution centers or government stores; Supplycorp was a nonmandatory source in the private sector that, like FSS, had similar arrangements under which agencies dealt directly with vendors to acquire a range of supplies, including IT products and services; Supplycorp also did not have distribution centers or government stores.
Vehicles: Unlike in the United States with FSS, agencies went directly to the private sector for vehicles and there was no central fleet; Supplycorp had prenegotiated contract arrangements for vehicle purchases that agencies could use, and could assist agencies in obtaining fleet management services.
Telecommunications: Unlike with FTS in the United States, agencies went directly to the private sector for telecommunications services; Supplycorp had prenegotiated contract arrangements for telecommunications services that agencies could use.
IT: Agencies went directly to the private sector for IT systems and had the option of using Supplycorp for assistance; Supplycorp had a technology team that offered advice and consultations, like FTS has, and a schedule of IT products and services, like FSS has; major IT projects were to be examined and monitored by SSC and the Treasury.

Notes: TBA and CCTA are executive or “next steps” agencies, which means they had more flexibility than traditional government departments in how they managed their finances and personnel. Supplycorp is a private sector business that sells only to government agencies.

Our objective was to identify the organizations, policies, and programs that Canada, the United Kingdom (UK), Australia, and New Zealand had in place to assist agencies with the procurement of supplies, vehicles, telecommunications, and IT. To meet this objective, we obtained information on FSS’ and FTS’ procurement activities by interviewing top FSS and FTS officials as well as program officials in the four business lines. We also held discussions with GSA’s OGP and OFPP within OMB. We collected information on federal procurement through research on the Internet and by reviewing our past work. 
We also reviewed procurement-related laws and regulations, such as the Federal Acquisition Streamlining Act of 1994, the Federal Activities Inventory Reform Act of 1998, and the Federal Acquisition Regulation. To select countries for the review, we first determined, on the basis of available resources and the time frames for the assignment, that we could collect and analyze information on four countries. We then conducted research, relying heavily on the Internet, as well as discussions with officials at the World Bank and Department of State, to identify Western industrialized countries that had made a commitment to procurement reform and would be candidates for selection. On the basis of this work, we selected Canada, the UK, Australia, and New Zealand because they had made such a commitment, and preliminary work showed they had reformed activities similar to those carried out by FSS and FTS. Our work was limited to the activities of the central or federal governments in these countries. We confirmed our selections primarily through further discussions with our counterpart organizations—the Auditor General offices—in each of the countries as well as the Department of State offices for each of the countries. We also held discussions with the embassy of New Zealand in Washington, D.C. and the U.S. embassies in Ottawa, Canada, and London, UK. It is important to note that the four countries were judgmentally selected and were not intended to be representative of how countries around the world were reforming similar activities. To collect information on the organizations, programs, and policies in these countries, we visited them and interviewed key officials about their operations. To identify which officials would provide information that would help us best meet our objective, we relied heavily on advice from the Auditor General staffs. 
The Auditor General staffs then arranged the interviews, provided us with relevant material, and assisted us with other logistical matters related to the visits. In Canada, the central procurement department also played a vital role in identifying key officials and arranging the interviews. In each country, we interviewed officials in any central procurement organizations involved in the procurement of supplies, vehicles, telecommunications, and IT. We also interviewed knowledgeable officials in organizations that set procurement policy such as each country’s Treasury department or equivalent; selected agencies that were the end-users of the procurement organizations, programs, and policies in place; and the Auditor General offices. In Australia, we held discussions with staff from a parliamentary committee conducting an inquiry into procurement practices. We also interviewed the general manager of a private sector business in New Zealand that assisted the government with procurement. In doing our work, we also analyzed a wide range of material on the organizations, programs, and policies in the four countries. Tables identifying the organizations in each country discussed in this report and their Internet addresses appear at the end of this appendix. After collecting the information, we compared these countries’ operations to how FSS and FTS assist agencies with the procurement of supplies, vehicles, telecommunications, and IT. It is important to note that we did not do a comprehensive comparison. That is, in each of the four FSS and FTS business lines, we focused on the major activities that FSS and FTS perform and determined how each country carried out similar activities. 
For example, for supply and procurement, we determined whether there were central supply contracts in the countries that agencies could use that were similar to those available through the FSS supply schedules and special order arrangements and whether the countries had operations similar to the FSS stock program. For vehicle acquisition and leasing, we focused on whether vehicles were purchased centrally and whether each country had a central fleet like the FSS interagency fleet. For network services and IT solutions, we focused on whether, in general, the countries had central sources from which agencies could obtain telecommunications services or assistance with the acquisition of IT systems and related services. In using this approach, we recognize that we did not focus on all the specific characteristics of the activities FSS and FTS perform in each of the business lines. Resource and time constraints prevented us from doing a detailed comparison and from assessing the applicability of the approaches used by these countries to FSS and FTS operations. We also did not analyze or verify the laws cited by the foreign officials or contained in documents they provided. Although we did ask the countries for performance data related to their procurement organizations and activities, we did not independently verify any data we obtained or assess the effectiveness of the reform initiatives. Finally, we did not verify the barriers cited by FSS and FTS or assess the effectiveness of reforms implemented in the United States. We did our work at FSS and FTS offices in Arlington and Falls Church, VA, respectively, and OGP and OFPP offices in Washington, D.C. In our visits to the countries, we did work in the cities of Ottawa and Hull in Canada; London and Bath in the UK; Canberra, Australia; and Wellington, New Zealand. In discussing the organizations in the countries, we used terms such as “agency” and “department” interchangeably. 
The exchange rates used throughout the report were as of May 2, 1999; we obtained them from the Federal Reserve Bank of New York and rounded them to the nearest cent. We performed our work between July 1998 and May 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OMB, Administrator of GSA, and key officials in the countries we visited. OMB had no comments. In response to the request for comments from the Administrator of GSA, FSS and FTS officials provided comments on a draft of this report, as did responsible officials from the four countries. Tables II.1 through II.5 identify the organizations in the United States and each of the four countries discussed in this report and their Internet addresses.

Table II.1: United States
General Services Administration (GSA)
GSA/Federal Supply Service (FSS)
GSA/Federal Technology Service (FTS)
GSA/Office of Governmentwide Policy (OGP)
Office of Federal Procurement Policy (OFPP)

Table II.2: Canada
Department of National Defence (DND)
Department of Public Works and Government Services Canada (PWGSC)
PWGSC/Government Telecommunications and Informatics Services (GTIS)
PWGSC/Supply Operations Service (SOS)
Office of the Auditor General of Canada (OAG)
Treasury Board Secretariat of Canada (TBS)

Table II.3: United Kingdom
Central Computer and Telecommunications Agency (CCTA)
Her Majesty’s Treasury (HM Treasury)
HM Treasury’s task force on PFI
Ministry of Defence (MOD)
National Audit Office (NAO)
The Buying Agency (TBA)
The Stationery Office (TSO)

Table II.4: Australia
Australian National Audit Office (ANAO)
Department of Family and Community Services (DFaCS)
Department of Finance and Administration (DOFA)
Joint Committee for Public Accounts and Audit (JCPAA)
Office of Asset Sales and IT Outsourcing (OASITO)
Office for Government Online (OGO)

Table II.5: New Zealand
GSB Supply Corporation Ltd. (Supplycorp)
Health Funding Authority (HFA)
Ministry of Commerce
Ministry of Defence (MOD)
Ministry of Health
Office of the Controller and Auditor-General of New Zealand
State Services Commission (SSC)

Benefits Driven Procurement (BDP) is a new approach the Canadian government has started to use to help ensure the success of complex acquisition projects traditionally characterized as having significant risk. BDP stresses a focus on results and on the benefits that the government and its suppliers can gain from each acquisition project. Developed by the Canadian government in collaboration with Canadian industry, the BDP approach is designed to avoid the pitfalls that beset many complex projects—delays, cost overruns, and end results that often fall far short of expectations. BDP was first developed to solve problems with major IT acquisitions, but according to Canadian government officials, the concept has a broad application and is relevant to a wide range of complex, high-risk acquisitions. According to information on BDP from Public Works and Government Services Canada (PWGSC), Canada’s central procurement agency, major IT projects, which are among its most complex procurement projects, have a history of failure. Research done in the United States and Canada supports this assertion. For example, in 1990, the President’s Council on Management Improvement cited the “unwieldy procurement process” as a reason IT projects often failed. Other reasons cited in this report included lack of top management commitment, inadequate planning, inadequate user input, and flawed technical approaches. In 1997, KPMG Consulting conducted a survey of IT projects in Canada and reported that the reasons for failure among IT projects were poor planning, a weak business case, and lack of top management involvement. A study by the Standish Group in the United States showed that 31.1 percent of U.S. 
IT development projects were cancelled before completion; about 53 percent of the projects were likely to cost 189 percent of their original estimates; and only 16.2 percent of software development projects were completed on time and within budget. Concerned about problems with IT acquisition, the Treasury Board of Canada developed a framework of management policies in 1996 that comprised best practices, principles, methodologies, and tools and standards aimed at ensuring a better success rate. Part of this framework addressed the procurement process, which the Treasury Board described as “too inflexible” and “not conducive to cooperation.” BDP evolved as a response to this Treasury Board framework. BDP focuses on the big picture—the overall desired outcomes—rather than on detailed project requirements. The traditional approach to procurement in a complex IT project was for an organization to spend months, even years, developing a detailed requirement—thousands of pages of specifications to present to the private sector. However, according to Canadian officials, the specifications may be outdated before they are complete or out of step with the latest technology, and the organization’s goals may even have changed during the long, drawn-out process of developing them. Finally, according to these officials, the private sector may know the project is not feasible, but may present bids anyway in order to obtain work. As a result, these officials said, the traditional approach tends to be too lengthy, rigid, prescriptive, and costly in terms of the time and human resources that have to be dedicated to each project. BDP attempts to address this problem by asking the private sector to deliver certain agreed-upon results rather than follow a government blueprint with detailed specifications. The private sector is also invited to submit ideas on what sort of project should be undertaken before a formal request for proposals is issued.
Another feature of BDP is that it is to incorporate rigorous up-front planning to remove potential problems in the procurement process. Both the front-end planning and the management of the entire acquisition life cycle are based on four elements: a business case, risk analysis, clear delineation of accountabilities, and a compensation structure closely tied to the contractor’s performance. Under BDP, a department is to prepare a business case justifying the project at the highest level and identifying the outcomes and benefits to be achieved. The business case is also to look at such issues as how much the project will cost, whether there are cheaper ways to realize goals, and what the benefits will be to the taxpayer. According to Canadian officials, under traditional methods, a project was often launched prematurely, without a determination of whether it was really needed or fit with the organization’s long-term goals, and without the support of top management. Risk analysis is used to identify what could go wrong with the project and how to deal with the consequences. The goal is essentially to minimize risk and be prepared with contingencies for containment when problems do arise. Departments are to conduct risk assessments not just during the planning phase, but also throughout implementation. This helps ensure that projects with little chance of success are cancelled or modified as early as possible. Delineation of accountabilities is used to protect the client department from service delivery problems by identifying the specific risks each party assumes. For example, if the contract states that the supplier is responsible for delivering a certain outcome, or level of service, by a certain date but fails to do so, the supplier could be required to bear the cost of the delay. Relatedly, BDP encourages the use of a compensation structure closely tied to the contractor’s performance.
For example, bonus payments may be made for finishing earlier than expected. A nonmonetary incentive could be intellectual property rights to a technology developed under government contract. Although BDP has been used primarily in the IT area, PWGSC and Treasury Board Secretariat (TBS) officials told us they anticipated greater usage of BDP in other areas in the future. BDP is part of a broad framework for procurement reform that TBS was developing at the time of our review. These officials said, however, that BDP should not always be used; in many cases, traditional procurement approaches are quite appropriate. They said that rigorous up-front planning by departments is done to help identify the most appropriate method. More information on BDP can be accessed at the PWGSC and TBS Internet addresses, http://w3.pwgsc.gc.ca/ and http://www.tbs-sct.gc.ca/, respectively. The Private Finance Initiative (PFI) is a procurement reform approach being used by the United Kingdom to provide services requiring a major capital investment. Under a PFI project, the private sector finances the capital assets and the government or users pay for the services. Similar to what are commonly referred to as public-private partnerships in the United States, PFI contracts require the private sector to invest in, manage, and operate a capital asset necessary to deliver a defined level of service. By way of example, in a PFI project to fill the need for a highway, the government would pay for the service of having a fully maintained and functional highway instead of constructing and maintaining the highway itself. In the IT area, the government would seek an arrangement in which the private sector would meet a department’s IT needs through services, instead of having the government own and manage an IT system and software. 
An example where the user of the services, and not the government, pays for the service would be a privately financed bridge that is paid for directly by tolls from motorists using the bridge. The thinking behind PFI includes transferring the risks and rewards of ownership of an asset to the private sector, along with the need for capital funding. According to HM Treasury officials, it requires the government not only to consider whether it can afford to pay for the capital asset, but also to rigorously weigh the long-term financial implications of the asset being properly maintained and operated over a period of, frequently, 25 years or more. PFI also aims to allocate procurement risks more appropriately between the supplier and the government customer to ensure that the risks rest with the appropriate party. For example, in a construction project, the risk of delay or cost overruns would rest with the private sector supplier. Alternatively, the risk of changes in legislation that could affect the contract would rest with the government customer, who is better placed to influence any such changes. Under PFI projects, the government believes it can also potentially benefit from the innovation and skills of the private sector. By concentrating on the end service required, the government allows the private sector supplier to determine innovative ways of delivering the service—the government customer specifies the outputs but leaves the inputs to the supplier. To ensure value for money under PFI projects, the government customer is to demonstrate that the PFI method of procurement is likely to be better value for money than alternative means of supply—in particular, conventional means of procurement. This is to be done by comparing the quality and cost of the service required under the different alternative means of supply.
At the time of our review, the UK was at various stages of implementing 115 PFI projects for capital assets valued at about £10.9 billion (about $17.6 billion U.S., assuming that $1 U.S. = £0.62). These projects covered a wide range of services, including highways, prisons, IT services, and part of the Ministry of Defence’s (MOD) vehicle fleet. To improve the PFI procurement process and to assist departments and agencies with implementing PFI projects, HM Treasury had established a PFI task force. The task force sought to ensure that departments’ projects were not placed on the market until it was confident that the service was affordable, project teams had adequate resources, output specifications had been developed, and an acceptable risk allocation had been proposed. The task force issued guidance for carrying out PFI projects and also provided direct assistance to departments and agencies. At the time of our review, task force officials told us they had hired several individuals from the private sector to ensure that the government had high-caliber legal, financial, and other professional skills available to assist departments. HM Treasury’s guidance for PFI projects, as well as more detailed information on the PFI process, can be found at its Internet address that is dedicated to PFI, http://www.treasury-projects-taskforce.gov.uk. At the time of our review, MOD was in various stages of implementing 20 PFI projects for capital assets with a total estimated value of about £967 million (about $1.6 billion U.S.). These projects included services involving IT systems at the Army Logistics Agency and Army Training and Recruitment Agency, training for the Defence Helicopter Flying School, water and sewage at certain Royal Air Force (RAF) bases, and housing at two other MOD bases. Related to the vehicle acquisition and fleet area, two projects involved MOD’s “white fleet” of nontactical, administrative, and support vehicles.
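The dollar equivalents in this section follow directly from the stated exchange rate. A minimal sketch in Python (written for this review; the function name is our own) reproduces the arithmetic:

```python
def usd_from_gbp(billions_gbp: float, gbp_per_usd: float = 0.62) -> float:
    """Convert pounds sterling to U.S. dollars at the report's assumed
    rate of $1 U.S. = 0.62 pounds (as of May 2, 1999, rounded)."""
    return billions_gbp / gbp_per_usd

# 115 UK PFI projects: about 10.9 billion pounds
print(round(usd_from_gbp(10.9), 1))   # 17.6 (billion U.S. dollars)

# MOD's 20 PFI projects: about 967 million pounds (0.967 billion)
print(round(usd_from_gbp(0.967), 1))  # 1.6 (billion U.S. dollars)
```

The same formula, with the Australian rate of $1 U.S. = $1.51 Australian used later in this appendix, reproduces the Australian figures as well.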
The two projects were for white fleet vehicles used by British forces in Germany and a portion of the RAF white fleet vehicles in the UK. At the time of our review, MOD planned to expand this approach to include all of the white fleet vehicles in the British military. The white fleet PFI project had its origins in 1994, when MOD began a study to identify methods for encouraging greater commercial involvement in vehicle funding and management. The study concluded that the private sector had the capacity to take on the work and that MOD could realize financial and operational benefits by exposing the support vehicle fleet to industry competition. It was recognized that adopting a PFI approach to the supply of the white fleet would involve the private sector providing a complete vehicle service, not just the funding for a substitution of new assets. MOD believed that through PFI, it would be exposed to private sector innovation and management skills and would progressively benefit from the latest commercial techniques in vehicle management. MOD selected two portions of the white fleet to test the PFI approach: the support vehicles for British forces in Germany and a stratified group of RAF support vehicles in the UK. According to MOD officials, preliminary data showed that both of these projects were delivering efficiencies in terms of the number of vehicles operated and the cost of operation. At the time of our review, MOD officials estimated that in Germany, savings of 27 percent were being realized compared to in-house operation; in the UK, savings under PFI were estimated to be about 15 percent. In both cases, MOD believed it was getting a newer and more reliable fleet, along with a better information system and commercial fleet management expertise.
According to MOD officials, they drew on the experience gained from these two test efforts, did detailed research, and held extensive discussions with the leading service providers to develop a strategy for subjecting the rest of the white fleet to competition. Five regional projects covering approximately 12,000 vehicles were developed inside defined boundaries to cover all the requirements of all three services (Royal Navy, Army, and RAF). Each project covered the full range of vehicles in the white fleet and encompassed short- and medium-term needs. Under the PFI arrangements, MOD staff were to assess fluctuating vehicle needs, and the industry providers were to be under contract to meet all levels of demand, even on short notice. This was a departure from the traditional procurement philosophy MOD had in the fleet area, in which vehicle holdings were to be matched to anticipated maximum usage requirements. Instead, MOD would be paying for vehicle services as an alternative to owning and operating the vehicles itself. As of October 1998, the time of our discussions with MOD officials, the projects had been advertised and MOD had received expressions of interest and proposal outlines from a number of leading vehicle suppliers backed by international financial institutions. Binding bids were to be obtained by the early part of 1999, leading to the selection of a preferred supplier and detailed negotiations before projected contract signings in September 1999. Implementation of the contracts would then commence and would be completed in 6 months. To ensure value for money, MOD had established an in-house cost benchmark that reflected the best of several other in-house options MOD had considered for managing the fleet. If the bids MOD received were above the in-house benchmark, MOD officials said, they would instead implement the in-house alternatives.
Also, MOD was planning to provide the in-house benchmark to industry to prevent providers from submitting bids if they could not provide the required service at a price below the in-house benchmark. The National Audit Office (NAO), our counterpart organization in the UK, reports to the British parliament on whether individual PFI projects represent good value for money. NAO supports the PFI concept because it offers, in appropriate cases, the prospect of improved value for money. NAO also recognizes that successful implementation will require well-thought-out innovation and risk-taking by public servants, which it supports. According to NAO officials, the merits of the PFI initiative will be judged on the success of individual projects. These officials said that, to date, some projects could have benefited from better up-front planning; as a result, it was questionable whether some projects were achieving value for money. On the other hand, other projects have represented clear value for money and other benefits to the government. Overall, NAO officials said that as time went by, public servants likely would become more adept at using the PFI approach, and the overall gains would be positive. Information on NAO’s findings for individual projects can be accessed at its Internet address, http://www.open.gov.uk/nao/home.htm. In April 1997, Australia’s Minister for Finance announced a major initiative to outsource, or contract for, the government’s IT infrastructure. The initiative was to cover everything from the large mainframe computers agencies operated to the equipment for over 140,000 desktop computers across the government. According to officials with the Office of Asset Sales and IT Outsourcing, this infrastructure had an estimated value of between $6 and $7 billion Australian (between about $4 and $4.6 billion U.S., assuming that $1 U.S. = $1.51 Australian).
Under the initiative, the government committed to achieving best value for money for its information technology dollar in order to support the delivery of services at the lowest cost to the taxpayer. This major announcement had its origins in 1996, when the government tasked the former Office of Government Information Technology (OGIT) with studying the potential savings that could accrue through consolidation of systems and potential outsourcing. The resulting study indicated that a very strong case for outsourcing existed and led to a decision that agencies should undertake extensive market testing. Potential savings of $1 billion Australian (about $662 million U.S.) were estimated over 7 years if agencies shifted to outsourcing. The April 1997 announcement reflected the government’s commitment to apply the outsourcing concept for IT across government, with the exception of some systems related to national security. In November 1997, IT outsourcing functions managed by OGIT were transferred to the Office of Asset Sales, which was renamed OASITO. OASITO was given responsibility for leading and managing the implementation of the initiative, with the aim of delivering savings, developing the Australian IT industry, and improving service delivery. The initiative had clear objectives related to the Australian IT industry. That is, OASITO was committed to ensuring substantial and sustainable development of the domestic IT industry, encouraging the industry to achieve a global focus, and assisting regional development and the creation of jobs. OASITO’s executive coordinator told us that in addition to the Australian IT industry, firms from overseas, including the United States, had been, and would likely continue to be, active participants in the initiative. OASITO was managing the initiative through a multi-year, phased process where agencies’ IT needs were being grouped together and the requirements offered to the private sector for bids. 
The government originally estimated that the initiative would be completed by mid-1999. At the time of our review, the schedule for outsourcing had to be adjusted to accommodate the capacity of agencies to prepare for outsourcing and the capacity of industry to absorb the requirements. OASITO was responsible for identifying the groupings and structuring them to maximize the benefits of outsourcing. It also provided guidance and assistance to agencies and managed the sequence and timing of the offerings to maximize competition. In addition, OASITO was the central coordinator for development of project documentation and was to oversee the financial evaluation of the offers. After contracts were signed, OASITO planned to remain involved to ensure that issues affecting the overall success of the initiative were effectively addressed. In close consultation with OASITO, each agency was responsible for defining its business and technical requirements, assisting with the evaluation of bids, participating in negotiations, and otherwise preparing the agency for transition to an outsourcing relationship and subsequent contract management. At each step of the process, agencies were expected to ensure that sufficient resources were dedicated to the process to enable the project timetable to be met. Agencies were also expected to implement strategies for internal matters, such as human resource transition, and to execute the change to the new operating environment. The government anticipated that IT professionals within government would transfer to the private sector, thus enhancing their skills and furthering their career opportunities. OASITO’s executive coordinator told us that the government undertook this initiative because of a belief that agencies should focus on their core missions and allow the private sector to perform government activities of a business nature. 
Further, he said that in meeting their IT needs through purchasing IT services, agencies, and the government as a whole, would also benefit from access to the latest technologies and current commercial expertise in information management.
Pursuant to a congressional request, GAO provided information on how foreign governments perform procurement activities that in the United States fall under the responsibility of the General Services Administration's Federal Supply Service (FSS) and Federal Technology Service (FTS). GAO noted that: (1) none of the countries had organizations that completely mirrored FSS and FTS; (2) Canada and the United Kingdom (UK) had the closest models in that they had organizations available to assist agencies in the procurement of supplies, vehicles, telecommunications, and information technology (IT); (3) however, these organizations had different features from those of FSS and FTS; (4) the two organizations in the UK differed from FSS and FTS because they were given more flexibility than traditional government departments in the personnel and financial areas; (5) Australia and New Zealand had very different models from the United States; (6) Australia had only an organization that performed activities similar to those of FTS, and its role in assisting agencies with the acquisition of IT systems and related services was minor; (7) New Zealand had no government organizations that performed activities similar to those of FSS and FTS because it sold its central procurement agency to the private sector several years ago; (8) this private sector business assisted government agencies with the procurement of supplies and did business only with the government; (9) GAO's analysis also showed that there were similarities and differences in the programs and policies these countries used in the procurement of supplies compared to those of FSS and FTS; (10) according to officials in these countries, procurement reform evolved over a number of years and was primarily influenced by a desire to rely more on the private sector to perform activities of a business nature so that government could operate more efficiently, improve its services, and focus on its core mission; (11) it is important 
to recognize that such factors as differences in political and economic environments, the role of social objectives in the procurement process, and the volume of contracting activity would have to be considered in a discussion of whether these approaches had applicability to FSS and FTS operations in the United States; (12) furthermore, some reforms were very recent, and performance data on the effectiveness of the various reforms were generally unavailable or were in the early stages of development; (13) consequently, GAO could not, from an overall perspective, gauge how well these reforms were working; and (14) nonetheless, officials GAO interviewed who were end-users of the procurement organizations and policies GAO observed said they were generally satisfied with the reforms and believed their governments were operating more efficiently than under old policies.
DCMA has undergone an evolutionary process to become the agency it is today. In 1990, DOD decided to consolidate and streamline contract administration services, which, at the time, were performed by the Defense Logistics Agency (DLA) as well as each of the military services. DOD made this change to achieve several benefits, including savings to the government through decreased overhead costs and increased efficiencies that would allow the elimination of thousands of DOD contract administration positions. As a result of these decisions, in 1990 the Defense Contract Management Command was formed as a command under DLA, with the responsibility of performing contract administration services that had previously been performed by DLA and the military services. In 2000, this command became DCMA, an independent agency no longer under the umbrella of DLA. As of June 2011, DCMA had approximately 10,900 staff, including roughly 10,400 civilians and 500 military. The FAR identifies 71 functions for which a contract administration office (such as DCMA) is generally responsible, including activities such as issuing contract modifications, reviewing and approving contractors’ requests for payments, performing production and engineering surveillance, ensuring contractor compliance with contractual quality assurance requirements, and maintaining surveillance of flight operations. A wide range of employees within DCMA perform these responsibilities, including ACOs, engineers, property administrators, quality assurance representatives, and government flight representatives. Government flight representatives, among other things, are responsible for approval of contractor test flights, procedures, and crew members, and for ensuring contractor compliance with DCMA guidance on contractor flight and ground operations. DCMA is assigned administrative oversight of a contract when delegated that authority by the procuring contracting office. 
Procuring contracting officers, who are responsible for awarding contracts, generally make the decision whether to retain some or all areas of contract administration or to delegate that authority to DCMA. When DCMA is delegated contract administration responsibilities for major programs, a memorandum of agreement is established between the program office that is buying the products or services and the CMOs. DCMA also relies on DCAA in executing some of its contract oversight responsibilities. For example, DCMA contracting officers can use DCAA audits to assist in determining whether a contractor’s business systems are adequate, although audit opinions can also be rendered by a licensed certified public accountant or persons working for a licensed certified public accounting firm or a government auditing organization. DOD has recently defined contractor business systems to include six systems, as shown in table 1. Whether a business system is determined adequate can affect the contracts between the government and the contractor. For example, cost-reimbursement contracts are to be used only when the contractor’s accounting system has been deemed adequate for determining costs applicable to the contract. If a contractor does not have an approved purchasing system, it is required to get the consent of the contracting officer before entering into certain subcontracts, such as cost-reimbursement subcontracts and fixed-price subcontracts over certain thresholds. The DCMA contracting officer is ultimately responsible for determining whether a contractor business system is acceptable. If the determination is made that a business system contains significant deficiencies, the contracting officer can withhold contract payments.
The percentage withheld may be reduced if a contractor submits an acceptable corrective action plan and the contracting officer determines, in consultation with an auditor such as DCAA, that the contractor is effectively implementing the plan. Recently, there have been concerns about the overlap of responsibilities between DCMA and DCAA in areas such as contractor business systems, proposal audits and findings, and forward pricing rate agreements. The Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) has worked with both DCMA and DCAA and has recently issued new policies and spearheaded changes to the DFARS to help clarify the roles of the two agencies. Operationally, DCMA performs its contract administration mission in two environments: (1) based out of its contract management offices and (2) in the contingency environment. As an agency, DCMA reports to the Under Secretary of Defense for AT&L, but for the purpose of its contingency contract administration responsibilities, DCMA also has an indirect line of reporting to the Chairman of the Joint Chiefs of Staff. CMOs are located domestically and internationally and are geographically based, plant-based, or specialized. Geographic CMOs provide oversight of contractors located within a specific geographic area, whereas plant-based CMOs are located within a specific contractor’s plant and their oversight is focused on that contractor and location. Specialized CMOs provide oversight of contracts focused on a specific type of product, such as aircraft propulsion, or aircraft overhaul, maintenance, modification, and repair. CMOs’ leadership can be either military or civilian. DCMA provides contract administration services and support to combatant commanders during contingency operations. In 2000, when DCMA became an independent agency, it was also established as a combat support agency for DOD.
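The withhold-and-reduction mechanism described above can be sketched as a simple calculation. This is an illustration only: the rates below are assumptions modeled loosely on the DFARS business systems rule, not figures stated in this report.

```python
def withhold_percent(num_deficient_systems: int, cap_accepted: bool = False) -> int:
    """Illustrative payment-withhold percentage for deficient contractor
    business systems. Assumed rates (not from this report): 5 percent per
    deficient system, reduced to 2 percent once an acceptable corrective
    action plan is being effectively implemented, capped at 10 percent
    overall."""
    per_system = 2 if cap_accepted else 5
    return min(per_system * num_deficient_systems, 10)

print(withhold_percent(1))                     # 5
print(withhold_percent(3))                     # 10 (capped)
print(withhold_percent(3, cap_accepted=True))  # 6 (reduced under an accepted plan)
```

The point of the sketch is the structure, not the rates: the withhold scales with the number of deficient systems, is capped, and drops once the contracting officer accepts and verifies a corrective action plan.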
As such, one of DCMA’s major roles is to deploy alongside warfighters to provide contingency contract administration services (CCAS). DCMA’s Combat Support Center manages DCMA’s CCAS support requirements. As the liaison between DCMA, the Joint Chiefs of Staff, and combatant commands, the Center develops and administers contingency policies for the agency and, in particular, manages CCAS deployments. Based on requirements from the CCAS commanders, the Center supervises the process of selecting, training, and deploying qualified DCMA candidates. The types and lengths of deployments are shown in table 2. Our past work and that of others have identified concerns with DCMA’s oversight in the contingency setting, but have also noted some positive outcomes. For example, over the past decade, we, the DOD Inspector General, and others expressed concerns that DCMA was not adequately staffed to provide sufficient oversight in contingency settings. Other contingency-setting issues included a lack of sufficient training for deployed staff and DCMA’s inability to determine its resource requirements. However, we also found that DCMA’s oversight in Iraq produced good results, reporting in 2004, for example, that DCMA contracting officers had eliminated unnecessary airfield services and identified equipment and materials that could be reused to reduce contract costs. Further, in 2008 we found that DCMA had made progress in increasing its oversight personnel in Iraq. DCMA has undergone significant shifts in its workforce, organizational structure, and policies and procedures over the past 10 years. After its formation in the early 1990s, DCMA’s workforce numbers declined, and there was significant erosion of some areas of expertise, such as the cost and pricing function. Ultimately, the workforce became so out of balance with workload after 2000 that the organization could not fulfill all of its oversight functions.
A shift to a substantially decentralized, customer-oriented approach was intended to mitigate the impact of this workforce imbalance, but resulted in a number of unintended consequences, such as inefficiencies in how work was done at the CMOs. In light of recent, significant workforce growth, DCMA is rebuilding its expertise in areas that had been depleted, instituting new, centralized policies and procedures and developing agency-wide performance indicators intended to gauge how well the agency is meeting its missions. DCMA officials told us that by 2004, the agency faced significant strains on its workforce, brought on by staffing and budget reductions that had been occurring since the agency’s formation. Senior DCMA officials said the workforce downsizing made sense for much of the 1990s because there were efficiencies to be gained when DOD consolidated its contract administration services into one agency and because DCMA’s workload was also decreasing for much of the decade. By the early 2000s, however, while DCMA’s total workforce numbers continued to decline, its workload—measured in obligations the government has incurred but not yet paid, also known as unliquidated obligations—started to increase. From a low point of about 9,300 in 2008, DCMA has steadily increased the size of its workforce and expects to reach about 13,400 total civilian staff by 2015—about a 43 percent increase from its size in 2008. Figure 1 depicts the fluctuations in DCMA’s workforce from 1993 to 2015. Figure 2 depicts the fluctuations in DCMA’s workload in terms of its unliquidated obligation balance from 1990 to 2015, including the growth over the past decade. To build and support its workforce, DCMA is using several sources of funding.
Based on DCMA’s data, about 78 percent of the civilian workforce is currently funded through operations and maintenance (O&M) funds, but a growing number of new employees are hired using funds authorized in Section 852 of the National Defense Authorization Act for Fiscal Year 2008, called the Defense Acquisition Workforce Development Fund. For example, in fiscal year 2011 DCMA reported it hired 1,221 new employees under this authority, a substantial increase from 166 hired in fiscal year 2009. These new employees include 3-year interns as well as journeymen, described by DCMA officials as individuals with extensive experience in a certain business area. Two particular areas of emphasis in building workforce numbers have been quality assurance and contracting (which includes cost and price analysis). As of December 2010, more than half of DCMA’s civilian workforce was employed in one of these two areas. Trends in these particular skill sets from 2005 through 2010 are depicted in figure 3. Further, from 2011 to 2016, DCMA estimates that these two job categories will continue to be the areas where the agency will experience the most growth in the number of positions. While DCMA is hiring many new people to fill out its workforce, it is also facing a large percentage of retirement-eligible employees. As of the end of fiscal year 2010, about 24 percent of the DCMA workforce was eligible to retire, and an additional 28 percent qualified for early retirement incentives. The large number of retirement-eligible employees makes DCMA vulnerable to the loss of valuable technical expertise and organizational knowledge. In part, DCMA plans to mitigate this risk through aggressive recruiting and knowledge management activities, such as bringing back retired annuitants to help raise the skill levels of the newer employees. Building workforce skills and expertise is just as important as increasing numbers of employees.
In addition to its precipitous drop in workforce numbers, DCMA had experienced an atrophying of some key skill sets. At the CMO level, one way DCMA is looking to build the expertise of its new employees is by changing the workforce structure. Specifically, CMO staff are organized in one of three functional areas: contracting, engineering, or quality assurance. Previously, CMOs were organized in multifunctional teams, with employees from different disciplines (e.g., an ACO, a quality assurance representative, an engineer, etc.) on one team and responsible for overseeing a certain number of contracts. With the new alignment, quality assurance representatives, for example, report to a quality team lead, and this team lead reports to a CMO-level quality director. Senior DCMA officials view this change as important for new employees’ skills, as they will be able to learn from supervisors with expertise in the same area. Following are some examples of how DCMA is rebuilding certain skill sets: A particular area of focus for DCMA is rebuilding its cost and pricing expertise, which had been depleted over time. For example, by the late 1990s DCMA was routinely combining the duties of its contract cost/price analyst positions with the duties of its contracting specialists; and at that time, the agency had lost the majority of its contract cost/price analysts. Loss of this skill set, according to DCMA, meant that many of its pricing-related contract administration responsibilities, such as negotiating forward pricing rate agreements and establishing final indirect cost rates and billing rates, were no longer performed to the same level of discipline and consistency as in prior years. As a result, DCMA reported that DOD’s acquisitions were subjected to unacceptable levels of cost risks.
In one recent example, a DCMA official told us about a case where an ACO, lacking support from contract cost/price analysts, had, for simplicity, incorrectly blended a contractor’s overhead rates rather than deriving separate rates for different areas (e.g., general and administrative, and manufacturing). DCMA has taken several steps to rebuild its cost and pricing capabilities: In 2009, DCMA created the Cost and Pricing Center, with a mission of developing and sustaining the agency’s expertise in pricing. DCMA officials said the center has helped to hire contract cost/price analysts for the CMOs. It also develops and conducts training for the growing DCMA contract cost/price analyst workforce. Over the last 2 years, DCMA reports that it has hired 279 new contract cost/price analysts and cost monitors, extensively using the Defense Acquisition Workforce Development Fund to do so. Currently, DCMA employs a total of about 400 contract cost/price analysts and cost monitors. Since 2008, DCMA has also concentrated on rebuilding its earned-value management (EVM) expertise through workforce increases and extensive training. DCMA has increased its workforce for its EVM Center, which was established in fiscal year 2000. Officials told us the workforce has grown from 5 or 6 in 2000 to 46 people in 2011, with plans to fill an additional 12 vacancies. In addition to its other responsibilities (such as overseeing the process for ensuring a contractor’s EVM system is validated), center officials provide guidance and direction to EVM specialists located at the CMOs and develop EVM policy in coordination with the CMOs.
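To see why blending overhead rates distorts costs, consider a short sketch with purely hypothetical numbers (the pool amounts, bases, and contract mix below are invented for illustration and are not drawn from DCMA or contractor data):

```python
# Two indirect-cost pools with different allocation bases (hypothetical figures).
ga_pool, ga_base = 200_000.0, 2_000_000.0    # G&A pool / base: 10 percent rate
mfg_pool, mfg_base = 900_000.0, 1_500_000.0  # manufacturing pool / base: 60 percent rate

ga_rate = ga_pool / ga_base      # 0.10
mfg_rate = mfg_pool / mfg_base   # 0.60

# Incorrectly "blended" single rate applied to all base costs:
blended = (ga_pool + mfg_pool) / (ga_base + mfg_base)  # about 0.314

# A contract carrying $400,000 of G&A-base cost and $100,000 of
# manufacturing-base cost:
correct = 400_000 * ga_rate + 100_000 * mfg_rate  # separate rates
blended_charge = (400_000 + 100_000) * blended    # one blended rate

print(round(correct), round(blended_charge))  # 100000 157143
```

Because this contract is weighted toward the low-rate pool, the blended rate overcharges it by more than 50 percent; a contract weighted the other way would be undercharged by a similar mechanism.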
Approximately 500 industrial specialists, located in the CMOs, are responsible for determining whether contractors have the manpower, machinery, materials, and methods to meet contract requirements; overseeing contractors’ manufacturing processes to track progress in meeting contractual delivery dates; and notifying buying commands if the contractor might not meet those dates. There had been a substantial decline in the number of industrial specialists at the agency, but the number has started to grow again. While senior DCMA officials would like industrial specialists to spend more time “on the shop floor” at contractor facilities to gain an understanding of the root causes of scheduling delays, we found that they were not consistently doing so at the CMOs we visited. Several of the industrial specialists we spoke with primarily remain at their workstations monitoring contractor production schedules electronically, checking the accuracy of data entered into DCMA’s contracts database, or assisting as needed with technical reviews of contractor proposals. Senior officials acknowledged that the focus of industrial specialists over time has shifted away from some of their more important tasks, such as performing on-site surveillance of contractor facilities. Workforce changes within DCMA have contributed to this shift. For example, DCMA procurement technicians traditionally performed routine administrative functions such as entering and maintaining contract data, but this role has been understaffed, resulting in more senior personnel, such as industrial specialists, performing such functions. DCMA is currently taking steps to rebuild the industrial specialist function by hiring more personnel, developing a new manufacturing and production policy, and upgrading training. 
DCMA has identified ongoing concerns with its ability to effectively carry out its quality assurance responsibilities because of workforce size and capability shortfalls, increasing the risk to the warfighter and the taxpayers. For example, DCMA reported an increase in customer complaints in the form of reported quality deficiencies in products. To address the issues related to its quality assurance capabilities, DCMA reports it is, for example, defining certification training for its quality assurance personnel. DCMA also reports it is moving towards standardizing position descriptions as a way to establish consistent expectations for its quality assurance workforce. By the mid-2000s, when, according to senior officials, DCMA did not have the workforce to fulfill its mission, it undertook a shift in its operations in an attempt to focus on the areas of most importance to its customers—the DOD program offices. This approach included a CMO re-alignment to mirror the organization of DOD program offices and a heavily decentralized approach to DCMA’s policies and procedures. In the fall of 2005, DCMA re-aligned its CMOs under four product-oriented divisions: aeronautical, space and missiles, ground vehicles and munitions, and naval systems. In addition to providing a structure more in line with its DOD customers, the alignment was intended to improve the technical expertise of DCMA staff in these particular product areas. At the same time, DCMA implemented what it called performance-based management, wherein CMOs gauged their success based on metrics reflecting their contributions to outcomes of importance to their customers. In fact, the memorandums of agreement between CMOs and the DOD program offices were designed to hold the CMOs accountable to the program office for such things as reducing the number of functional defects of a product or ensuring that a component was delivered on time.
For example, a 2008 memorandum of agreement with a program office purchasing heavy payload tactical trucks was designed to hold the CMO accountable for reducing the number of functional defects on the vehicles. In another example, a CMO committed to a variety of customer outcome performance standards with its Air Force customer that was purchasing unmanned aerial vehicles, including zero-defect products and timely product and shipment delivery. According to DCMA officials, thousands of metrics flourished at this time, which some officials noted were too many. DCMA also embarked on a substantially decentralized approach to its policies and procedures, again with the intent of becoming more customer-focused. As a key example, DCMA rescinded its compliance and procedures manual for the agency’s required contract management functions—known as the “DCMA One Book.” Contents of the manual that were still deemed required—reportedly a small portion—became a DCMA instruction, and the rest was considered to be guidance and not mandatory for CMOs to follow. The intent of the change was to allow more flexibility for the CMOs to modify existing processes and explore new ones to better support their own customers’ expected outcomes and objectives. However, officials from some CMOs we visited said the loss of the “DCMA One Book” resulted in loss of consistent agency guidance and procedures, with one official characterizing this situation as a “free for all.” Ironically, this focus on providing CMOs the flexibility to meet their customers’ needs as well as the absence of specific guidance and procedures resulted, according to DCMA officials, in a level of confusion among their program office customers. Other unintended consequences included concerns on the part of DCMA that it had shifted too far toward focusing on the customer. 
Relatedly, the decentralized nature of DCMA guidance led each product division to develop and execute its own policies and provided CMOs the leeway to develop additional policies and procedures to respond to their own customers’ needs. This led to inconsistent oversight and surveillance activities among CMOs. Another unintended consequence was inefficiencies in how CMOs operated. For example, CMOs in close proximity but under different product divisions sometimes did not share resources or expertise and thus did not leverage their workforces to help meet workload surge requirements. To address these unintended consequences and in light of its new and growing workforce, in 2009 DCMA undertook a number of changes to its organization, procedures, and policies. Rather than being aligned under the four DOD product areas, the 40 CMOs are now aligned under three regional commands, as shown in figure 4. According to DCMA, while the product-division alignment allowed for a strong customer focus, going back to a regional alignment has permitted more efficiencies among the CMOs by facilitating consistent execution of policy and tools throughout each region. DCMA is also updating and developing centralized instructions and procedures to help regain consistency among the CMOs and to help ensure the agency meets all of its contract administration responsibilities. Since 2009, DCMA has issued more than 40 instructions and over 100 modifications or revisions to instructions. For example, in November 2010, DCMA revised in its entirety its Major Program Support Instruction. The purpose of major program support is to provide DCMA customers and internal management with timely analysis and insight to prevent and/or resolve a program’s cost, schedule, or performance problems, and the instruction provides guidance on how DCMA will accomplish these objectives. 
DCMA has also updated three EVMS instructions since November 2010, and its quality assurance policy implementation includes 22 new instructions issued since March 2009. While some CMO staff told us the plethora of new, centralized guidance and instructions can be overwhelming at times, several also indicated they were pleased to be moving away from the prior, decentralized model where they were largely left to their own devices. In 2009, DCMA also began to shift from a focus on customer-based metrics to using performance indicators intended to gauge how well the agency is meeting its missions. The agency currently has 122 indicators in place, addressing contractor supplier-base issues and DCMA processes, workload, and resources. DCMA officials noted that these indicators are reviewed for trends and to help identify root causes of any problems. For example, DCMA officials explained that the performance indicator related to contract closeouts showed a marked decline in timely closeouts over the last few years, indicating a major problem. Further analysis showed a severe resourcing problem in the two CMOs responsible for nearly half of all contract closeouts. DCMA identified a need for greater training on contract closeouts; after implementing a training program, the indicators revealed that the timeliness of contract closeouts had improved. In some cases, DCMA is looking to improve performance indicators to ensure they are motivating performance in the way the agency intends. For example, a primary performance indicator for industrial specialists involves prediction of schedule delays, which, according to a DCMA official, encourages industrial specialists to track schedules from “behind their computers,” rather than on the shop floor, where DCMA senior officials would like them to spend their time. Senior DCMA officials acknowledged that they are still in the process of reassessing the indicators.
Additionally, DCMA is evaluating which indicators need to be reviewed at the headquarters level. DCMA also takes steps to identify and rectify workforce imbalances through its workload allocation processes. The headquarters directorate holds regular workload and resourcing sessions with each regional command and the CMOs under its purview to evaluate CMO workload requirements. DCMA officials expect these sessions to be important for making resource allocation decisions across the CMOs. In addition, headquarters officials had conducted resource reviews to identify the positions, by job series, required at each CMO based on current and future workloads and on the CMOs’ performance. However, because of fiscal year 2011 funding constraints, the resource reviews have been put on hold. While the overall requirement for support of contingency operations has increased fourfold over the past 5 years and the portion of that requirement shouldered by DCMA staff has more than tripled since 2007, the number of DCMA staff deployed remains relatively small compared to the size of the agency workforce. CMO officials told us it is difficult to isolate the impact of CCAS deployments on overall CMO performance from other resource constraints DCMA faces. Nevertheless, the officials identified a number of ways deployments impact domestic operations—including some instances of work being delayed or not completed—and identified a variety of approaches they use to manage workload given the deployments. A number of CMO leaders deploy, in part because a high proportion of them are military, and these deployments can have a significant impact on the operations of a CMO. Also, DCMA civilians may deploy multiple times, and CMO officials report they had little notice to plan for these deployments.
To minimize the impact of civilian deployments, DCMA has established a corps of 250 emergency-essential (EE) personnel hired specifically to support the contingency mission, but CMOs report management challenges with using these resources. Based on requirements agreed to by the Joint Chiefs of Staff, combatant commands, and DCMA, the DCMA CCAS mission in Iraq, Afghanistan, and Kuwait currently requires an in-theater presence of 450 personnel. This number represents more than a fourfold increase from the July 2007 deployment number of 83 and includes a recent, 80-person increase for an enhanced presence in Afghanistan as well as support for the Department of State following the expected withdrawal of troops in Iraq in December 2011. From 2001 through 2008, DCMA had a small, clearly defined role administering the Army’s Logistics Civil Augmentation Program support contracts. In 2007, an independent commission recommended significantly expanding DCMA’s in-theater role. The Commission’s report concluded that the Army’s workforce was inadequately staffed, trained, or structured for handling contract management in Iraq and Afghanistan, and as a result, the Army reassigned contract administration to DCMA for contracts involving delivery of supplies and services in these two countries. As of July 2011, DCMA was deploying 272 of its own people—approximately 2.5 percent of the workforce—with the balance of the 450 total CCAS requirement filled with contractors and DOD military service personnel. The portion of the current CCAS requirement shouldered by DCMA staff, when compared to 2007, has more than tripled. Figure 5 shows the upward trend in CCAS deployments over the past 5 years and the types of personnel deployed. The conclusions from the Commission on Army Acquisition and Program Management in Expeditionary Operations were instrumental in reaffirming the role of DCMA in the execution and oversight of all contracts in support of contingency operations.
Subsequently, however, a DOD task force proposed transferring the majority of CCAS responsibility to the military services by 2013. According to a DOD review team, some CCAS participants are of the opinion that the military services do not possess, nor can they master, the necessary core competencies to assume CCAS responsibility by that time. In terms of the type of work these personnel are performing in-theater, nearly three-quarters of the required contingency positions are in the areas of contracting and quality assurance. Further, the need for these two areas of expertise in-theater has grown dramatically from 2007 to 2011, with the requirements for contracting positions increasing from 20 to 144, and the requirements for quality assurance increasing from 20 to 182. Several of the CMOs we visited report that it is difficult to isolate the impact of CCAS deployments on overall CMO performance from other resource constraints DCMA faces. DCMA conducted an analysis of the relationship between other agency performance indicators (e.g., percentage of contract closeouts completed and percentage of completed quality surveillance plans) and CMOs with high proportions of CCAS hours in fiscal year 2010, but this analysis showed no discernible correlation between high CCAS hours and CMO mission performance. While DCMA is working to develop performance indicators to assess the future impact of CCAS on the agency’s domestic mission, these indicators are not yet fully implemented. Nevertheless, several DCMA officials we interviewed believe that CCAS deployments have a definite, constraining impact on the agency’s domestic mission, and CMO officials identified specific examples of how their operations are affected by deployments. They cited delays in quality assurance response times, for example, and noted that audits of a contractor’s processes and contract closeout activities have been delayed or not done.
The officials provided numerous other examples, including: In the contracting area, at one CMO, contract receipt and review and funds cancellations were delayed when a key person deployed. The CMO officials affirmed that activities were still performed, but took longer than usual to complete or the quality of the work was lower than customary. In the quality area, a DCMA internal review team found that since quality assurance representatives must focus first on conducting necessary inspections, other functions—such as completing documentation, reviewing low- or medium-risk processes, and performing data analysis—were suffering. CMO officials identified a number of ways they manage the workload when someone deploys, such as adjusting workloads of the remaining staff, granting overtime and compensatory time, and implementing staggered work shifts. They also reported backfilling positions with temporary hires, seeking temporary promotions for CMO staff, bringing back retired annuitants or reservists, or hiring permanent replacements. In some cases, CMO officials said they had temporarily assigned staff to other locations. CMOs commonly use a risk-based approach to ensure that what they view as the highest priority or most critical work is completed first. For example, a team leader might focus on getting the mission work done but may not have time to mentor staff. Officials in one of the regional commands said they have to take care that tasks such as inspection of items critical to safety and mandatory government inspections are performed first. Lower-priority items often will be deferred, such as contract audit follow-up and contract closeout. In another instance, when an industrial specialist volunteered to deploy, CMO officials were able to come to agreement with the customer that schedule surveillance could be conducted less frequently, because the contractor typically made deliveries ahead of schedule.
CMOs we visited noted that the impact of CCAS deployments on CMOs varies based on the type of deployment (civilian volunteer or EE), deployment of CMO leadership, and rates of deployment at the CMO. CMO officials told us they often cannot plan for civilian volunteer deployments because of short notice of the impending deployment (usually issued by the Combat Support Center 60-90 days in advance), which creates challenges in backfilling the position. In addition, once selected, the volunteer’s time available to the CMO before deployment can be curtailed by more than a month because of requirements for training, medical checks, and other pre-deployment activities. The impact on CMO workload is magnified when civilian volunteers extend their deployments—which happens frequently—or deploy multiple times. For example, in 2010, 55 civilian volunteers requested an extension of their deployment, and only 4 were denied. CMOs we visited reported different challenges in relation to EE personnel. First, some said that EEs spend so little time at their CMO that they cannot be used effectively. The EE workforce, during its initial 3-year commitment, deployed for 6 to 9 months, and then returned to a home CMO for 6 to 9 months before deploying again. In one situation, a CMO commander placed an EE in an ACO position, but lost that person when he was denied a request to defer deployment, highlighting the challenges of using EEs in key positions or assigning them significant levels of responsibility. Second, CMO commanders noted that they have little say in the selection and deployment of EEs; some CMOs have a relatively high concentration of EEs—about 6 percent of the CMO’s workforce in two cases. According to senior leadership, DCMA now realizes it needs to improve its management of the EE placement process and has begun targeted hiring in areas where there may be a large untapped skill base of potential EEs.
Third, officials said they had unanticipated challenges as a result of the temporary promotions provided to deploying EE personnel. These temporary promotions were used as one of several means to incentivize potential hires, at a time when DCMA needed to quickly increase numbers to meet expanding requirements. However, DCMA officials reported that when these staff returned to their home CMO, they had adjusted to the higher salary and the associated work, but often, corresponding higher-level positions were not available at the CMO. DCMA officials stated that temporary promotions for EEs have been discontinued, noting the temporary promotions were not cost effective and that the CMO work did not always justify the higher grade. Deployments can have an especially significant impact when they involve a CMO’s leadership. A number of CMO leaders deploy, in part, because a high proportion of them are military. Specifically, according to senior DCMA officials, nearly half of its CMO heads and deputies at domestic CMOs are military, and all O-5 military commanders stationed with DCMA in the United States are scheduled to deploy for one year sometime within their 3-year tour with DCMA. Officials at several CMOs commented that losing leadership is difficult and challenging, resulting in deputies taking on the role of CMO leader, with other personnel then being detailed or temporarily promoted to Acting Deputy. Deployment of commanders in the middle of their tours can be particularly difficult, according to another CMO official, because a commander often requires 2 to 3 continuous years in a leadership position to implement new initiatives, and an interruption can result in loss of momentum for change and improvements. 
While DCMA endeavors to have senior military personnel complete their deployments as close to the beginning of their DCMA tours as possible—ideally leaving them at the CMO for the final 2 years of the tour—some senior officers nevertheless deploy in the middle of their tours, resulting in interruptions and a lack of continuity within the CMO. As an example, CCAS deployments had a considerable impact with respect to leadership at one CMO. Deployment of the commander in 2010 resulted in turnover of the commander’s position five times in the following 16 months, during which time a series of replacements was appointed for a variety of performance and operational reasons. According to CMO officials, instability in the leadership at this office contributed to morale and performance shortfalls that were exacerbated by significant growth in new program requirements and significant contractor quality issues at that site. Overcoming these issues required extensive temporary duty costs to split a commander between two sites and personal sacrifice and hardship for the entire leadership team. Some CMOs have higher rates of deployment than others, which leads to a disproportionate impact of deployments. According to DCMA data, CMOs’ hours dedicated to CCAS in fiscal year 2010 ranged from 10.2 percent of total workforce hours to 0.6 percent. For example, in the last 2 years, a total of 55 people—or about 11 percent of the total workforce—deployed from one CMO, of which 37 were civilian volunteers. In contrast, officials at another CMO reported that only 4 of their 202 employees were deployed in the last 2 years. The reasons some CMOs have higher rates of deployed personnel vary: a high concentration of EE personnel at a CMO, a high proportion of military personnel, or large numbers of motivated civilians with skills that are in high demand in contingency situations, such as contracting or quality assurance.
CMO officials also told us that deployments disproportionately impact some of their suboffices with small numbers of staff or in remote areas. For example, officials at one CMO told us a suboffice had 2 of 10 quality assurance staff deployed simultaneously. To fill the gap, personnel from the CMO had to drive to the suboffice—a distance of over 400 miles. Officials said that the level of quality assurance suffered, being limited to only the minimum required inspections. In reaction, some DOD program offices sent their own technical people to assist in this work; in other cases, shipments were delayed. DCMA senior officials told us that they are trying to support CMOs that may be hit harder than others by considering delays or waivers for CCAS assignments if needed—but these situations have to be balanced against the high priority of the CCAS mission. DCMA has taken steps to mitigate the impact of deployments on individual CMOs. For example, it allows individuals to request deployment waivers, but few requests are made. From the start of 2010 through June 2011, DCMA employees submitted 21 waiver requests. While 19 of these requests were approved, most (14) were because of medical or family emergencies or significant personal hardship; 5 were granted because of a significant mission impact to the CMO organization. Officials say that a high bar is set for granting waivers because of the CCAS mission, and that supporting the warfighter has a very high priority at DCMA. The agency has also lengthened deployment time frames to reduce their frequency. Military deployments have been increased from 6 months to 9 months, and EE deployments are in the process of being extended from 179 days to 12 months. DCMA has also begun a CCAS track for third-year DCMA interns, intended to help meet increasing CCAS deployment requirements by enlarging the base of eligible civilian volunteers.
In addition to the impact of contingency deployments, other factors present risks to DCMA’s ability to execute its domestic oversight and surveillance mission. A key external risk to DCMA’s ability to effectively carry out its responsibility to determine the adequacy of defense contractor business systems comes from delays in obtaining audits from DCAA. We also found that DCMA contracting officers maintained their determination of many contractor business systems as adequate despite the fact that the systems had not been audited by DCAA in a number of years—in many cases well beyond the time frames outlined in DCAA guidance. Another potential risk for DCMA is a recent DOD policy change that increased the dollar threshold at which DCAA will conduct certain audits; as a result, DCMA’s own pricing workload will increase. In addition, DCMA must manage two sources of internal risk. First, some CMOs are uncertain how newly hired personnel using the Defense Acquisition Workforce Development Fund, and EEs hired under Overseas Contingency Operations funds, affect CMOs’ authorized staffing levels and funding. Second, the agency faces a potential increased workload in oversight and surveillance of key suppliers as defense subcontracting grows. Contractor business systems and internal controls are the first line of defense against waste, fraud, and abuse on government contracts, and so the government is at greater risk of overpaying contractors if deficiencies exist in the systems. A role of some DCMA ACOs is to determine the acceptability or adequacy of business systems for contractors under their purview. While DCMA has additional resources to support assessment of purchasing systems, EVMS, and property management systems, one method it relies on to arrive at status determinations for contractors’ accounting, estimating, and material management and accounting systems (MMAS) is DCAA’s audits of each system.
DCAA policy establishes guidelines for how often contractor business system audits should take place, as shown in table 3. We examined the status of these three business systems for the 17 defense contractors responsible for programs included in this review, as provided by the cognizant ACO. We found a substantial number of systems that had not been audited within the DCAA time frames; 12 of the contractors had at least one system without a current and timely audit. For example, as of May 31, 2011, 10 contractors had not had an overall accounting system audit within the last 4 years, and 9 had not had an estimating system audit within the last 3 years. In one case, a contractor that has increased its government business more than sevenfold since 2000 has not had an overall accounting system audit since 1998, despite the ACO requesting that DCAA perform such an audit. Further, one estimating system audit and two MMAS audits have never been conducted because, according to DCAA and DCMA officials, DCAA has not had the resources available to perform them. For contractors where an audit was conducted, figure 6 illustrates the date of the last audits of the three business systems, relative to DCAA’s guidelines.

When an audit of a system becomes outdated, we found that the cognizant ACOs generally maintain their prior status determination even if it was made several years in the past. For example, the ACOs still termed as “adequate” or “approved” all 10 of the defense contractor accounting systems that have not been audited in the last 4 years—including the accounting system that has not been audited since 1998. And ACOs considered all but two of the estimating systems that have not been audited in the last 3 years “adequate,” “approved,” or “acceptable.” Some ACOs also told us that, when audits are outdated, a program office may need to rely on DCMA and DCAA’s more informal assessment of a business system’s status.
Officials with one DOD program office told us that while they were aware of the time that had passed since the last audit of business systems for the prime contractor, they continue to rely on the expertise of DCMA and DCAA to identify problems with the systems and oversee resolution. Some ACOs expressed concern that they did not have more up-to-date information with which to determine the status of the business systems, especially if they knew that a contractor had undergone significant change, such as rapid growth. Many expressed frustration at the lack of timely DCAA audit support and identified it as a significant impediment to their ability to assess the status of contractor business systems, particularly accounting and estimating systems. Further, most noted that their DCAA counterparts were unable to provide clear and firm time frames for when the next audits would take place. In some cases, ACOs reported that audits planned by DCAA for a given fiscal year were not completed and were instead moved to the next fiscal year or canceled. When business systems are not audited in a timely manner, the government is at increased risk of paying for unallowable and unreasonable costs, as a contractor’s cost structure or accounting procedures may change over time.

The Director of DCAA acknowledged that the agency has been behind on business system audits and that these audits had not been a top priority for fiscal years 2010 or 2011. He stated that DCAA has been focusing on addressing other priorities identified as higher risk with its limited workforce, such as support for overseas contingency operations contracts, reviewing contractors’ forward pricing rates prior to contract award, and incurred cost audits. A DCAA official stated that, compared to the resources expended, forward pricing rate audits have the greatest return to the taxpayer.
DCAA officials noted that they were still assessing which business system audits need to take place and that the agency has to balance this requirement with its other current priorities. They added that they have recently launched a pilot program to conduct corporate-level business system audits for major defense contractors, aimed at improved coordination of DCAA audits and resources. DCAA officials told us that for fiscal years 2011 and 2012, this pilot primarily involves overseas contingency operations contractors, but also includes one other major defense contractor. In addition, DCAA plans to build its workforce, expecting to hire approximately 250 more auditors by the end of fiscal year 2011.

Our recent work confirmed the challenges DCAA is facing in terms of its workforce and workload. In September 2011, we reported that while DCAA’s workforce grew by 16 percent from fiscal years 2000 to 2011, DOD research and procurement spending (an indicator of DCAA’s workload) increased by 87 percent. As a result, auditors have prioritized time-sensitive activities such as audits to support new awards. Further, in that report we found that DCAA’s initiatives to address contractor business systems will take several years.

Officials from the Office of the Secretary of Defense recognize the importance of contractor business systems and have taken some steps designed to improve their quality and transparency, but gaps in their approaches remain. For example, DCMA officials told us that DOD officials directed them to increase visibility into the status of business systems by developing a data repository for this information for use across the department. DCMA officials explained that this database, launched in March 2011, is intended to provide DOD buying commands and DCMA personnel more timely and accessible information on the status of defense contractors’ corporate and division level business systems.
ACOs can document the dates of the last business system audits in this database. However, because the system requires attaching documentation of business system status rather than entering that information as data, it does not give DCMA officials visibility across all defense contractors into the full extent and impact of audit timeliness problems. In May 2011, DOD also issued an interim rule containing changes to the DFARS that clarified the department’s definition of contractor business systems, delineated the roles of DCMA and DCAA with regard to systems oversight, and put in place processes for withholding payments from contractors with business system deficiencies.

Our findings are consistent with recent issues raised by others about the timeliness of contractor business system audits. The House of Representatives Committee on Armed Services expressed concern in May 2011 over DCAA’s personnel shortfalls and audit delays and the impact these might have on competition in DOD contracting. In September 2009, the Commission on Wartime Contracting in Iraq and Afghanistan also noted the challenge of determining the real-time status of contractor business systems because of staffing shortages at DCAA that reduce the timeliness of audits.

DCMA personnel will face greater responsibilities as a result of a recent policy change spearheaded by the Office of the Under Secretary of Defense for AT&L, aimed at freeing up DCAA resources to prioritize high-risk audit work. This change came in response to recommendations made by oversight organizations and guidance from AT&L to ensure that DCAA’s audit effort focuses on areas with greatest risk to the taxpayer and that it aligns workload requirements with available resources.
Effective September 17, 2010, and in response to a change to guidance for defense acquisition regulations, DCAA generally no longer performs audits on contractor cost-type proposals below $100 million or on fixed-price proposals below $10 million. As a result of this change, most pricing requests below these dollar thresholds will now be referred to DCMA. Although the new policy was developed in consultation with senior DCMA leadership, we found that in some instances, CMO officials were surprised by the change and expressed concern about implications for their workload. DCMA headquarters officials have conducted some analysis of how much work the agency might take on as a result of the threshold change. Based on data provided by DCAA and assumptions about how much work may be retained by either DCAA or DOD buying activities, DCMA estimates that it will receive approximately 1,250 additional pricing requests on contractor proposals in fiscal year 2011. DCMA officials told us that the agency plans to rely on newly hired contract cost/price analysts at the CMOs to shoulder this workload, even as those analysts are undergoing a significant amount of training to achieve their necessary certifications.

DCMA’s ability to conduct oversight and surveillance domestically may also be affected by how the agency responds to internal sources of risk. Our work identified two areas of potential internal risk for DCMA going forward: first, uncertainty among some CMO officials about the status of funding sources for new CMO personnel, and second, provision of adequate oversight of key suppliers in light of growing defense subcontracting. Some CMO officials are uncertain how newly hired personnel using the Defense Acquisition Workforce Development Fund, and EEs hired under Overseas Contingency Operations funds, affect their authorized staffing levels and funding.
In building its workforce, DCMA has made increasing use of the Defense Acquisition Workforce Development Fund for journeymen employees and entry-level interns. DCMA leadership noted that the agency is requesting increased O&M funding to convert these positions in the future. Some CMO leaders told us, however, that they were not sure they would have enough O&M-funded positions available to retain the journeymen and interns they had originally hired using the new funding source. CMO leaders told us they were monitoring attrition to make sure they have spaces for conversions, if needed. DCMA leadership explained that decisions about funding sources for personnel take place at the agency headquarters level, rather than at the CMO level; as a result, the mix of funding sources for a particular CMO may change over time but should not affect the number of positions at the CMO. Going forward, DCMA will continue to face the issue of ensuring adequate O&M funds to cover these positions.

We also found some confusion about the source of funding for EE personnel. According to DCMA headquarters officials, EE personnel are generally hired using Overseas Contingency Operations funds managed in a separate pool at headquarters. Because of this arrangement, EE personnel do not count against authorized CMO funding or manning levels—they are over and above those levels. Nevertheless, some CMOs and one of the regional commands we visited expressed concern that EEs take away staffing and/or dollars from the CMOs. For example, at the regional command we were told that EEs are paid out of O&M funds when they are working at the CMOs but are compensated from a separate pool of funds when deployed. Headquarters officials surmised that when the EE program was first initiated in 2008, O&M and Overseas Contingency Operations funds were mixed together for a short while and that there may be some lingering confusion as a result.
In its fiscal year 2010-2015 strategic human capital plan, DCMA identified internal communication as a weakness, and its employees have noted that they get incomplete and mixed messages because of inconsistent flows of information from the top to lower levels of the agency. DCMA officials cite the agency’s shift to a functional structure as a method for simplifying communication up and down the chain of command.

Our previous work has noted that prime contractors are subcontracting more work on the production of weapon systems, while concentrating their own efforts on systems integration. Based on some estimates, 60 to 70 percent of work on defense contracts is now done by subcontractors, with some industries aiming to outsource up to 80 percent of the work. We have also identified parts quality problems in DOD systems that were, among other issues, directly attributed to a lack of effective supplier management, with the costs borne by the government. Per DCMA policy, CMOs responsible for monitoring a prime contractor’s activities should exercise oversight and surveillance of that prime’s subcontractors through delegations to the CMOs responsible for those subcontractors. The amount of delegated workload varies across CMOs. DCMA leadership generally did not express concern about the amount of delegated work or its potential growth. However, leadership has noted the need for improved data to provide visibility into the supply chain so that DCMA can receive and communicate to customers earlier warnings that a subcontractor’s delivery might be late. For example, a contractor may be a prime contractor on one program and a subcontractor on another. A senior DCMA official told us that better data about performance on the prime contract could provide DCMA with insight into potential delays or other issues that may affect the program on which the contractor has a subcontract.
From the customer’s perspective, several program office officials noted that DCMA surveillance across key suppliers was of value to them. DCMA has acknowledged the need to address supply chain risks that may affect program cost and schedule, such as poor supply chain management by prime contractors that are subcontracting, by defining where those risks lie and influencing prime contractor oversight in those areas. To support these activities, DCMA plans to increase the size and quality of its supply management specialist workforce, including provision of training and certification and creation of development plans for supply management professionals. DCMA is also placing more supply management specialists at the CMOs and has tasked one of its divisions with providing policies, training, and tools to the supply chain management workforce. In addition, DCMA’s Industrial Analysis Center’s mission is to provide insight into the ability of the supplier base to support DOD programs.

Recovering from years of a seriously eroded workforce that left the agency unable to fulfill all of its missions has posed a significant management challenge for DCMA. It has taken several key steps—including reorganizing the agency, strengthening its guidance and procedures, and rebuilding areas of expertise—aimed at putting the agency on the path to successfully meeting its missions going forward. The issues we have raised regarding the impact of contingency deployments on DCMA and its responsibilities domestically can be expected to continue, as the agency’s contingency role is not expected to diminish in the near future. DCMA leadership is largely aware of the challenges in this regard and has indicated that steps will be taken to mitigate, to the extent possible, the impact on domestic CMOs. At the broader DOD level, the recent change to defense regulations is a positive step toward achieving better visibility into contractor business systems.
However, because we found consistent delays in the audit time frames for the business systems that require support from DCAA, higher-level attention is needed to mitigate the risk to the government of outdated business system audits. DCAA, because of workforce challenges of its own, is not at present able to fulfill its business system audit responsibilities and is not likely to be in a position to do so in the near term given its other priorities. Thus, the department needs to consider alternative methods to accomplish these critical audits in a timelier manner. Other factors we identified, however, are largely in DCMA’s control and can be addressed in the shorter term. In particular, DCMA’s practice of considering contractor business systems adequate even when they have not been audited or reviewed in a number of years may put taxpayer funds at risk by suggesting a system is sound when that may not in fact be the case. And the uncertainty on the part of CMO leaders about sustained funding for their new hires brought in under the Defense Acquisition Workforce Development Fund and the source of funding for EE personnel suggests that clearer communication is warranted.

We recommend that the Secretary of Defense work with DCMA and DCAA to identify and execute options, such as hiring external auditors, to assist in conducting audits of contractor business systems as an interim step until DCAA can build its workforce enough to fulfill this responsibility.

We recommend that the Director of DCMA take the following two actions: Identify ways to accurately and transparently reflect the current status of business systems, such as changing the status of a system to “unassessed” when a system has not been audited within DCAA’s time frames.
Clarify for the CMOs the specific plans for how O&M funding is to be provided to enable CMOs to continue supporting new hires brought in under the Defense Acquisition Workforce Development Fund and how EE personnel are funded when working at domestic CMOs, given the confusion regarding this issue.

DOD provided us with written comments on a draft of this report. DOD agreed with two of our recommendations and partially agreed with one. DOD’s written response is reprinted in appendix II. Regarding our recommendation that the department consider alternative approaches to audits of contractor business systems, DOD agreed to consider alternative approaches but did not elaborate with any planned actions or time frames. DOD also agreed with our recommendation that DCMA clarify for the CMOs how O&M funding is to be provided to enable them to continue supporting new hires brought in under the Defense Acquisition Workforce Development Fund, as well as how EE personnel are funded when working at CMOs. The response explained that DCMA has O&M funding and full-time equivalents in its fiscal year 2012-2015 fiscal guidance for the conversions and noted that DCMA is pursuing funding for future year conversions. It also clarified that EE personnel under the current 3-year program are funded by Overseas Contingency Operations funds wherever they are working, including at domestic CMOs. Given the confusion we found on these issues, we believe it is important that the Director of DCMA regularly share this funding information with the CMOs.

DOD partially agreed with our recommendation that the Director of DCMA identify ways to accurately and transparently reflect the current status of contractor business systems. The response outlined steps DCMA is planning to take, including issuing a new policy on contractor business system requirements and updating the agency’s existing data repository, to include adding data fields, to supplement current information.
DOD expressed concern that automatically changing the status of a previously “approved” system to “not assessed” solely because status determinations had not occurred within the specified time frames may adversely impact the department’s procurement process. The intent of our recommendation was not that all outdated business system assessments be automatically or retroactively changed to “unassessed.” Rather, we intended that DCMA determine how a more accurate status could be conveyed. The actions DOD has outlined, if implemented, should provide greater transparency and visibility into the status of the business system assessments.

We are sending copies of this report to the Secretary of Defense, interested congressional committees, and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Our objectives were to (1) assess how the Defense Contract Management Agency (DCMA) is positioning itself to meet its missions; (2) determine the extent to which contingency missions have impacted DCMA’s ability to provide oversight and surveillance domestically; and (3) identify other factors that may affect its capability to conduct oversight and surveillance domestically going forward. To conduct our work for each objective, we reviewed key documents, such as relevant sections of the Federal Acquisition Regulation (FAR) (e.g., FAR Part 42.3, Contract Administration Office Functions) and the Defense Federal Acquisition Regulation Supplement (DFARS) (e.g., DFARS 242.3, Contract Administration Office Functions).
We also reviewed DOD policies, such as the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics memorandums on Better Buying Power. We reviewed DCMA documentation, such as agency guidance and instructions; historical documentation related to DCMA’s organizational structure; workforce data (e.g., the number of DCMA staff in different job series); and information on contingency deployments (e.g., total requirements, documentation on the types of deployment, and waiver and extension requests). We reviewed Defense Contract Audit Agency (DCAA) documentation such as relevant sections of the DCAA Contract Audit Manual and audits related to contractor business systems. We also reviewed prior reports concerning DCMA, including our prior work as well as reports of the Commission on Wartime Contracting in Iraq and Afghanistan and others. Further, we interviewed DCMA officials at headquarters as well as some DCMA centers and divisions, including the Combat Support Center; the Cost and Pricing Center; the Industrial Analysis Center; the Manufacturing Engineering/Supply Chain Predictability Division; and others. To learn more about DCMA processes and procedures, we interviewed DCMA headquarters officials about agencywide initiatives such as performance indicators and resource reviews. We also interviewed senior officials at DCMA’s three domestic regional commands, and interviewed the heads of the Contract Management Offices (CMO) at 14 out of the 40 primary CMOs located across the country. We selected this nonprobability sample of CMOs based on a number of factors, including geographic location, obtaining a mix of CMO types (plant-based, geographic, and specialized), percentage of CMO hours spent on contingency contract administration services, and total contract dollar value at the CMO. The findings from the CMOs we visited are not generalizable to the population of all DCMA CMOs. 
Within the geographic and plant-based CMOs, we selected a nonprobability sample of one or two DOD weapons system programs (19 in total) to gather more detailed information about how DCMA provided support. The findings from these programs are not generalizable to all programs, but the programs were chosen to include large dollar values and to represent a range of DOD military services and contractors. For each program, we reviewed DCMA oversight documentation such as surveillance plans and memorandums of agreement between DCMA and the program offices. We also interviewed members of DCMA’s Program Support Teams for each selected program, including program integrators, administrative contracting officers, quality assurance representatives, engineers, industrial specialists, and others. To develop a more in-depth understanding of how DCMA provides oversight, we toured seven contractor facilities in relation to CMOs we visited. We also collected information on the status of contractor business systems related to each of the selected DOD programs and interviewed the DCMA administrative contracting officer responsible for oversight of those business systems. To gain their insights on DCMA oversight and surveillance, we also interviewed officials from eight DOD program offices and representatives from nine contractors, selected to obtain the perspectives of a range of military services and contractors. To develop an understanding of DCAA’s perspective on issues related to DCMA and DCAA, particularly oversight of contractor business systems and changes in DCAA’s thresholds for conducting pricing-related audits, we also interviewed senior officials at DCAA.

We conducted this performance audit from October 2010 to November 2011 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Michele Mackin, Assistant Director; Janet McKelvey; Robert Bullock; Virginia Chanley; John Krump; Suzanne Sterling; Roxanna Sun; and Peter Zwanzig made key contributions to this report.
The Defense Contract Management Agency (DCMA) provides contract administration services for DOD buying activities. Its contract management offices (CMO) work with defense contractors to help ensure that goods and services are delivered on time, at projected cost, and that they meet performance requirements. DCMA also supports combatant commanders during contingency operations. As DCMA recovers from years of significant downsizing, GAO was asked to (1) assess how the agency is positioning itself to meet its missions, (2) determine the extent to which contingency missions affect its oversight domestically, and (3) identify other factors that may affect its domestic missions going forward. GAO reviewed regulations, policies, and guidance, analyzed the status of contractor business systems for 17 defense contractors, and interviewed a wide range of DCMA officials.

After undergoing significant shifts in its workforce, structure, and policies and procedures over the past 10 years, DCMA has taken steps to rebuild its capacity. As the workforce declined, the agency experienced significant erosion of expertise in some areas, such as the cost and pricing function, such that it could not fulfill all of its oversight functions. A shift to a substantially decentralized, customer-oriented approach in the mid-2000s, intended to mitigate the impact of this workforce imbalance, resulted in unintended consequences such as inefficiencies in how work was done at the CMOs. DCMA has since begun to rebuild workforce expertise and has instituted new, centralized policies and procedures. The agency expects to reach about 13,400 total civilian staff by 2015—a 43 percent increase from about 9,300 staff in 2008. DCMA's military workforce has generally ranged between 500 and 600 in recent years. A growing number of DCMA's new employees have been hired using the Defense Acquisition Workforce Development Fund.
To help gauge progress in meeting its missions, the agency uses performance indicators for contractor supplier base issues and DCMA processes, workload, and resources. Agency staff deployed on contingency missions are small in number—272—when compared with the number of total DCMA employees, but several DCMA officials told GAO that deployments have a constraining impact on the agency's domestic mission. CMO officials identified examples of how their operations have been affected by deployments, such as delays in conducting timely quality assurance, audits of contractor processes, and contract close-out activities. The impact of deployments depends on the type of deployment or on certain features of the CMO; the timing of military leaders' deployments; and multiple or extended deployments of civilian volunteers. DCMA has noted support for the warfighter is a high priority for the agency, but it has taken steps to mitigate the impact of deployments, such as lengthening deployment time frames to reduce their frequency. To minimize the impact of civilian deployments, DCMA established a position for a corps of personnel to support the contingency mission.

Several factors may affect DCMA's ability to meet its missions going forward. One significant source of external risk stems from DCMA's reliance on the Defense Contract Audit Agency (DCAA) to conduct audits of certain contractor business systems. Business systems—such as accounting and estimating systems—are the government's first line of defense against fraud, waste, and abuse. Because of its own workforce struggles, DCAA has lagged in completing a number of such audits and is currently focusing on other high-priority areas. GAO found, however, that DCMA contracting officers maintained their determination of many contractor business systems as adequate despite the fact that the systems had not been audited in a number of years—in many cases well beyond the time frames outlined in DCAA guidance.
GAO recommends that DOD work with DCMA and DCAA to identify and execute options to assist in audits of contractor business systems. GAO also recommends that DCMA clarify for CMOs the agency's plans to continue funding existing workforce positions and that it identify ways to accurately reflect the status of contractor business systems, such as changing the status to unassessed when audits are delayed. DOD concurred with the first two recommendations. DOD partially concurred with the remaining recommendation but discussed several planned actions which, if implemented, should improve the transparency of system assessments.
The MHS operated by DOD has two missions: (1) supporting wartime and other deployments and (2) providing peacetime health care. In support of these two missions, DOD operates a large and complex health care system that employs more than 150,000 military, civilian, and contract personnel working in military medical facilities, commonly referred to as military treatment facilities (MTF).

In terms of the MHS organization and structure, OASD HA serves as the principal advisor for all DOD health policies and programs. OASD HA has the authority to issue DOD instructions, publications, and memorandums that implement policy approved by the Secretary of Defense or the Under Secretary of Defense for Personnel and Readiness and govern the management of DOD medical programs. In October 2013, the Defense Health Agency (DHA) was established to support greater integration of clinical and business processes across the MHS. The DHA manages the execution of policies issued by OASD HA, oversees the TRICARE health plan, and also exercises authority and control over the MTFs and subordinate clinics assigned to the NCR Medical Directorate. MTFs and their subordinate clinics are operated by either a military service or the NCR Medical Directorate. Neither OASD HA nor DHA has direct command and control of MTFs operated solely by the military services, which provide the medical personnel who administer medical programs and provide medical services to beneficiaries. The NCR Medical Directorate has direct authority over civilian providers and personnel working within its facilities; however, the military services maintain authority over all military providers and personnel working within NCR Medical Directorate MTFs. See figure 1 for the current organizational and governance structure of the MHS.

The Army and Navy each have a medical command, headed by a surgeon general, who manages each department’s MTFs and other activities through a regional command structure.
The Navy provides medical services for both Navy and Marine Corps installations. Unlike the Surgeons General for the Army and Navy, the Air Force Surgeon General exercises no command authority over Air Force MTFs; instead, Air Force MTF commanders report to local line commanders. However, the Air Force Surgeon General exercises similar authority to that of the other Surgeons General through his role as medical advisor to the Air Force Chief of Staff.

The NCR Medical Directorate was initially established as a DOD joint task force in September 2007 to operate DOD’s medical facilities in the national capital region—including Walter Reed National Military Medical Center, Fort Belvoir Community Hospital, and their supporting clinics. Civilian personnel were reassigned from the military services to the NCR Medical Directorate, while military health care providers remained within the appropriate military service’s command and control.

The MHS increased its overall mental health provider staffing level by 34 percent between fiscal years 2009 and 2013.
Specifically, DOD increased the number of providers across the MHS from 4,608 providers in fiscal year 2009 to 6,186 providers in fiscal year 2013. (See app. I for more information on fiscal year 2013 mental health provider staffing.) This increase was in response to a requirement in the NDAA for Fiscal Year 2010 that DOD increase its mental health capabilities. (See app. II for more information on the recruitment and retention of DOD mental health providers.) The type of mental health providers added to the MHS from fiscal year 2009 to fiscal year 2013 varied. (See fig. 2.) Specifically, social workers and psychologists were the most frequently added types of mental health providers during this period, while psychiatrists and mental health nurses were the least frequently added. The Army drove the overall increase in social workers and psychologists by adding 496 of the 705 social workers and 421 of the 559 psychologists to the MHS during this period. The Air Force added more social workers (64) than any other type of provider during this period, while the Navy added more psychologists (32) and other licensed providers (32). (See app. III for additional information on the breakdown of mental health provider staffing level changes for each military service from fiscal year 2009 to fiscal year 2013.) During this time frame, the composition of DOD mental health provider staff by employment category also changed. Across the MHS, the number of civilian mental health providers increased by 52 percent (1,129) and military mental health providers increased by 33 percent (479), while the number of contract mental health providers decreased by 3 percent (30). (See fig. 3.) The services’ individual changes varied, with the Army driving this systemic shift to civilian providers. Specifically, the Army added 863 new civilian mental health providers (a 50 percent increase), while decreasing the number of contract mental health providers by 153 (a 33 percent decrease). 
The Air Force also increased its civilian mental health provider staffing by 5 providers (a 2 percent increase) and increased its contract mental health provider staffing by 72 providers (a 39 percent increase). The Navy increased its number of military mental health providers by 113 (a 38 percent increase), added 12 civilian mental health providers (a 5 percent increase), and decreased contract mental health providers by 37 (an 11 percent decrease). While all three military services increased their mental health provider staffing from fiscal year 2009 to fiscal year 2013, the Army’s addition of 1,010 mental health providers represented the largest portion of the DOD- wide increase. The Navy’s increase of 88 mental health providers was the smallest portion of the DOD-wide increase. (See fig. 4.) DOD created the Psychological Health Risk-Adjusted Model for Staffing (PHRAMS) to show current and estimate future mental health provider staffing needs of the MHS. In fiscal year 2014, PHRAMS was used for a common purpose by the military services for the first time—the development of the fiscal year 2016 DOD budget request for mental health programs. However, the military services are either not using PHRAMS as the primary basis of their estimates of mental health provider staffing needs or supplementing their PHRAMS results with service- specific staffing methods. This limits DOD’s ability to consistently assess mental health provider staffing needs throughout the MHS. PHRAMS projects the number and mix of providers needed to meet the mental health care needs of the MHS. 
In fiscal year 2007, DOD contracted with a non-profit research and analysis organization to develop PHRAMS in response to recommendations from the DOD Task Force on Mental Health. These recommendations included that: (1) Congress fund and DOD allocate sufficient staff to provide a full continuum of mental health services to servicemembers and their dependents, and (2) DOD adopt a risk-adjusted population-based model to calculate mental health staffing needs. As of September 2014, the contract to develop and maintain PHRAMS had cost DOD $2 million, according to DOD officials. PHRAMS is designed to be a common DOD-wide model that can be used by the military services to assess current mental health provider staffing needs and forecast these staffing needs over a 5-year time frame. DOD intended PHRAMS to allow the Department to fulfill two goals: (1) assess whether or not there are enough mental health providers within the MHS to meet the increased mental health needs of servicemembers and their dependents that resulted from their experiences in recent conflicts, and (2) allow the Department to report the mental health provider staffing needs of the MHS to Congress. DHA and the PHRAMS contractor engage in an annual model review process to incorporate changes requested by the military services into the next version of the model. According to DHA officials, PHRAMS was used for a common purpose for the first time in fiscal year 2014—the development of the DOD fiscal year 2016 budget request for mental health programs. To assess current mental health provider staffing needs and determine 5-year forecasts of these needs, PHRAMS places MHS beneficiaries— including servicemembers, dependents, and other beneficiaries—into 40,500 individual risk groups based on unique combinations of eight risk factors. (See fig. 5.)
PHRAMS uses a number of mental health diagnoses in its calculations, including those in the following groups: (1) psychoses, (2) non-psychotic depressive disorders, (3) anxiety-related disorders, (4) neurotic disorders, (5) post-traumatic stress disorder, (6) adjustment reaction disorders (excluding post-traumatic stress disorder), (7) acute reaction to stress, (8) substance-induced mental disorders, (9) substance dependence, (10) non-dependent substance abuse, (11) psychotic disorders of childhood, (12) non-psychotic disorders of childhood, (13) schizophrenic disorder, (14) personality disorders, (15) disturbance of conduct not elsewhere classified, (16) other psychotic disorders, and (17) other non-psychotic disorders. PHRAMS also estimates the prevalence of a number of mental health-related events, such as (1) personal or family history of mental or psychiatric diagnosis, (2) mental or behavioral problem influencing health status, (3) specific mental health circumstances, (4) mental health examination and observation with no reported diagnosis, (5) mental health condition in a mother complicating pregnancy, (6) post-deployment health assessments and post-deployment health reassessments, and (7) other cases where the diagnosing provider is a mental health provider. PHRAMS uses these prevalence estimates to determine the demand for mental health services each risk group will place on the MHS. Number of appointments (encounter rate). PHRAMS calculates the number of appointments that will be needed to treat diagnosed beneficiaries within each risk group. To do this, the model applies predetermined encounter rates that specify how many times a beneficiary with each mental health diagnosis included in the model will interact with an MTF provider. Availability of MHS mental health providers.
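The demand-side arithmetic described above (the prevalence of a condition within each risk group multiplied by the encounter rate for that condition) can be sketched as follows. The populations, prevalence rates, and encounter rates below are hypothetical placeholders, not PHRAMS's actual parameters, and three groups stand in for the model's 40,500 risk groups.

```python
# Illustrative sketch of a PHRAMS-style demand estimate; all values are
# hypothetical stand-ins, not the model's real inputs.
risk_groups = [
    # population: beneficiaries in the risk group
    # prevalence: share of the group expected to carry a given diagnosis
    # encounter_rate: annual encounters needed per diagnosed beneficiary
    {"population": 12000, "prevalence": 0.10, "encounter_rate": 6.0},
    {"population": 3500,  "prevalence": 0.20, "encounter_rate": 9.0},
    {"population": 800,   "prevalence": 0.25, "encounter_rate": 14.0},
]

def annual_encounter_demand(groups):
    """Expected annual encounters: diagnosed beneficiaries in each group
    times the encounters each diagnosed beneficiary needs, summed."""
    return sum(g["population"] * g["prevalence"] * g["encounter_rate"]
               for g in groups)

print(round(annual_encounter_demand(risk_groups)))  # 16300
```

Summing expected demand per risk group is what makes the model "risk-adjusted": two populations of the same size but different risk-factor mixes produce different staffing estimates.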
PHRAMS then determines the number of encounters each MTF-based mental health provider can supply each year by multiplying the number of encounters that can be completed each hour (encounter time) by the total number of annual hours each mental health provider can spend supplying mental health services to beneficiaries (provider time). To establish the encounter rates used in PHRAMS, the PHRAMS contractor created a composite encounter rate for each mental health condition included in the model based on five inputs: (1) recommendations from a Navy work group, (2) recommendations from an Air Force work group, (3) information gathered from reviews of clinical literature, (4) information gathered from reviews of clinical practice guidelines, and (5) other interviews. Encounter rates are also adjusted based on historical data. Military and civilian mental health providers have different numbers of hours they can devote to encounters each year. For example, the default value in PHRAMS version 5 for military mental health providers’ clinical encounter time was set at 1,190 hours per year, while the default value for civilian mental health providers’ clinical encounter time was set at 1,399 hours per year. This difference accounts for the hours military mental health providers spend each year performing military-specific duties not related to beneficiary care and differences in assumed productivity for military and civilian mental health care providers. Although all military services agreed to use PHRAMS to generate their estimates of mental health provider staffing needs for the fiscal year 2016 budget request, the military services either did not use PHRAMS as the main basis for their mental health provider staffing estimates or supplemented PHRAMS results using other service-specific methods prior to submitting their fiscal year 2016 budget requests.
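The supply-side arithmetic in this passage (encounters per hour multiplied by annual clinical hours) can be sketched as follows. The 1,190 and 1,399 annual-hour defaults are the PHRAMS version 5 values cited above; the encounters-per-hour figure is a hypothetical placeholder.

```python
import math

# Default annual clinical encounter hours from PHRAMS version 5, as cited
# in the report. The encounters-per-hour value is an illustrative assumption.
ANNUAL_CLINICAL_HOURS = {"military": 1190, "civilian": 1399}

def providers_needed(total_demand, provider_type, encounters_per_hour=1.0):
    """Providers required to cover a given annual encounter demand,
    rounded up to whole providers."""
    per_provider = ANNUAL_CLINICAL_HOURS[provider_type] * encounters_per_hour
    return math.ceil(total_demand / per_provider)

# For example, 14,000 annual encounters at one encounter per clinical hour:
print(providers_needed(14000, "military"))  # 12
print(providers_needed(14000, "civilian"))  # 11
```

The sketch also shows why the military/civilian distinction matters: because military providers have fewer clinical hours available, covering the same demand with military providers requires more head count.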
Standards for internal control in the federal government state that agencies’ control activities should ensure that management’s directives are carried out. The military services reported making these adjustments because PHRAMS does not account for several factors that are crucial to their assessment of mental health provider staffing needs, specifically the following: Army. Army officials reported that they did not use PHRAMS as the basis for their fiscal year 2016 budget request and instead determined their mental health provider staffing needs through their legacy staffing model and adjusted PHRAMS to ensure it produced similar results. Unlike PHRAMS, which bases its mental health provider staffing estimates on beneficiary demand for services, the Army legacy staffing model uses historical workload data to estimate future staffing needs in multiple specialties, including mental health. The Army legacy staffing model uses projected beneficiary population changes to adjust the historical workload for Army MTFs up or down as needed. According to the Army official responsible for generating manpower estimates for Army MTFs, PHRAMS does not currently meet the needs of the Army. This is because PHRAMS’ assumption that all military services experience the same encounter rates for mental health conditions included in the model is an overgeneralization of mental health service demands. This official believes that this is particularly problematic for the Army because deployments are more traumatic for Army servicemembers and may result in some servicemembers requiring more than the average number of encounters. As a result, the Army only ran PHRAMS after the military service had already determined its mental health provider staffing needs through its legacy staffing model. Air Force.
According to Air Force officials, while the Air Force used some aspects of PHRAMS, it did not rely exclusively on PHRAMS to generate its estimates of mental health staffing needs included in the fiscal year 2016 budget request. These officials explained that PHRAMS was the first step in a three-step process used to generate the Air Force’s fiscal year 2016 budget request for mental health provider staffing. First, Air Force manpower staff ran PHRAMS and provided the PHRAMS-generated mental health provider staffing estimates to the Air Force’s mental health consultants for consideration. Second, the Air Force mental health consultants developed multiple staffing level proposals by combining the PHRAMS output with their own expertise and information received during conversations with Air Force MTF officials. Finally, the Chief of Clinical Operations for the Air Force Medical Support Agency selected the best staffing proposal among those submitted for review by the Air Force mental health consultants. Air Force officials reported that this process was applied because PHRAMS relies on data that is several years old and does not take into account all aspects of Air Force mental health provider staffing, such as mental health providers embedded in operational units. Air Force officials also explained that they plan to continue using this process in the future to generate mental health provider staffing estimates. Navy. Navy officials reported that they used PHRAMS, but supplemented PHRAMS estimates of mental health provider staffing needs with additional information. According to Navy officials, this was necessary because PHRAMS does not include estimates of mental health provider staffing needs on Navy vessels and for deployed Marine Corps units. As a result, Navy officials adjusted their PHRAMS output to account for these additional needs for mental health providers.
These officials explained that they relied on traditional methods—such as on-site industrial engineering reviews and industry standards—to calculate these operational requirements for Navy mental health providers. According to Navy officials, the fiscal year 2016 budget request submitted by the Navy for mental health provider staffing is the sum of the estimated staffing levels generated by PHRAMS and the calculated operational requirements for mental health providers. When we shared this information with DHA officials, they told us that they were unaware of specific supplemental or alternative methods used by the military services to determine their final mental health provider staffing estimates. However, these officials did note that the military services adjust their PHRAMS results by modifying certain aspects of the model, and DHA does not collect information on these modifications. DHA and the PHRAMS contractor review the model annually to incorporate changes requested by the military services in the next version of the model. Standards for internal control in the federal government state that information should be recorded and communicated to management and others within the agency that need it in a format and time frame that enables them to carry out their responsibilities. However, since DHA did not have access to this information on how the military services supplemented PHRAMS for their fiscal year 2016 budget request, this critical information was not included in this annual update process. As a result of the military services’ alterations to PHRAMS estimates of mental health provider staffing needs, DHA cannot consistently determine how beneficiary demand affects the mental health provider staffing needs for the MHS.
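The difference between PHRAMS's demand-based approach and the Army's workload-based legacy model, as the services describe them, can be illustrated with a simple sketch; all figures below are hypothetical.

```python
def demand_based_estimate(projected_encounters, encounters_per_provider):
    """PHRAMS-style: staffing follows projected beneficiary demand."""
    return projected_encounters / encounters_per_provider

def workload_based_estimate(historical_encounters, population_change,
                            encounters_per_provider):
    """Army-legacy-style: historical workload adjusted up or down by
    projected beneficiary population change."""
    return historical_encounters * (1 + population_change) / encounters_per_provider

# The two approaches can diverge when historical workload understates
# demand, for example when unmet need never appeared as completed encounters:
print(round(demand_based_estimate(16000, 1200), 1))          # 13.3
print(round(workload_based_estimate(13000, 0.04, 1200), 1))  # 11.3
```

The divergence the sketch produces is the consistency problem the report identifies: two services facing the same beneficiary demand can submit different staffing estimates depending on which method generated them.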
Specifically, due to the Army’s use of a workload-based staffing estimate, the resulting mental health provider staffing needs estimates submitted for the fiscal year 2016 budget process may not consistently reflect the beneficiary demand for mental health services across military services. In addition, without an accurate picture of the ways the military services altered or supplemented PHRAMS results, DHA cannot evaluate the role PHRAMS played in the development of the fiscal year 2016 budget request for mental health provider staffing and cannot ensure that it is directing the PHRAMS contractor to make the most appropriate changes to the model that minimize the need for these service-specific supplements. The military services submit quarterly reports to DHA through the OASD HA human capital office that include information on their current mental health provider staffing levels and should, as requested, include information on their future needs for these providers. However, the military services do not include reliable information about their mental health provider staffing needs on these quarterly reports, despite having access to PHRAMS since fiscal year 2010. As a result, DHA does not have an accurate picture of the mental health provider staffing needs of the MHS and cannot accurately report this information to Congress. Standards for internal control in the federal government state that information should be recorded and communicated to management and others within the agency that need it in a format and time frame that enables them to carry out their responsibilities. DHA requests information each quarter from the military services and the NCR Medical Directorate on mental health provider staffing in order to understand the MHS-wide use and need for these providers and report this information to Congress when requested. 
Each military service and the NCR Medical Directorate submits quarterly staffing reports to DHA through the OASD HA human capital office that include information on three areas of mental health provider staffing: (1) the number of mental health providers each military service needs to fulfill the needs of its beneficiaries, referred to as requirements; (2) the number of authorized positions each military service has for various types of mental health providers, referred to as authorizations; and (3) the actual number of mental health providers each military service has working within its MTFs and subordinate clinics that quarter, referred to as on-board providers. However, we found that the information reported is unreliable. Specifically, we found the following: According to DHA officials, only the Army submits information on the number of mental health providers its MTFs and subordinate clinics need to serve Army beneficiaries, and it derives these numbers from the Army workload-based legacy staffing model. DHA officials told us that the Navy and the Air Force do not track needs for mental health provider staffing. Instead, they submit the number of authorized mental health provider positions for both the requirements and authorizations sections of these quarterly reports. NCR Medical Directorate officials told us that the requirements section of their quarterly reports is populated using the staffing needs identified in the intermediate manpower planning documents that were created during the formation of the NCR Medical Directorate. According to DOD officials, the NCR Medical Directorate is currently reviewing the staffing needs of its MTFs and subordinate clinics and anticipates completion of this review by December 2014.
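A simple consistency check illustrates the reporting problem described above: when a service's reported requirements merely mirror its authorizations, the requirements field carries no independent information about need. The service names match the report, but the counts and field names below are hypothetical.

```python
# Quarterly reports carry three fields per service: requirements (providers
# needed), authorizations (positions authorized), and on-board providers.
# All numbers here are illustrative, not actual DOD figures.
quarterly_reports = {
    "Army":      {"requirements": 1500, "authorizations": 1400, "on_board": 1350},
    "Navy":      {"requirements": 600,  "authorizations": 600,  "on_board": 580},
    "Air Force": {"requirements": 700,  "authorizations": 700,  "on_board": 690},
}

def suspect_requirements(reports):
    """Flag services whose reported requirements exactly equal their
    authorizations, suggesting the field was copied rather than derived
    from a needs assessment."""
    return [svc for svc, r in reports.items()
            if r["requirements"] == r["authorizations"]]

print(suspect_requirements(quarterly_reports))  # ['Navy', 'Air Force']
```

A check of this kind cannot prove the requirements field was copied, but an exact match across every reporting period is the pattern the report describes for the Navy and the Air Force.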
Without reliable information from the military services and the NCR Medical Directorate on the quarterly reports, DHA cannot assess the need for mental health providers throughout the MHS and cannot ensure that it is providing assistance to the military services in meeting their unmet needs. The military services have had access to PHRAMS since fiscal year 2010 and the model could be used to assess the mental health provider needs of each military service and the NCR Medical Directorate on an ongoing basis. Incorporating this information into the requirements section of the quarterly reports each military service and the NCR Medical Directorate submit to DHA through the OASD HA human capital office would provide this important information to DHA. In addition, this information would also ensure greater consistency in the military services’ and the NCR Medical Directorate’s assessment of this aspect of mental health provider staffing and ensure greater accuracy in DOD’s reports to Congress about mental health provider staffing. While PHRAMS has been in development since fiscal year 2007, the military services only recently began using the model for a common purpose—the fiscal year 2016 DOD budget request. However, PHRAMS is not meeting its intended goals because the military services are not using it consistently to assess their mental health provider staffing needs. Instead, the military services are supplementing PHRAMS mental health provider staffing estimates with additional information. It is critical that the military services report how they have supplemented PHRAMS to ensure (1) that DHA and the PHRAMS contractor can correctly analyze and interpret the military services’ mental health provider staffing estimates, and (2) that PHRAMS is updated regularly to meet the needs of the military services. 
DHA is also unable to generate accurate reports to Congress on the staffing needs of the entire MHS, because the military services are not using PHRAMS to generate consistent mental health provider staffing needs estimates and are instead reporting unreliable estimates on their quarterly reports. DHA is therefore unable to assess and report on current mental health provider staffing needs. To ensure DHA can accurately and consistently assess mental health provider staffing needs across each of the military services, we recommend that the Secretary of Defense direct the Secretaries of the Army, Air Force, and Navy to take the following two actions: (1) require the medical commands of each military service to report any additional service-specific methods they use to determine their final estimates of mental health provider staffing needs; and (2) require the medical commands of each military service to include their estimated mental health provider staffing needs generated through PHRAMS in the requirements fields of DHA’s quarterly mental health staffing reports. We further recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to take the following two actions: (1) ensure that DHA, through the PHRAMS contractor, continues to refine PHRAMS to incorporate the needs of the military services and reduce the need for additional service-specific methods of determining mental health provider staffing needs; and (2) require the NCR Medical Directorate to include its estimated mental health provider staffing needs generated through PHRAMS in the requirements fields of DHA’s quarterly mental health staffing reports. DOD provided comments on a draft of this report, which we have reprinted in appendix IV. In its comments, DOD generally concurred with two of our four recommendations. DOD also provided technical comments, which we have incorporated as appropriate.
DOD concurred with our recommendation that the Secretary of Defense should direct the Secretaries of the Army, the Air Force, and the Navy to require the medical commands of each military service to report any additional service-specific methods they use to determine their final estimates of mental health provider staffing needs. DOD did not provide a time frame or action plan for implementing this recommendation. In addition, in response to our recommendation that the Secretary of Defense ensure DHA, through the PHRAMS contractor, continue to refine PHRAMS to incorporate the needs of military services to reduce the need for additional service-specific methods of determining mental health provider staffing needs, DOD said that DHA continues to serve in an advisory role to the military services to ensure that the next version of PHRAMS meets each service’s needs. DOD did not provide a time frame or action plan for implementing this recommendation. DOD did not concur with our recommendations to require the medical commands of each military service and the NCR Medical Directorate to include their estimated mental health provider staffing needs generated through PHRAMS in the requirements field of DHA’s quarterly mental health staffing reports. DOD stated in its comments that using PHRAMS in the requirements fields of these reports will not add value to the quarterly mental health staffing reports and noted that the military services do not use PHRAMS as the sole source of mental health requirements. We disagree with DOD’s conclusion and maintain that our recommendations should be implemented. The military services and the NCR Medical Directorate are not currently providing DHA with consistent information that it can rely on to: (1) make informed decisions regarding the MHS-wide usage and need for mental health providers and (2) develop reports to Congress based on this information. 
Specifically, only one military service—the Army—reports the number of mental health providers that its MTFs need to serve Army beneficiaries in the requirements field of DHA’s quarterly mental health staffing reports. The other two military services—the Air Force and the Navy—enter the number of mental health providers that were authorized by DOD for that fiscal year in the requirements field because they do not track mental health provider staffing needs. Additionally, the NCR Medical Directorate told us that it populates the requirements field of DHA’s quarterly mental health staffing reports with information that was created during the formation of the NCR Medical Directorate several years ago and not with the current needs of its beneficiary population. We believe that to adequately assess the need for mental health providers throughout the MHS, DHA needs to have access to consistent and reliable information on mental health provider staffing needs in the quarterly mental health staffing reports. By not supplying consistent information on mental health provider staffing needs generated through PHRAMS—a common staffing model all military services and the NCR Medical Directorate have access to—the military services and the NCR Medical Directorate make it difficult to properly assess relative mental health provider staffing needs across the services. If our recommendations were implemented, DHA would have access to consistent information about mental health provider staffing needs throughout the MHS and would be able to more reliably report this information to Congress. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix provides results from our analysis of Department of Defense (DOD) fiscal year 2013 quarterly mental health staffing reports. Each military service and the National Capital Region (NCR) Medical Directorate submit these reports to the Defense Health Agency (DHA) through the Office of the Assistant Secretary of Defense for Health Affairs (OASD HA) human capital office each quarter to identify their mental health staffing levels and needs. Figure 7 shows the total number of mental health providers working within the MHS by provider type as of September 2013. Figure 8 shows the total number of mental health providers working within the MHS by employment category as of September 2013. Figure 9 shows the total number of mental health providers working within the MHS by military service and the NCR Medical Directorate as of September 2013. Table 1 shows the mental health provider staffing levels for the Army as of September 2013. Table 2 shows the mental health provider staffing levels for the Air Force as of September 2013. Table 3 shows the mental health provider staffing levels for the Navy as of September 2013. Table 4 shows the mental health provider staffing levels for the NCR Medical Directorate as of September 2013. This appendix provides information on the recruitment and retention of Department of Defense (DOD) mental health providers. Specifically, we discuss (1) the mechanisms the military services use to recruit and retain mental health providers, and (2) the challenges the military services experience in recruiting and retaining mental health providers. 
To determine the mechanisms the military services use to recruit and retain mental health providers, we reviewed relevant laws, including each National Defense Authorization Act (NDAA) from fiscal years 2010 through 2014, to determine the recruitment and retention mechanisms available to DOD for mental health providers. We also spoke with officials from the Office of the Assistant Secretary of Defense for Health Affairs (OASD HA) and the Army, Air Force, and Navy about their use of these mechanisms. To determine the challenges the military services experience in recruiting and retaining mental health providers, we spoke with officials from OASD HA and the Army, Air Force, and Navy. We also reviewed the Health Resources and Services Administration’s Health Professional Shortage Area designations to determine whether other health care delivery systems also experienced challenges in recruiting and retaining certain mental health providers. All three military services reported using numerous recruitment and retention mechanisms, many of which are cited in the NDAA for Fiscal Year 2010. These mechanisms include the following: Health Professions Scholarship and Financial Assistance Program (HPSP). Officials from all three military services reported using this program to recruit various types of medical providers, including mental health providers. Through HPSP, the military services provide scholarships, stipends, and other benefits for students in advanced health care fields—including physicians, psychiatric nurse practitioners, and psychologists. The military services reported that HPSP was a particularly important recruitment tool for physicians, including psychiatrists. However, officials from all three military services stressed that they cannot predict the exact number of psychiatrists HPSP will produce annually because it begins funding medical students’ general training prior to their selection of a specialty. Uniformed Services University of the Health Sciences (USUHS).
Generally, USUHS students do not pay tuition and receive full salary and benefits for a junior officer (second lieutenant or ensign) in exchange for a 7-year active duty military service commitment. The number of USUHS psychology students by service is Army, three students; Air Force, two to three students; and Navy, five students. Bonuses for mental health providers. Officials from all three military services reported using a variety of bonuses for mental health providers. Specifically, the Army reported using accession, relocation, and retention bonuses for both military and civilian mental health providers. The Air Force reported that it provides accession bonuses to fully-qualified military mental health providers, as well as bonuses for specialty board certification and a retention bonus after providers have completed a specified number of years of service. Finally, the Navy reported that all mental health specialties are eligible for some combination of accession and retention bonuses and board certification pay. Direct-hire authority for civilian mental health providers. Both the Army and Navy reported using direct-hire authority to recruit civilian mental health providers. The Office of Personnel Management can grant direct-hire authority to executive branch agencies to fill vacancies when a critical hiring need or severe shortage of candidates exists. Direct-hire authorities expedite hiring by eliminating some competitive hiring procedures, such as rating and ranking candidates, that would otherwise be required. Agencies may also pursue agency-specific direct-hire authorities. Training program for licensed clinical social workers. Both the Army and Navy reported using the Army’s training program for licensed clinical social workers. In 2008, the Army created a program for training licensed clinical social workers with Fayetteville State University to address a shortage of Army social work military providers.
This program provides participants with a Master’s in Social Work and internship placements. The program annually trains up to 30 Army social work military providers, 5 Army National Guard social work military providers, and 2 Navy social work military providers. Army officials reported that this program is meeting all of the Army’s annual need for social work military providers, and Navy officials told us that this program was an important recruitment tool for their social work military providers as well. In the face of nationwide shortages of mental health professionals, the ability to recruit and retain mental health providers, particularly psychiatrists, poses a challenge according to officials from all three military services. The Health Resources and Services Administration has reported nationwide shortages of psychiatrists and identified 3,900 health professional shortage areas throughout the nation with a relative scarcity of psychiatrists. As of January 2014, the Health Resources and Services Administration reported that it would take approximately 2,600 additional psychiatrists nationwide to eliminate the current shortages it has identified. In addition to nationwide shortages of mental health professionals, there are other overarching military-specific challenges for all three military services as they compete for scarce mental health resources. Mental health provider recruitment and retention challenges specific to military service include: Frequent deployments and relocations. Officials from all three military services reported that both frequent deployments and relocations made it difficult for them to recruit and retain mental health military providers. For example, Navy officials told us that they have received feedback from psychiatrists leaving military service that requirements to move frequently and deploy were reasons they were leaving the Navy. Assignment to work in remote locations.
According to officials from all three military services, the remote locations where many military treatment facilities are located posed recruitment and retention challenges for mental health providers. For example, Army officials explained that many Army bases are located relatively far away from major metropolitan areas and that mental health military, civilian, and contract providers are reluctant to be located in what they perceived to be remote and isolated locations for lengthy periods of time. Competitive compensation for mental health providers. Officials from all three military services reported that the inability of DOD to create compensation packages for civilian mental health providers, particularly psychiatrists, that were competitive with private sector compensation affected their ability to recruit and retain these providers. For example, Army officials stated that both a 3-year-long DOD pay freeze and recent furloughs affected their ability to create competitive salaries for providers and contributed to the Army's 15 percent turnover rate in its psychiatrist and psychologist mental health provider populations in recent years. This appendix provides results from our analysis of Department of Defense (DOD) quarterly mental health staffing reports for fiscal years 2009 and 2013. Each service and the National Capital Region (NCR) Medical Directorate submits these reports to the Defense Health Agency (DHA) through the Office of the Assistant Secretary of Defense for Health Affairs (OASD HA) human capital office each quarter to identify their mental health staffing levels. The NCR Medical Directorate is not represented in this appendix because in fiscal year 2009 mental health provider staffing levels were included in the military service totals and, as a result, comparisons of NCR Medical Directorate staffing levels from fiscal year 2009 to fiscal year 2013 are not available.
Table 5 provides results for mental health provider staffing levels for the Army in fiscal year 2009 and fiscal year 2013. Table 6 provides results for mental health provider staffing levels for the Air Force in fiscal year 2009 and fiscal year 2013. Table 7 provides results for mental health provider staffing levels for the Navy in fiscal year 2009 and fiscal year 2013. Randall B. Williamson, (202) 512-7114 or williamsonr@gao.gov. In addition to the contact named above, Marcia A. Mann, Assistant Director; A. Elizabeth Dobrenz; Mary Giffin; Cathleen Hamann; Katherine Nicole Laubacher; Vikki Porter; Dharani Ranganathan; and Laurie F. Thurber made key contributions to this report. Jacquelyn Hamilton provided legal support.
Mental health providers are essential to DOD's delivery of health care to servicemembers and other beneficiaries. DOD's need for these providers has grown as increasing numbers of servicemembers experience life-threatening combat situations. This led to congressional attention—such as the NDAA for Fiscal Year 2010, which included provisions to help DOD increase the number of mental health providers it employs. GAO was asked to review DOD's efforts to increase its mental health provider workforce. Among other objectives, GAO examined (1) how staffing levels changed in response to congressional direction and (2) how DOD and the military services assess current and future needs for mental health providers. GAO reviewed DOD's mental health staffing estimation model and the military services' quarterly mental health provider staffing reports for fiscal years 2009 through 2013, the latest information available. GAO also interviewed DOD and military service officials responsible for assessing mental health staffing needs. In response to the enactment of the National Defense Authorization Act (NDAA) for Fiscal Year 2010, the Department of Defense (DOD) military health system (MHS) increased its mental health provider staffing level by 34 percent. Specifically, DOD increased the number of mental health providers across the MHS from 4,608 providers in fiscal year 2009 to 6,186 providers in fiscal year 2013. Social workers and psychologists were the most frequently added types of mental health providers during this period. In 2007, DOD created the Psychological Health Risk-Adjusted Model for Staffing (PHRAMS) to assess the MHS's current and future mental health provider staffing needs and DOD annually revises this model. 
Fiscal year 2014 marked the first time the model was used by the three military services responsible for providing health care—the Army, Air Force, and Navy—for a common purpose, which was the development of DOD's fiscal year 2016 budget request for mental health services. However, GAO found that the military services either were not using PHRAMS as the main basis of their mental health provider staffing needs estimates or were supplementing PHRAMS results with other service-specific methods. The services reported making these adjustments because PHRAMS does not account for factors that are crucial to assess mental health provider staffing needs, such as mental health providers needed for deployments. As a result, the military services' estimates of mental health provider staffing needs may not consistently reflect the beneficiary demand for mental health providers across the military services, and the current version of PHRAMS may not fully capture the military services' needs. GAO recommends that the military services report on service-specific or supplemental processes for generating mental health provider staffing estimates and that DOD continue to refine its staffing estimation model. DOD generally concurred with these recommendations, but did not concur with two others related to the use of PHRAMS that are also included in the report. GAO continues to believe these recommendations are valid as discussed further in the report.
The United States has a long history of refugee resettlement, but there was no formal program for the resettlement and admission of refugees until the Refugee Act of 1980 (Refugee Act) amended the Immigration and Nationality Act (INA) to, among other purposes, establish a more uniform basis for the provision of assistance to refugees. Under the INA, as amended, an applicant seeking admission to the United States as a refugee must (1) not be firmly resettled in any foreign country, (2) be determined by the President to be of special humanitarian concern to the United States, (3) meet the definition of refugee established in U.S. immigration law, and (4) be otherwise admissible to the United States as an immigrant under U.S. immigration law. Under USRAP, USCIS officers determine an applicant's eligibility for refugee status by assessing whether the applicant has, among other things, credibly established that he or she suffered past persecution, or has a well-founded fear of future persecution, and that he or she is not otherwise statutorily barred from being granted refugee status or admission to the United States. Among other things, USCIS officers may not classify an applicant as a refugee or approve an applicant for refugee resettlement in the United States if he or she: has participated in the persecution of any person on account of race, religion, nationality, membership in a particular social group, or political opinion; is inadmissible for having engaged in terrorist activity or associating with terrorist organizations; is inadmissible on certain non-waivable criminal or security grounds; or is firmly resettled in a foreign country. Under USRAP, cases may be presented for USCIS adjudication with a single applicant or may include a principal applicant with certain family members. All applicants on a case must be deemed admissible, but only the principal applicant must prove his or her past persecution or fear of future persecution.
Before the beginning of each fiscal year and after consultation with Congress, the President is to establish the number of refugees who may be admitted to the United States in the ensuing fiscal year (i.e., a "ceiling"), with such admissions allocated among refugees of special humanitarian concern to the United States (e.g., by region or country of nationality). For example, the administration proposed and met a ceiling of 85,000 refugees for fiscal year 2016 (including a goal of admitting 10,000 Syrian refugees) and established a ceiling of 110,000 for fiscal year 2017. Since 2001, annual ceilings for refugee admission have generally been between 70,000 and 80,000 admissions; in the early 1990s, the ceilings exceeded 100,000 admissions. Actual admissions of refugees into the country have been at or below the ceiling in recent years. For example, the combined ceiling for fiscal years 2011 through 2016 was 451,000, a period during which the United States admitted about 410,000 refugees. Figure 1 shows refugee admissions by region during this time period. There are a number of steps in the USRAP screening process for applicants. Figure 2 provides an overview of the refugee screening process. Program access. First, State and USCIS make initial determinations about whether an individual will be accepted into or excluded from USRAP (referred to as program access) for subsequent screening and interview by USCIS officers. There are multiple mechanisms by which State and its partners receive USRAP applications. For example, most applicants are referred to USRAP by UNHCR, but applicants who meet certain criteria can apply directly.
State has identified three categories of individuals who are of special humanitarian concern and, therefore, can qualify for access to USRAP: Priority 1 (P1), or individuals specifically referred to USRAP generally because they have a compelling need for protection; Priority 2 (P2), or specific groups, often within certain nationalities or ethnic groups in specified locations, whose members State and its partners have identified as being in need of resettlement; and Priority 3 (P3), or individuals from designated nationalities who have immediate family members in the United States who initially entered as refugees or who were granted asylum.
Access: Determination by the Department of State and its U.S. Refugee Admissions Program (USRAP) partners of whether the applicant qualifies for the U.S. Citizenship and Immigration Services (USCIS) adjudication based on whether he/she is of special humanitarian concern (i.e., if he/she is within a Priority 1, Priority 2, or Priority 3 category), among other things.
Adjudication: USCIS's process for deciding whether to approve or deny an applicant for refugee status. The adjudication process includes, among other things, at least one in-person interview; security checks; and, in some instances, additional review of the applicant's case to address national security concerns.
Approved application: Determination by a USCIS officer that the applicant meets the refugee definition and is otherwise eligible for resettlement in the United States, and will subsequently be processed for travel to the United States.
Security Advisory Opinion. Security vetting partners conduct biographic checks of certain applicants who are members of groups or nationalities designated by the U.S. government as requiring more thorough vetting.
Interagency Check. Partners, including NCTC and elements of the intelligence community, screen biographic data of all refugee applicants within a designated age range against intelligence and law enforcement information within their databases and security holdings.
Specifically, all refugee applicants within a certain age range are required to undergo an Interagency Check. Further, security vetting partners are to continuously check interagency refugee applicant data against their security holdings through a refugee's admission to the United States and, in some instances, after an applicant's arrival and admission to the United States. Through these checks, applicants are screened for indicators that they might pose a national security or fraud concern or have immigration or criminal violations, among other things. USCIS and FBI officials have testified at congressional hearings that security checks are limited to the records available in U.S. government databases (which may include information provided by foreign governments and other information on foreign nationals). According to State SOPs, security check responses are communicated through WRAPS, and RSC staff include them in the case file provided to the USCIS officer adjudicating the application. If at any time an applicant is identified as having a match for the Security Advisory Opinion or Interagency Check, the case is to be placed on hold. For Security Advisory Opinion results that are completed before the USCIS interview, State officers are to review any matches to determine if they relate to the applicant and should preclude the applicant from access to the USRAP. USCIS is responsible for reviewing security check results that are completed after the USCIS interview. Further, the CLASS check may require a Security Advisory Opinion or additional DHS review. Once prescreening is complete and RSC staff have received the results of certain security checks, they are to notify State and USCIS that the applicant is ready for interview and adjudication. Based on policy, DHS is to conduct an additional review of Syrian and certain other applicants prior to adjudication as part of prescreening. USCIS Adjudication. Third, USCIS adjudicates applications.
USCIS coordinates with State to develop a schedule for refugee interviews each quarter of the fiscal year. USCIS officers conduct individual, in-person interviews overseas with applicants to help determine their eligibility for refugee status. RAD and IO—within USCIS’s Refugee, Asylum, and International Operations (RAIO) Directorate—share responsibility for adjudicating USRAP cases. In 2005, USCIS created the Refugee Corps, a cadre of USCIS officers within RAD who, according to USCIS officials, are to adjudicate the majority of applications for refugee status. These officers are based in Washington, D.C., but they travel to multiple locations for 6 to 8 weeks at a time (called circuit rides), generally making four trips per year, according to RAD officials. In addition, IO officers posted at U.S. embassies overseas can conduct circuit rides and interviews in embassies to adjudicate refugee applications, among other responsibilities. Before or during the circuit ride, USCIS officials are to take the applicants’ fingerprints, which are screened against DHS, Department of Defense, and FBI biometric databases, and if new information from the biometric check raises questions, USCIS officers may ask additional questions at the interview, require additional interviews, or deny the case. In addition, if USCIS officers identify new biographic information during the interview, such as an alias that was previously unknown or not disclosed to RSC staff, that information is vetted through the biographic security checks described above, per State and DHS policy. The officers are to place these applications on hold, pending the outcome of these checks. Further, consistent with USCIS policy, officers are required to place a case on hold to do additional research or investigation if, for example, the officer determines during the interview that the applicant may pose a national security concern. 
Based on the interviews and security checks conducted, USCIS officers will either approve or deny an applicant’s case. USCIS supervisory officers are to review 100 percent of officers’ adjudications, according to USCIS policy. Final processing and travel to the United States. If USCIS approves an applicant’s refugee application, RSCs are to generally provide the applicant with cultural orientation classes on adjusting to life in the United States, facilitate medical checks, and prepare the applicant to travel. Prior to admission to the United States, applicants are subject to the standard CBP and Transportation Security Administration vetting and screening processes applied to all travelers destined for the United States by air. CBP is to inspect all refugees upon their arrival at one of seven U.S. airports designated for refugee arrivals and make the final determination about whether to admit the individual as a refugee to the United States. From fiscal year 2011 through June 2016, WRAPS data indicate that USRAP received about 655,000 referrals and applications, associated with about 288,000 cases. As figure 3 indicates, during this time frame, more than 75 percent of applications were from refugees fleeing 6 countries—Iraq, Burma, Syria, Somalia, the Democratic Republic of Congo, and Bhutan—and the number of applicants from certain countries has changed over time. For example, the number of Bhutanese and Burmese applications decreased, but the number of Syrian and Congolese applications increased. State officials said that UNHCR submitted a large number of P1 Syrian referrals to USRAP in fiscal year 2016 because more people were fleeing that country due to conflict and the goal of admitting 10,000 Syrian refugees. From October 2015 through June 2016, WRAPS data indicate that more than one-third of USRAP applicants were Syrian. In addition to nationality, USRAP applicants’ characteristics varied in other ways. 
For example, as shown in figure 4, applications to USRAP from fiscal year 2011 through June 2016 were largely split between the P1 and P2 categories, and about two-thirds were processed in one of three RSCs (Middle East and North Africa, Africa, and East Asia). Further, 75 percent of applicants were associated with cases that included immediate family members (which includes a spouse and unmarried children under the age of 21), while 25 percent of cases included only 1 individual. At any given time, there are a number of applicants at different stages of the USRAP process. According to State and RSC officials, State and USCIS process applications in the general order they were received. For example, table 1 shows that, of the applications received in fiscal year 2011, as of June 2016, 56 percent were approved and admitted to the United States, 13 percent were still in process (pending access to USRAP, actively being processed, or on hold), and 31 percent were closed before the applicant completed the USRAP process. By comparison, as of June 2016, almost 70 percent of applications received in fiscal year 2015 were in process. Program Access. Of the total number of applications received from fiscal year 2011 through June 2016 (about 655,000), State and its USRAP partners made access determinations for about 590,000 of that amount—569,000 (or 96 percent) of which they accepted, as of June 2016. As described earlier, State and its USRAP partners make the initial determination on whether to grant an applicant access (accept) to USRAP for subsequent screening and interview by USCIS officers. According to State officials, one reason the acceptance rate is high is that State Refugee Coordinators stationed overseas provide feedback to UNHCR on the types of P1 applications that are not likely to be accepted or ultimately approved by USCIS officers.
Further, according to State officials, State coordinates with UNHCR and USCIS to develop predefined eligibility criteria for certain P2 groups, and applicants meeting those criteria may access USRAP once UNHCR submits the application to State. For example, State and UNHCR created a new P2 group in 2015 for Congolese who fled to Tanzania. To be part of the P2 group, applicants must have registered with UNHCR and verified their residence in the Nyarugusu refugee camp. From fiscal year 2011 through June 2016, acceptance of applications to USRAP for adjudication varied by nationality of the applicants. For example, excluding pending applications, USRAP partners did not accept 8 percent of Iraqi applicants. USRAP partners also did not accept 4 percent of Syrian applicants, and did not accept less than 1 percent of Burmese and Somali applicants. According to State officials, the most common reason why applicants are not accepted is that they fail to meet criteria to access USRAP. For example, according to State officials, acceptance rates were lower for Iraqi applicants because some Iraqis could not prove their association with the United States—a requirement under various P2 programs. As part of the adjudication process, USCIS officers are to confirm that applicants were appropriately granted access to USRAP. WRAPS data from fiscal year 2011 through June 2016 show that USCIS officers confirmed that over 99 percent (all but about 1,000 out of 351,000) of the applicants interviewed were appropriately granted access to USRAP (i.e., qualify for adjudication by USCIS), as of June 2016. USCIS Adjudications. According to WRAPS data, as of June 2016, USCIS officers interviewed about 62 percent (351,000) of the applicants who were granted access to USRAP from fiscal year 2011 through June 2016. USCIS officers approved 89 percent (314,000 of 351,000) and denied 7 percent (24,000) of these applications. Approval rates varied by RSC (see fig. 5).
Applications may also be put on hold for a number of reasons. For example, holds may occur because of security check results, because a USCIS officer did not have sufficient information at the time of the interview to approve or deny the applications associated with the case, or because new information came to light after the interview. For applications in our time period of analysis, WRAPS data indicate that 12 percent (about 81,000) were on hold as of June 2016. USCIS officials stated that they would make a final decision on these cases after receiving additional information, which could include outstanding or additional security check results, information from family members' cases, and additional interviews. About 24 percent (138,000) of the applicants who were granted access to USRAP from fiscal year 2011 through June 2016 were awaiting interviews with USCIS (i.e., the applicant had an active case or a case that was on hold but had not received an interview), as of June 2016. RSC Middle East and North Africa (58,000) and RSC Africa (40,000) had the largest number of applications awaiting interviews. Some applicants have waited years to receive a USCIS interview. For example, according to WRAPS data, about 9,000 applications submitted in fiscal year 2011 or 2012 were active in June 2016 and the applicants had not yet received a USCIS interview. About 87 percent of these applications were applicants from Iraq or Somalia. In addition, there were about 6,000 applications received in fiscal years 2011 and 2012 that were on hold and had not received a USCIS interview, 93 percent of whom were from Iraq, Somalia, or Burma. According to State officials, the security situations in Iraq and a refugee camp on the border of Kenya and Somalia where many Somali applicants are located have made it difficult to schedule USCIS interviews at certain times in these locations, among other reasons.
For applications received from fiscal year 2011 through June 2016 with security check results noted in WRAPS, the Interagency Check was the check that most often returned a result of "not clear," based on thresholds set by an interagency process including the intelligence community and law enforcement agencies. However, "not clear" results—meaning the checks identified security or fraud concerns—represented a small percentage of all results for each of the three biographic checks and the fingerprint check, as of June 2016. Further, of the applicants who were admitted to the United States or had closed applications as of June 2016, the median number of days from initiation of the biographic security checks (at the time of the RSC prescreening interview) through the last completed Interagency Check (which is often the last check prior to departure for the United States) was 247 days. According to WRAPS data, the overwhelming majority of the about 227,000 applicants from fiscal year 2011 through June 2016 who were admitted to the United States as refugees had "clear" security check results, as of June 2016. However, one applicant who was admitted to the United States in 2012 had his security check status change from "clear" to "not clear" days before his planned travel. The security vetting process at that time did not account for responses from a vetting agency that had not been specifically requested and, therefore, an additional check of security vetting responses after receipt of a final response of "clear" had not been conducted. According to State officials, when the RSC realized the applicant had a "not clear" response, it notified local USCIS officials immediately. USCIS data show that the refugee has since adjusted to legal permanent resident status.
According to a USCIS branch chief, at the time of the individual’s adjustment application, the derogatory information (which predicated the “not clear”) had been resolved and there was no basis for USCIS to deny the individual’s adjustment application. State has since updated security SOPs to require RSCs to run daily reports to check if any applicants with imminent travel plans have received an unsolicited Interagency Check “not clear.” The length of time to process a USRAP application varies. For example, of the applicants who applied from fiscal year 2011 through June 2016 and had been admitted to the United States, as of June 2016, 27 percent were processed in less than 1 year, 47 percent between 1 and 2 years, and 26 percent in more than 2 years. Figure 6 shows the cumulative length of time (median number of days) of key phases in the USRAP process. The lengthiest phase was from the time USCIS approved the applicant through arrival in the United States (a median of 189 days). State and RSCs have various policies and SOPs, trainings, and quality checks related to refugee case processing and prescreening. Policies and SOPs for USRAP. State’s USRAP Overseas Processing Manual provides an outline of the policies and procedures involved in overseas processing for USRAP, including instructions for using WRAPS, requirements for what information RSCs should collect during prescreening, and instructions and requirements for initiating certain national security checks, among other things. In addition, State developed SOPs for processing and prescreening refugee applications at RSCs, which State officials indicated provide baseline standards for RSC operations. Further, all four of the RSCs we visited provided us with their own local SOPs that incorporated the topics covered in State’s SOPs. Directors at the remaining five RSCs also told us that they had developed local SOPs that covered the overarching USRAP requirements. 
We observed how RSC staff implemented State's case processing and prescreening policies and procedures during our site visits to four RSCs from June 2016 to September 2016. Specifically, we observed 27 prescreening interviews conducted by RSC caseworkers at the four RSCs we visited and found that these caseworkers generally adhered to State requirements during these interviews. For example, RSC caseworkers we observed reviewed applicants' identification documents (e.g., passport, birth certificate, or marriage certificate) and recorded name variations (e.g., alternate spellings to confirm identity and for use in security checks); recorded "family tree" information (for security checks and to confirm family relationships for subsequent applicants); and recorded the applicants' flight paths and persecution stories (to be used by USCIS officers in their interviews and to determine if the applicant qualifies as a refugee). In one location, we observed that RSC caseworkers were not consistently asking applicants during the prescreening interviews if they had any other aliases or nicknames. Further, USCIS officers identified the same issue in three separate RSCs, during circuit rides in fiscal years 2014, 2015, and 2016, according to RAD trip reports. Asking about aliases and nicknames is an expected practice for all RSC staff conducting prescreening interviews, according to State and RSC officials, because the information could be useful and important during an applicant's biographic national security checks. Further, State officials said that if aliases are not identified prior to the USCIS interview, it may delay processing because when USCIS officers identify additional names, the RSCs must resubmit security checks.
We brought the issue to the attention of RSC and State management, and, in response to our observations, the RSC revised its local SOPs to more clearly instruct RSC caseworkers to ask applicants if they had any aliases or nicknames, and State revised its prescreening SOPs and informed all other RSCs of the change. In addition, we observed how RSC staff in all four locations implemented additional required procedures during our site visits, such as initiating required security checks through WRAPS and compiling case file information for USCIS interviewing officers, and found that these RSC staff were complying with SOPs. Further, all nine RSC directors we interviewed stated that they were familiar with State’s requirements for their location and reported implementing them. Training on USRAP requirements. On the basis of our analysis of State’s cooperative agreements, RSC monitoring documents submitted to State, and interviews with State headquarters’ officials and all nine RSC directors, we found that these RSCs reported having various trainings for their staff. According to State officials, they have not developed specific training requirements for all RSCs because each RSC has different needs and conditions requiring individualized training programs. All nine RSC directors with whom we spoke said they have training programs ranging from technical trainings (e.g., WRAPS or interview training) to shadowing programs in which newly-hired staff observe more experienced RSC employees performing their duties. During our September 2016 site visit to RSC Africa, for example, we observed new-hire training for RSC caseworkers, as well as more experienced caseworkers mentoring and coaching the newer staff. At RSC Latin America, according to the director, new staff receive 1 week of WRAPS training and observe more experienced caseworkers conduct prescreening interviews until the new staff member is able to conduct the interviews alone. Quality control checks. 
On the basis of our analysis of RSC monitoring documents submitted to State, cooperative agreements, observations at the four RSCs we visited, and interviews with State headquarters’ officials and all nine RSC directors, RSCs have quality control checks to oversee case processing and prescreening to help ensure that RSC staff collect accurate and reliable information. For example, at all four RSCs we visited, we observed staff conducting both electronic and manual quality control checks of case information. Specifically, after the prescreening interview, RSC staff in all four locations reviewed the hard copy case file and ran checks in WRAPS for errors or omissions. Further, all four RSCs we visited had a dedicated quality control unit that is to monitor data quality and review regular data monitoring reports. Moreover, RSC directors in the other five locations stated that they have similar quality control checks in place, and all nine RSC directors stated that there are quality control checks at every stage of the USRAP process from case creation in WRAPS to when refugees are about to depart for the United States. According to USCIS officials we interviewed at headquarters and in the field, RSCs generally provide the information that USCIS officers needed to adjudicate applications, but they also identified areas for improvement during some circuit rides. For example, all 10 USCIS officers that we interviewed who participated in the circuit rides associated with our site visits stated that the information gathered by RSCs during the prescreening process was generally accurate, complete, and useful. However, 4 of these officers stated that they have encountered some errors when RSCs provided case files with missing documents or information. In addition, 70 out of the 107 RAD trip reports we analyzed contained feedback on RSC activities. 
Of these 70 reports, 10 reports stated that RSCs generally prepared the cases well, but 45 reports identified concerns with the quality of certain case files, including missing documentation. According to USCIS officials, missing documentation can lead to delays during the circuit ride while RSC staff obtain and provide copies of the missing documents or USCIS officers obtain the missing information during the interview. In addition, USCIS officers may need to place the application on hold until the missing documentation can be obtained. USCIS officers and State officials we interviewed stated that some of the missing information could only have been obtained during the USCIS interviews with applicants, while others stated that applicants can forget or neglect to give RSC staff all of their documentation despite repeated reminders from RSC staff. Further, five of the nine RSC managers stated that they request USCIS officers submit feedback at the end of circuit rides on the quality of the case file content and interpreters, and three of these RSC managers stated that they take action based on USCIS’s feedback. For example, RSC managers in two locations stated that they have excluded certain interpreters—who are hired on daily contracts—from subsequent circuit rides based on the feedback from USCIS officers. Additionally, USCIS officials stated that their supervisory officers often meet with RSC staff throughout and at the conclusion of a circuit ride to offer feedback on case preparation, among other things. USCIS headquarters officials also offer feedback to State headquarters officials on RSC operations after circuit ride teams return to Washington, D.C. State has control activities in place to monitor how RSCs implement policies and procedures for USRAP, but it does not have outcome-based performance indicators to assess whether RSCs are meeting their objectives under USRAP. 
Consistent with State’s January 2016 Federal Assistance Policy Directive, and according to State officials, State is required to monitor the RSCs it funds, whether through cooperative agreements or voluntary contributions. State funds four RSCs through cooperative agreements, four through a voluntary contribution to the International Organization for Migration (IOM), and self-operates the final RSC (RSC Cuba). On the basis of our interviews with State officials and as reflected in documentation from all nine RSCs, including quarterly reports to State, all RSCs have generally undergone the same monitoring regime regardless of funding mechanism. The four cooperative agreements and MOU with IOM establish objectives for the RSCs, which include interviewing applicants to obtain relevant information for the adjudication and ensuring the accuracy of information in WRAPS and the case files. State also establishes annual targets for the number of refugees who depart for the United States from each RSC. In addition, the cooperative agreements between the RSCs and State specify that State will periodically visit and evaluate the general performance of RSC operations. They also require RSCs to provide State with regular written reports on whether performance is in compliance with all the terms and conditions of the agreement. Consistent with funding requirements, the four RSCs with cooperative agreements submitted quarterly reports to State in fiscal year 2015, for example, that included information on how each RSC is addressing USRAP objectives. The reports included the number of applicants prescreened each quarter, the number of approved applicants who received cultural orientation training, and how RSCs compile applicant information. In addition, the four RSCs operated by IOM submitted quarterly reports using the same template. 
Further, according to State officials, the department has dedicated Program Officers located in Washington, D.C., and Refugee Coordinators based in U.S. embassies worldwide, who are responsible for providing support to RSCs and monitoring their activities. State headquarters officials and Refugee Coordinators we met with at the four RSC locations we visited told us that they have daily, informal interaction—via telephone, e-mail, or in person—with the RSCs. State’s Program Officers also stated that they coordinate regularly with RSCs and conduct annual monitoring visits at RSCs to assess RSCs’ performance and complete monitoring reports based on their visits. We reviewed monitoring reports from eight State site visits to RSCs completed between 2015 and 2016 and found that some included narrative discussions of RSC case processing and timeframes, records management, coordination with other USRAP partners, and other topics. However, not all monitoring reports included consistent information on the same topics. For example, four of the eight monitoring reports we analyzed did not contain information on RSC case processing, prescreening interviews, and security check activities. Further, Program Officers are to complete separate monitoring reports for RSCs funded through cooperative agreements that assess the degree to which RSCs are making progress towards objectives based on project indicators. The indicators for RSCs, according to two fiscal year 2016 reports we reviewed, include the number of individuals prescreened and presented to USCIS for interview, the number of individuals who received cultural orientation training, the number of refugees that departed from those RSCs to the United States, and whether the RSCs ran security checks on all applicants. According to State officials, they also conduct daily monitoring of RSC activities through WRAPS data, which may be useful for monitoring RSC workload or data quality issues.
Although State has established objectives and monitors several quantitative goals for RSCs—including the number of refugees that depart each year for the United States and the number of applicants who receive cultural orientation training—it has not established outcome-based performance indicators for key RSC activities such as prescreening applicants or accurate case file preparation, or monitored RSC performance consistently across such indicators. Specifically, neither the quarterly reports nor other monitoring reports we examined have or use consistent outcome-based performance indicators from which State Program Officers could evaluate whether RSCs were consistently and effectively prescreening applicants and preparing case files—key RSC activities that have important implications for timely and effective USCIS interviews and security checks. RSCs collect performance information from USCIS officers through surveys or in-person feedback sessions at the end of circuit rides, which could help inform the development of outcome-based performance indicators. For example, the survey asks USCIS officers to rate the quality of the RSC staff’s documentation of the applicants’ persecution claim. State could develop an indicator from this information and measure progress against it. According to State’s January 2016 policy directive, all assistance awards made by bureaus, offices, and posts—both domestic and overseas—within the department with assistance-awarding authority should have a monitoring plan that includes goals, objectives, and indicators that are outcome-oriented and capable of measuring the recipient’s progress in meeting these goals. In addition, according to State’s Performance Management Guidebook, a program requires a systematic process for monitoring the achievement of program activities; analyzing performance to track progress toward planned results; and using performance information and evaluations to influence program implementation and results.
The guidebook states that each bureau, program, or project should establish goals; have specific, measurable, outcome-oriented objectives; and develop and monitor performance indicators that focus on the results or effects caused by the program. Moreover, in accordance with GPRA, as updated by GPRAMA, performance measurement is the ongoing monitoring and reporting of program accomplishments, particularly towards pre-established goals, and agencies are to establish performance measures to assess progress towards goals. These measures should link program efforts to desired outcomes. While GPRAMA is applicable to the department or agency level, performance goals and measures are important management tools at all levels of an agency. State officials said that in September 2016 they began to staff a new policy section within State’s Office of Admissions, and staff within this section are to begin standardizing the reporting of monitoring efforts, among other things. In addition, as of March 2017, according to State officials, the department and IOM were in the process of revising the MOU to include, among other things, new monitoring and reporting requirements that include performance indicators. These officials also stated that, in future cooperative agreements, they plan to build on performance indicators developed by IOM while ensuring outcome-based results. However, as of March 2017, State did not have documentation or timelines for its plans to develop outcome-based performance indicators. Developing outcome-based performance indicators, as required by State policy and performance management guidance, and monitoring RSC performance against such indicators on a regular basis, would better position State to determine whether all RSCs are processing refugee applications in accordance with their responsibilities under USRAP.
USCIS has policies and procedures to determine how to assign officers—RAD, IO, and temporary duty officers from other USCIS divisions—on circuit rides to adjudicate USRAP applications. According to USCIS officials, each fiscal year, based on State’s determination of the estimated number of cases that will be ready for USCIS interviews, RAD and IO divide responsibility for the anticipated workload. In general, RAD officers adjudicate refugee applications in locations where the caseload is large, such as Jordan and Kenya. According to USCIS officials, IO officers generally adjudicate refugee applications in locations where the refugee caseload is small, such as Pakistan, or where IO has a permanent office presence, such as Moscow, Russia. In fiscal year 2016, USCIS interviewed 43,705 refugee cases comprising 120,919 individuals. RAD interviewed 36,706 of these cases, and IO interviewed 6,999. When RAD or IO does not have sufficient staff capacity to meet workload demands, USCIS solicits a pool of temporary duty (temporary) officers from offices throughout USCIS who have volunteered to adjudicate refugee applications on circuit rides, contingent upon receiving additional training. According to RAD headquarters officials, there are no restrictions on assigning temporary officers to particular circuit rides. RAD officials stated that they generally assign RAD and temporary officers based on officers’ availability and interview experience. Additionally, temporary officers may commit to doing one or more circuit rides over one or more fiscal years. USCIS headquarters officials acknowledged that applications in some locations are more difficult to adjudicate than others and some temporary officers may not be as proficient or experienced at adjudicating applications as permanent RAD and IO officers.
As a result, RAD and IO officials stated that, as resources permit, they try to place temporary officers on circuit rides with caseloads that are best suited for their experience level. RAD headquarters officials also stated that, historically, they have planned for temporary officers to conduct approximately 15 to 25 percent of RAD’s refugee interviews and expect temporary officers to continue to be part of their workforce plan. When the refugee ceiling increased to 85,000 in fiscal year 2016, RAD increased the number of temporary officers on circuit rides to meet immediate mission needs while also working to hire additional refugee officers. RAD officials stated that, in fiscal year 2016, temporary staff completed 41 percent of RAD’s interviews. As of March 2017, RAD officials stated that they have undertaken significant hiring efforts in the past year, reducing the need for temporary officers. In addition, IO officials stated that fewer than 25 percent of the officers who participated in IO circuit rides in fiscal year 2016 were temporary officers. As of March 2017, IO officials stated that they do not plan to use temporary officers to adjudicate refugee applications for the remainder of fiscal year 2017. USCIS has developed policies and procedures for adjudicating refugee applications. These policies and procedures apply to RAD, IO, and temporary officers and include policies and procedures for how officers are to review the case file before the interview and conduct the interview as well as how supervisors are to review applications to ensure they are legally sufficient.
For example, USCIS has developed a refugee application assessment tool that all officers are to use when interviewing the applicant to determine if the applicant was appropriately granted access to USRAP, had past persecution or a well-founded fear of persecution, is credible, is not a persecutor, and is admissible to the United States—including whether the applicant might be inadmissible due to national security or terrorism-related concerns. According to the assessment tool, at the time of interview, the USCIS officer is responsible for ensuring that appropriate security checks have been completed before making a decision on the application. Further, after receiving a completed refugee case file, supervisors are to review all forms and documents in the file that are relevant to establishing eligibility for refugee resettlement, including any documents provided by UNHCR or the RSC and those completed by the interviewing officer. Supervisors are to review the case file completed by the interviewing officer to ensure that the officer’s decision is legally sufficient, that the officer has reviewed security check results, and that all sections of the refugee application assessment are accurate and complete. In addition, USCIS has developed policies and procedures for determining when to place applications on hold. Specifically, officers may place an application on hold when the officer cannot make a final decision at the time of the interview—for example, if the outcomes of all required security checks are not yet available or if national security indicators requiring additional research become known to the officer at any point during the interview. We observed 29 USCIS RAD refugee interviews (including interviews by RAD officers and temporary officers) at four RSCs that we visited from June 2016 to September 2016 and found that the interviewing officers completed all parts of the assessment tool and placed cases that had pending security checks on hold, as required. 
We also observed that the USCIS officers documented the questions they asked and the answers the applicants provided. In addition, we observed RAD supervisors while they reviewed officers’ initial decisions, interview transcripts, and case file documentation, consistent with RAD policy, at two of the sites we visited. Further, all six of the USCIS officers that we met with stated that supervisors conducted the required supervisory case file review during their circuit rides, and the four supervisory officers we met with were aware of the requirements and stated that they conducted the supervisory reviews. According to USCIS policy, all USCIS officers who adjudicate refugee applications must complete specialized training, and the training varies based on the USCIS division of the officer (for example, Asylum or Refugee Affairs). However, temporary officers receive a condensed version of the trainings received by full-time refugee officers and do not receive in-field training. Figure 7 shows the training USCIS officers receive depending on whether they are in RAD, IO, or another USCIS division (i.e., a temporary officer). Protection Training. RAD training requirements for refugee officers state that officers are to attend a 4-week, in-person training (referred to as refugee basic training) that is specific to the refugee adjudication process, including classroom sessions, a written exam, and at least three mock interviews. IO headquarters officials stated that IO officers are to attend refugee basic training before adjudicating refugee applications. To adjudicate refugee applications, temporary officers are to have received “protection training,” the content of which varies based on the experience and qualifications of the temporary officer.
Specifically, RAD’s training requirements for temporary officers state that officers who have not interviewed refugees in the past year are to receive an abbreviated in-person training that is either 3 days (if the temporary officer has recent interviewing experience for RAIO directorates, such as an asylum officer) or 20 days (if the temporary officer does not have recent interviewing experience for RAIO directorates). On the basis of our review of training syllabi, topics in refugee basic training and temporary officer trainings include, at a minimum, applicable refugee and other immigration laws, refugee case file review, national security concerns, bars to refugee admission to the United States, and credibility assessment. Middle East Refugee Processing Training. Further, since October 2014, all officers (including temporary officers) who adjudicate applications that include applicants from Iraq and Syria are required to take the week-long Middle East Refugee Processing training. This training provides information to officers on the region’s history, specific country conditions, and additional training on indicators of potential national security concerns—such as military service history—for refugee applicants from Iran, Iraq, and Syria. This training includes briefings from law enforcement and the intelligence community. Predeparture briefings. In addition, prior to each RAD circuit ride, all officers, including temporary officers, who will adjudicate applications on that circuit ride are to receive a predeparture briefing that includes, among other things, any updated information on national security concerns and caseload trends for the particular circuit ride population. IO officials stated that their circuit rides are smaller and they do not always have formal predeparture briefings.
However, IO officials told us their officers spend time before circuit rides reviewing case files and researching any country conditions or policy updates that officers deem relevant to their cases. All 10 RAD or temporary officers we interviewed who adjudicated applications or reviewed cases on the RAD circuit rides we observed stated that the USCIS trainings—particularly the predeparture briefings—were valuable and helpful. In-field training. RAD officers receive 10 days of “in-field training” from a dedicated trainer on their first circuit ride. According to RAD training requirements and circuit ride trip reports, during the in-field training period, new officers are to observe experienced interviewers, conduct interviews on a reduced schedule, receive individual guidance and performance feedback, and discuss case-specific issues with the trainers. In particular, 3 of the 107 trip reports we analyzed covered circuit rides with in-field training and noted that the in-field training period was valuable for new officers. For example, one report stated that new refugee officers benefited greatly from the in-field trainer’s knowledge and ability to be fully available for training and development. IO officials stated that they do not require formal in-field training for all new officers. Some new IO officers receive formal in-field training on RAD circuit rides, while others on smaller circuit rides may be paired with a more experienced officer and receive in-field mentoring on their first circuit ride. However, temporary officers do not receive in-field training. Although temporary officers receive training prior to participating in circuit rides, we found that they sometimes face challenges adjudicating refugee applications. For example, we analyzed the 44 available trip reports completed from July 2014 through June 2016 from RAD circuit rides that included temporary officers. 
In 15 of the 44 reports (about one-third), the RAD circuit ride supervisors noted that temporary officers faced challenges adjudicating refugee applications on the circuit ride. For example, one report indicated that although temporary officers completed the required training and predeparture briefing, they seemed not to have retained much information. Other reports indicated that temporary officers stated that they did not feel prepared to conduct some aspects of administrative and interview processes or that most temporary staff on the circuit ride only began to grasp the full range of law, policy, and procedures after 4 to 5 weeks on the circuit ride. One report also noted that temporary officers required de facto mentoring on a daily basis. Further, USCIS headquarters officials and two interviewing officers we spoke with told us that some temporary officers make more errors than experienced officers, which contributes to inefficiencies, such as extra hours worked by supervisors. In addition, one temporary officer we observed stated that, despite the training she received, she felt unprepared to adjudicate cases on the first few days of her circuit ride. According to USCIS officials, all adjudications that are finalized have been determined by a supervisor to be legally sufficient. Consistent with USCIS policy, a supervisor is to review each case file and determine that the officer’s decision on the application is legally sufficient, among other things, before the decision is finalized. The supervisor may agree with the officer’s decision, request a reinterview for more information, or overturn the officer’s decision.
On the basis of their review of trip reports, USCIS headquarters officials stated that in 2016, they revised the Refugee Processing Overview training (which is required for temporary officers with recent RAIO interviewing experience, such as asylum officers) and the predeparture briefing content (up to 8 days) to include mock interviews and more practical exercises about the refugee adjudication process. We reviewed syllabi for the Refugee Processing Overview training and found that the training increased from 1 day, without mock interviews, in April 2015 to 3 days, with two mock interviews and additional practical exercises about national security and terrorism-related concerns, in March 2016. However, USCIS has not offered temporary officers the in-field training that RAD and IO officers receive. Standards for Internal Control in the Federal Government states that management should demonstrate a commitment to recruit, develop, and retain competent individuals. The standards also note that competence is the qualification to carry out assigned responsibilities, and requires relevant knowledge, skills, and abilities, which are gained largely from professional experience, training, and certifications. USCIS officials told us that temporary officers receive a number of accommodations to help them adjudicate applications—including a reduced ramp-up schedule on their first days of interviewing and a compilation of frequently used adjudication tools and guidance—but that USCIS is unable to offer in-field training to temporary officers due to resource constraints. In March 2017, RAD officials also stated that they have undertaken significant hiring efforts in the past year, reducing the need for temporary officers. IO officials also said IO does not plan to use temporary officers for the remainder of fiscal year 2017.
Nevertheless, interviewing officers, including temporary officers, play a critical role in the refugee adjudication process, and USCIS may use temporary officers to meet workload demands in the future. Some temporary officers have committed to working on three circuit rides in a 2-year period, and may interview hundreds of refugee cases over that time frame. Further, while enhancing training for temporary officers may require additional resources, the lack of experience and preparation among temporary officers has led to inefficiencies, as described by some USCIS supervisors. To the extent that USCIS uses temporary officers on future circuit rides, providing them with additional training, such as in-field training, would help better prepare them to interview refugees and adjudicate their applications, increase the quality and efficiency of their work, and potentially reduce the supervisory burden on those who oversee temporary officers. In addition to training, USCIS has developed guidance documents and tools to help officers identify USRAP applicants with potential national security concerns. However, USCIS could strengthen its efforts by developing and implementing a plan for deploying officers with national security expertise on selected circuit rides. USCIS provides a number of resources to officers to help them identify and address potential national security-related concerns in USRAP applications. Further, security check results (provided by interagency vetting partners) may help officers identify security concerns before the refugee interview. In addition, USCIS’s national security policies and operating procedures require that cases with national security concerns be placed on hold by interviewing officers. These cases are then reviewed by USCIS headquarters staff who have additional specialized training and expertise in vetting national security issues. 
These headquarters staff can clear the hold, deny the case, or refer the case back to USCIS officers for reinterview with suggested lines of questioning. Further, RAD maintains training lesson plans, guidance on particular issues (such as terrorism-related inadmissibilities), and country conditions information that is accessible to interviewing officers overseas. As discussed above, USCIS provides the most up-to-date guidance to interviewing officers during its predeparture briefings. While USCIS has training and guidance to adjudicate cases with national security-related concerns, USCIS trip reports and officers we interviewed indicated that it can be challenging to adjudicate such applications. About half of the trip reports we analyzed (52 of 107) identified national security concerns as a training need for future circuit rides or made policy or guidance requests regarding national security concerns. Of these 52 reports, 33 were from circuit rides with no temporary officers and 19 were from circuit rides with a mix of temporary officers and refugee officers. For example, some trip reports generally noted that officers had difficulty identifying what applicant characteristics would be considered potential national security concerns among certain populations being interviewed. In addition, one trip report stated that officers required repeated reminders to elicit all of the necessary details that headquarters reviewers would need to determine whether an applicant posed a national security concern. Further, one supervisor we spoke with during a site visit stated that guidance about identifying cases with national security indicators is ambiguous and, at times, contradictory. Moreover, both RAD and IO headquarters officials we met with stated that interviewing officers are hesitant to make decisions regarding cases with national security concerns, and, as a result, often place cases on hold that are ultimately determined not to have national security concerns. 
USCIS officials identified several reasons why it is challenging to provide training and guidance on how to adjudicate cases with potential national security concerns. For example, according to RAD and IO headquarters officials, indicators of national security concerns and the country conditions that give rise to them evolve and change; as a result, USCIS guidance on how to address those concerns also changes over time. To further help interviewing officers adjudicate cases with national security concerns, RAD initiated a pilot program in the second and third quarters of fiscal year 2016, through which it sent headquarters USCIS Security, Vetting, and Program Integrity (SVPI) unit officers with national security-related expertise to support interviewing officers on select circuit rides. During these circuit rides, according to RAD and SVPI officials, SVPI officers’ tasks included prescreening case files and applications for national security concerns, flagging those concerns, and recommending lines of questioning for USCIS interviewing officers. During our August 2016 visit to Amman, Jordan, we observed 16 interviews of applicants whose case files the SVPI officer had prescreened for potential national security-related issues; interviewing officers used notes that the SVPI officer provided to inform the questions they asked applicants. In some instances, the SVPI officer stated that he met with the interviewing officer before the interview to discuss potential questions on issues of national security concern that the SVPI officer anticipated might arise. Further, during the interview, the SVPI officer was available for real-time webchats to answer officers’ questions as they arose. The SVPI officer, circuit ride supervisor, and interviewing officers we spoke with in Amman all stated that having SVPI present on the circuit ride was valuable.
For example, one temporary officer, who was on her first circuit ride, stated that she could not imagine resolving cases with national security concerns without the in-field help of SVPI. USCIS headquarters officials stated that sending an SVPI officer resulted in a decrease in cases requiring headquarters review, although other factors also played a role in this decrease. For example, the on-site SVPI officer was able to resolve some cases with potential national security concerns in the field and was also able to prioritize cases requiring headquarters’ review due to potential national security concerns. In December 2016, RAIO and SVPI officials said they had determined that the SVPI in-field pilot was successful and that they plan to make it a formal part of select circuit rides in the future. These officials stated that they plan to continue to send SVPI officers on select circuit rides with caseloads high in potential national security-related issues, as resources permit. The officials stated that RAD selects which circuit rides will have on-site SVPI support based on several factors, including the number of cases placed on hold for national security-related concerns during previous circuit rides to certain locations and the availability of SVPI staff. To increase SVPI’s ability to support circuit rides, in December 2016, USCIS posted an SVPI job announcement with eight potential vacancies. The announcement stated that the job may require travel of up to 180 days per year on overseas circuit rides. The USCIS officials told us that, as of March 2017, they continue to work to fill these positions and are drafting an SOP that will provide guidance on roles and responsibilities for SVPI officers providing on-site support to RAD circuit rides, which they intend to finalize later in 2017. As of April 2017, USCIS reported having filled five of the eight positions.
Further, according to USCIS officials, SVPI initiated a test of the operational aspects of the draft SOP by deploying a supervisory SVPI officer on a RAD circuit ride in March 2017. USCIS officials reported in March 2017 that the draft SOP had not yet undergone a legal review. According to these officials, they expect to issue the SOP by July 2017. However, USCIS did not provide documentation or timelines for its plans to expand the use of SVPI officers on selected circuit rides. We have previously reported that, in developing new initiatives, agencies can benefit from following leading practices for strategic planning. Congress enacted GPRAMA to improve the efficiency and accountability of federal programs and, among other things, to update the requirement that federal agencies develop long-term strategic plans that include agencywide goals and strategies for achieving those goals. The Office of Management and Budget (OMB) has provided guidance in Circular A-11 to agencies on how to prepare these plans in accordance with GPRAMA requirements. We have reported in the past that, taken together, the strategic planning elements established under GPRA, as updated by GPRAMA, and associated OMB guidance, along with practices we have identified, provide a framework of leading practices that can be used for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. One of these leading practices is to define strategies and identify resources needed to achieve goals. Strategies should be designed to align activities, core processes, and resources to support the mission. Further, strategies should include milestones as well as a description of the resources needed to meet established goals. RAIO officials stated that they plan to deploy SVPI officers on additional circuit rides in the future.
While these plans are a positive step for helping officers address potential national security concerns on circuit rides, USCIS has not yet documented these plans or completed an SOP. Given SVPI’s lack of documentation for future plans and challenges identified by USCIS staff in adjudicating cases with potential national security concerns, it is unclear how or when RAIO will fulfill its plans to send national security experts on circuit rides to support interviewing officers. In light of the evolving and significant nature of national security concerns, developing and implementing a plan to deploy additional SVPI officers with national security expertise on circuit rides—including timeframes for deployment and how USCIS will select circuit rides for SVPI deployment—would better ensure that USCIS provides interviewing officers with the resources needed to efficiently and effectively adjudicate cases with national security concerns. USCIS has not conducted quality assurance assessments of refugee adjudications since fiscal year 2015 and has not developed plans for subsequent assessments, which help ensure that case files are completed accurately and that decisions by RAD, IO, and temporary officers are well-documented and legally sufficient. The RAIO Directorate conducted a quality assurance review of refugee adjudications in fiscal year 2015. The RAIO Directorate’s 2015 review included a sample of applications adjudicated by RAD and IO during one quarter of the fiscal year, which was not representative of all RAD and IO applications for the fiscal year. The 2015 quality assurance review found that most cases in the sample were legally sufficient. However, the review indicated that there were differences between RAD and IO adjudications. Specifically, the review rated 69 of 80 RAD case files (86 percent) as good or excellent, and rated 36 of 73 IO case files (49 percent) as good or excellent. 
Two of 80 RAD case files (less than 3 percent) in the review and 17 of 73 IO case files (23 percent) were rated as not legally sufficient. According to the assessment, USCIS placed these cases on hold or requested that RSCs schedule the applicants for reinterview. Among cases rated not legally sufficient, the most common deficiency identified was that interviewing officers did not fully develop the interview record with respect to possible inadmissibilities. Other deficiencies reported included interview records not being fully developed with respect to well-founded fear of persecution, improper documentation and analysis of terrorism-related inadmissibility concerns, incorrect hold determinations, and incomplete required sections of the assessment leading to the adjudication decision. RAIO identified issues related to training and guidance for IO officers as well as supervisory review that may have led to these deficiencies. RAIO developed six high-priority action items to address the identified deficiencies in the quality assurance review and, as of November 2016, RAD and IO officials have made progress toward implementing them. For example, in 2016, IO issued a memorandum with required qualifications for IO officers who conduct supervisory review of refugee applications and provided additional guidance on which questions officers must ask during the interview in specific locations to ensure legal sufficiency. RAD and IO officials stated that they have taken steps to implement other action items identified in the 2015 review, such as incorporating more national security-related research, as appropriate, into the case file reviews that IO officers complete as part of their circuit ride predeparture preparation. RAIO officials stated that they have not completed a quality assurance review since fiscal year 2015, and, as of March 2017, do not know whether they will do so in fiscal year 2017.
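The rates reported from the fiscal year 2015 quality assurance review can be recomputed directly from the reported counts; a minimal illustrative check (not part of the GAO review itself):

```python
# Recompute the reported legal-sufficiency rates from the 2015 quality
# assurance review sample (counts taken from the report text above).
ratings = {
    "RAD rated good or excellent": (69, 80),   # reported as 86 percent
    "IO rated good or excellent": (36, 73),    # reported as 49 percent
    "RAD not legally sufficient": (2, 80),     # reported as less than 3 percent
    "IO not legally sufficient": (17, 73),     # reported as 23 percent
}

for label, (count, total) in ratings.items():
    pct = 100 * count / total
    print(f"{label}: {count} of {total} case files = {pct:.1f} percent")
```

Each computed rate matches the percentage stated in the review (69/80 is 86.3 percent, 36/73 is 49.3 percent, 2/80 is 2.5 percent, and 17/73 is 23.3 percent).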
USCIS officials stated that they did not conduct a review in fiscal year 2016 for two reasons. First, in fiscal year 2016, RAIO officials stated that they faced resource constraints because they were focused on hiring and training new staff, and training and quality assurance are handled by the same team within RAIO. Second, the officials stated that there was value in allowing time for the action steps identified after the 2015 review to be implemented before conducting another review to identify if the action steps addressed the deficiencies noted in the prior review. RAIO officials also stated that even though they do not yet know whether they will conduct a quality assessment in fiscal year 2017, supervisors continue to review each refugee case file for legal sufficiency and completeness at the time of the interview. While supervisory review is an important quality control step, it does not position USCIS to identify systematic quality concerns, such as those identified in the fiscal year 2015 quality assessment results. USCIS’s January 2015 RAD and IO Roles and Responsibilities with Respect to Refugee Processing memorandum states that RAD is to establish quality assurance criteria and design the quality assurance program for refugee adjudications, in consultation with IO. The 2015 memorandum further states that RAD will conduct quality assurance reviews of refugee cases adjudicated by temporary officers, IO staff, and permanent RAD staff. Further, Standards for Internal Control in the Federal Government states that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. The scope and frequency of evaluations are to depend on the assessment of risks, effectiveness of ongoing monitoring, and rate of change within the entity and its environment. 
In addition, standard practices for program management state that program quality should be monitored on a regular basis to provide confidence that the program will comply with the relevant quality policies and standards. Although there have been significant changes in the refugee caseload in the past 2 years (such as the increase in Syrian refugees), an increased use of temporary staff to conduct refugee adjudications in fiscal year 2016, and the difference in quality between RAD and IO adjudications noted in the 2015 quality assurance review, USCIS did not conduct quality reviews in 2016 and has no plans to conduct them in 2017. Regular quality assurance reviews could help provide USCIS reasonable assurance that RAD officers, IO officers, and temporary officers are consistently, accurately, and sufficiently documenting their adjudication decisions. Conducting regular quality assurance assessments of refugee adjudications would also provide USCIS officials with key information about the quality of USCIS refugee adjudications and allow them to identify any areas where officers face challenges, allowing RAD and IO to target training or guidance to areas where it may be most needed. Fraud can occur in the refugee process in a number of ways, and State, RSCs, and USCIS have implemented certain mechanisms to help detect and prevent fraud by USRAP applicants. In general, immigration benefit fraud often involves the willful misrepresentation of material fact for the purpose of obtaining an immigration benefit, such as refugee status. Immigration benefit fraud is often facilitated by document and identity fraud. Document fraud includes forging, counterfeiting, altering, or falsely making any document, or using, possessing, obtaining, accepting, or receiving such falsified documents in order to satisfy any requirement of, or to obtain a benefit under, U.S. law. Identity fraud refers to the fraudulent use of others’ valid documents. 
In the context of USRAP, applicants may attempt to apply for refugee status after having been denied refugee status or another immigration benefit, such as a visa, using another identity. Or, applicants may falsely present themselves as a national of a country eligible for resettlement to gain access to USRAP. Further, applicants may present false marriage claims or attempt to include unrelated children on their case. USCIS officers can encounter indicators of fraud while adjudicating refugee applications, and State has suspended USRAP programs in the past because of fraud. Examples include the following: Of the 107 RAD circuit ride trip reports we analyzed, 30 reports identified instances in which officers denied applications for fraud or misrepresentation. According to an SVPI official, applications with indicators of fraud may also be denied on other grounds, such as ineligibility, inadmissibility, and security check results, among others. In 2008, State suspended the P3 program, a family reunification program between a family member in the United States and the refugee applicant, because of widespread fraud, as discussed below. In 2015, State suspended a P2 program after discovering that two individuals who had been approved as refugees and admitted to the United States had submitted fraudulent documents to gain access to USRAP. During the suspension period from March to December 2015, State and RSC officials reviewed all cases that they were processing for this P2 program. According to State officials, this review found additional applicants with fraudulent documents. State, RSCs, and USCIS have put mechanisms in place to help detect and prevent fraud by USRAP applicants. State. State has guidance intended to help RSC staff identify fraudulent refugee applicants, and State has strengthened access controls for some refugee applicants.
For example, State SOPs require that, when entering a new case into WRAPS for prescreening, RSC staff verify that a duplicate record does not already exist in WRAPS for the applicants. According to State SOPs, one of the purposes of this step is to identify individuals who attempt to fraudulently access USRAP. RSC officials at all four locations we visited stated that they complete this procedure and our analysis of WRAPS data showed that RSCs have identified duplicate applicant records. State has also strengthened its controls for granting access to USRAP for some groups of refugee applicants. For example, after suspending the P3 program due to fraud in 2008, State restarted the P3 program in 2012 with additional controls in place, including a requirement for DNA testing for all claimed parent and child biological relationships. In addition, when State initiated the P2 Central American Minors program in 2014—which, like the P3 program, requires a familial relationship between someone residing in the United States and the refugee applicant—State instituted a requirement for DNA testing of all claimed biological relationships between the qualifying child and the qualifying parent. Further, after finding fraud in 2015 in a P2 program, as discussed above, State strengthened the mechanism for verifying access to USRAP. RSCs. RSCs have also implemented a variety of controls to help detect and prevent fraud among refugee applicants to USRAP. For example, according to all nine RSC directors, each RSC has a designated antifraud official or entity, consistent with GAO’s Fraud Risk Framework. Officials at all nine RSCs stated that they provide staff with training or information on applicant fraud trends. Further, RSC officials in two RSCs stated that they conduct their own research to detect potential applicant fraud. In addition, two of the four RSCs we visited conduct two prescreening interviews for each applicant rather than one.
According to RSC officials, conducting more than one interview serves as a fraud deterrent because it allows the RSC staff to check for consistency across interviews and identify false information. Further, these RSCs require, where possible, that different interpreters participate in each interview to decrease the likelihood that applicants collude with interpreters. USCIS. Within USCIS, SVPI and adjudicators are responsible for antifraud activities related to the adjudication of the refugee application. USCIS has implemented a number of control activities to detect and prevent refugee applicant fraud. Through biometric checks, USCIS may identify that a USRAP applicant has multiple identities. According to SVPI officials, SVPI analyzes the results of the checks, identifies fraud indicators, and may complete a fraud referral so that the applicant can be interviewed or re-interviewed by an officer overseas to address the fraud concern. SVPI also receives fraud referrals from other sources, such as refugee officers and the RSCs, although SVPI officials stated that the number of such referrals is small. USCIS officials stated that, in many instances, interviewing officers deny an application with indicators of fraud on other grounds, which does not require the involvement of SVPI or a fraud referral. Interviewing officers may also place a case with indicators of fraud on hold for additional SVPI research. According to USCIS officials and training materials that we reviewed, USCIS officers who adjudicate refugee applications receive training in identifying fraud and processing cases with fraud indicators during basic training and predeparture briefings. We observed discussions about fraud trends at three of the four predeparture briefings that we attended. 
Additionally, the RAD trip report guide states that supervisors are to document any suspected fraud trends from the circuit ride, including how the fraud trend was identified, any actions taken in response to the trend, whether the trend was expected to continue, and examples of any suspected fraud. Of the 107 trip reports we analyzed, 72 contained information about applicant fraud or fraud trends. The information varied, ranging from detailed descriptions of individual cases denied due to misrepresentation or fraud to a more general description of potential fraud trends in certain populations, such as a lack of reliable marriage documentation. The remaining 35 reports stated that there were no fraud trends or left the section of the report about fraud trends blank, which indicates that the author of the trip report did not identify fraud trends on the circuit ride. State and USCIS have not jointly assessed applicant fraud risks across USRAP. Our Fraud Risk Framework calls for program managers to plan and conduct regular fraud risk assessments. According to our Fraud Risk Framework, there is no universally accepted approach for conducting fraud risk assessments, since circumstances among programs vary; however, assessing fraud risks generally involves five actions: (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and impact of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the program’s fraud risk profile. The framework provides managers with flexibility in deciding whether to carry out this and other aspects of fraud risk management at the program or agency level. 
In addition, Standards for Internal Control in the Federal Government states that management should consider the potential for fraud when identifying, analyzing, and responding to risks, and analyze and respond to identified fraud risks, through a risk analysis process, so that they are effectively mitigated. Although State and USCIS perform a number of fraud risk management activities and have responded to individual instances of applicant fraud, these efforts do not position State and USCIS to assess fraud risks program-wide for USRAP or know if their controls are appropriately targeted to the areas of highest risk in the program. State and USCIS officials told us that each agency has discrete areas of responsibility in the refugee admissions process, and each agency’s antifraud activities are largely directed at their portions of the process. State is responsible for managing USRAP at a programmatic level and, according to State officials, State has responded to instances of fraud in USRAP. State officials said that they have not conducted an assessment of the risks associated with applications to USRAP because, according to these officials, such an assessment is USCIS’s responsibility. However, USCIS officials told us that SVPI—USCIS’s antifraud entity for refugee applicant fraud—only has authority over antifraud activities related to the adjudication of the refugee application, including security checks. USCIS officials stated that they are not responsible for, and do not have the authority to respond to, applicant fraud program-wide in USRAP, although they coordinate with State when fraud is brought to the attention of SVPI. As of March 2017, SVPI has a draft Fraud Process SOP, which identifies three main types of applicant fraud in USRAP—individuals who are using multiple identities; individuals who are claiming false family composition, such as marriage fraud; and individuals who are claiming a false country of nationality. 
In addition, the draft SOP identifies the main sources by which USCIS detects fraud in the USRAP application process—results from biometric checks and testimony and evidence from the USRAP applicant. However, USCIS and State have not jointly conducted a fraud risk assessment of the risks associated with applications to USRAP or determined a fraud risk tolerance for the program. Because the management of USRAP involves several agencies, without jointly and regularly assessing applicant fraud risks and determining the fraud risk tolerance of the entirety of USRAP, in accordance with leading practices, State and USCIS do not have comprehensive information on the inherent fraud risks that may affect the integrity of the refugee application process and therefore do not have reasonable assurance that State, USCIS, and other program partners have implemented controls to mitigate those risks. Moreover, regularly assessing applicant fraud risks program-wide could help State and USCIS ensure that fraud prevention and detection efforts across USRAP are targeted to those areas that are of highest risk, in accordance with the program’s fraud risk tolerance. Screening and adjudicating refugee applicants and applications are challenging tasks that involve entities across the U.S. government. RSCs have an important role in the refugee admissions process because they collect applicants’ information and conduct in-person prescreening interviews that USCIS officers use to help determine applicants’ eligibility and credibility. Developing outcome-based performance indicators, as required by State policy and performance management guidance, and monitoring RSC performance against such indicators on a regular basis, would better position State to determine whether RSCs are processing refugee applications in accordance with their responsibilities under USRAP. In addition, adjudicating refugee applications can be challenging. 
During a face-to-face interview, USCIS officers must, among other things, determine if the applicant meets the definition of a refugee; is inadmissible because of, for example, national security concerns or criminal activities; and is credible. Further, indicators of national security concerns (and the country conditions that give rise to them) evolve and change. To the extent that USCIS uses temporary officers on future circuit rides, providing them with additional training, such as in-field training, would help better prepare them to interview refugees and adjudicate their applications, increase the quality and efficiency of their work, and potentially reduce the supervisory burden on those who oversee temporary officers. Moreover, developing and implementing a plan to deploy additional USCIS SVPI officers with national security expertise on select circuit rides would better ensure that USCIS provides interviewing officers with the resources needed to efficiently and effectively adjudicate cases with national security concerns. In addition, conducting regular quality assurance assessments of refugee adjudications would also provide USCIS officials with key information about the quality of USCIS refugee adjudications and allow them to identify any areas where officers face challenges, allowing RAD and IO to target training or guidance to areas where it may be most needed. Given that USCIS officers encounter indicators of fraud while adjudicating refugee applications and fraud has occurred in USRAP programs in the past, it is important that USCIS and State implement leading practices to combat fraud. Without jointly and regularly assessing applicant fraud risks and determining the fraud risk tolerance of USRAP, in accordance with leading practices, State and USCIS do not have comprehensive information on the inherent fraud risks that may affect the integrity of the refugee application process. 
Moreover, regularly assessing applicant fraud risks program-wide could help State and USCIS ensure that fraud prevention and detection efforts across USRAP are targeted to those areas that are of highest risk. To better assess whether RSCs are meeting USRAP objectives, the Assistant Secretary of State for Population, Refugees, and Migration should take the following two actions: develop outcome-based indicators, as required by State policy; and monitor RSC performance against such indicators on a regular basis. To better ensure that USCIS officers effectively adjudicate applications for refugee status, the Director of USCIS should take the following three actions: provide additional training, such as in-field training, for any temporary officers who adjudicate refugee applications on future circuit rides; develop and implement a plan to deploy officers with national security expertise on circuit rides; and conduct regular quality assurance assessments of refugee application adjudications across RAD and IO. To provide reasonable assurance that USRAP applicant fraud prevention and detection controls are adequate and effectively implemented, we recommend that the Secretaries of Homeland Security and State conduct regular joint assessments of applicant fraud risk across USRAP. We provided a draft of the sensitive version of this report to the Departments of Homeland Security, State, Defense, and Justice, as well as the Office of the Director of National Intelligence, for their review and comment. State and DHS provided written comments stating that they concurred with our recommendations, which are reproduced in full in appendixes III and IV, respectively. In emails, a Director in the Office of the Under Secretary of Defense for Policy at the Department of Defense and the Legislative Liaison Officer at the Office of the Director of National Intelligence stated that these agencies did not have any written comments on our draft report.
State, DHS, the Department of Justice, and the FBI provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees; the Secretaries of Homeland Security, State, and Defense; the Attorney General of the United States; and the Director of National Intelligence. In addition, the report is available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The Central American Minors (CAM) Program was established in November 2014 to promote safe, legal, and orderly migration of certain vulnerable children to the United States and began accepting applications on December 1, 2014. This family reunification program aims to deter children from El Salvador, Guatemala, and Honduras from undertaking a risky journey in an attempt to be reunited with a parent residing in the United States. CAM allows certain parents to request access to the U.S. Refugee Admissions Program (USRAP) for their children who are nationals of one of these three countries and are outside of the United States. Children who are found ineligible for admission as a refugee under USRAP but still at risk of harm may be considered for parole—in general a mechanism by which an individual not otherwise admitted to the United States may be permitted entry into the country on a temporary basis. CAM is jointly run by the Department of State’s (State) Bureau of Population, Refugees, and Migration and the Department of Homeland Security’s (DHS) U.S. Citizenship and Immigration Services (USCIS). To participate in CAM, both parent and child must meet certain qualifying criteria.
Among other criteria, a qualifying parent must be 18 years of age and lawfully present within the United States at the time of application and at the time of admission or parole of the beneficiary (e.g., a qualifying child) to the United States. The qualifying child must be a biological, step, or legally adopted child of the qualifying parent; unmarried; under the age of 21 at the time the qualifying parent initiates the process; and a national of El Salvador, Guatemala, or Honduras. Other family members of the child who meet certain criteria are also eligible to be part of the qualifying child’s application. For example, an accompanying parent who is the legal spouse of the U.S.-based qualifying parent may be eligible to travel with the qualifying child. However, the accompanying parent cannot derive his or her refugee status from the qualifying child and therefore must independently establish that he or she qualifies as a refugee. In July 2016 State and DHS announced that CAM would expand to include additional eligible family members, when accompanied by a qualifying child—(1) the children, regardless of age or marital status, of a U.S.-based qualifying parent; (2) the biological parent of a qualifying child who is not legally married to the U.S.-based lawfully present parent; and (3) the caregiver of a qualifying child who is related to either the U.S.-based lawfully present parent or the qualifying child. State began accepting applications that included these additional family members in November 2016. As shown in figure 8, a qualifying parent initiates the CAM application process in the United States by completing a form (DS-7699, or “Affidavit of Relationship” (AOR)) with the help of a resettlement agency—a State-funded entity that provides support services to refugees once they arrive within the United States. The qualifying parent files an AOR with the assistance of a designated resettlement agency, which forwards the AOR to State.
State is to conduct a preliminary review of the AOR for completeness, including a check that the qualifying parent has provided proof of his or her lawful status, and then provide the case to Resettlement Support Center (RSC) staff. RSC staff are to prescreen the qualifying children according to the standard operating procedures for all USRAP applicants. Shortly after prescreening, RSCs are to collect the child’s DNA to confirm biological relationships between the parent and the qualifying child. State has established policies and procedures specifically for the collection and processing of DNA samples from the qualifying child. We observed RSC staff taking 5 separate DNA samples at the RSC Latin America San Salvador office, during which staff adhered to the established standard operating procedures for DNA collection. Separately, within the United States, State is to notify the parent in the United States to provide DNA samples to a U.S.-based, accredited lab to confirm the biological relationship with his or her claimed child or children. The parent must also cover the costs associated with the DNA testing, but State is to reimburse the costs of the tests if all the claimed biological relationships are supported by the DNA evidence, even if the beneficiary is not ultimately admitted as a refugee or paroled to the United States. The U.S.-based lab reports the results of DNA testing for all cases to State, which then uploads the results into the Worldwide Refugee Admissions Processing System for viewing by USCIS. Although USCIS does not require DNA testing for other eligible family members included on applications (e.g., the children of the qualifying child or the siblings of the qualifying child who are not biologically related to the U.S.-based parent)—citing, among other factors, concerns over the reliability of such testing between, for example, siblings—USCIS officials stated that additional DNA testing will occur for new CAM categories announced in July 2016. 
After prescreening, but before USCIS interviews the child, USCIS’s Refugee Access Verification Unit (RAVU) is to, among other things, take steps to confirm the parent’s lawful status and to review the results of DNA testing, if available. According to USCIS procedures, if RAVU cannot confirm the parent’s status or DNA testing results do not confirm the relationship, USCIS will generally reject the application. According to State data, USCIS rejected or disqualified about 600 (5 percent) of the approximately 12,000 CAM AORs submitted from December 2014 through March 2017. USCIS generally adjudicates CAM applicants as they do all other USRAP applicants. However, according to USCIS policy, and consistent with characteristics of the targeted populations and stated objectives of the program, USCIS officials stated that CAM applicants undergo additional vetting for potential gang affiliations in cases with such indicators. If USCIS concludes that such an applicant is not eligible for admission as a refugee, the applicant may be considered for parole. Vetting CAM applicants for potential gang affiliation. USCIS policy requires that officers place CAM applications on hold if gang affiliation indicators exist. As with all USRAP applicants, CAM program applicants are inadmissible to the United States as refugees if USCIS officers find them to be persecutors of others, to have committed certain crimes, or to be a threat to the security of the United States, among other things. Consistent with USCIS policy, USCIS officers may place a case on hold to do additional research or investigation if the officer determines that the applicant or other case members may be inadmissible due to information provided during the interview (e.g., the applicant has a known or suspected gang affiliation).
For example, to further review CAM applications from Salvadoran applicants identified by USCIS interviewers as having indicators of possible gang affiliation during the USCIS interview, USCIS staff are to contact the Federal Bureau of Investigation (FBI). For CAM applicants in El Salvador, FBI agents stationed in San Salvador are to coordinate with the government of El Salvador in sharing investigative information on gangs. According to FBI officials, if the FBI has any information on the CAM program applicant and potential gang affiliations, they are to forward the information to USCIS officials, who determine whether the information renders the applicant ineligible for the program. FBI officials in San Salvador said that they receive 6 to 10 requests per month from USCIS for any available information related to CAM program applicants. From December 2014 through March 2017, USCIS officers had placed about 14 percent of CAM applicants they interviewed on hold, and in most cases, according to State data, the hold was for USCIS’s headquarters’ review of possible gang affiliations. Parole. CAM program applicants found by USCIS to be ineligible for refugee status in the United States are to be considered on a case-by-case basis for parole, which is a mechanism to allow someone who is otherwise inadmissible to enter the United States on a temporary basis for urgent humanitarian reasons or significant public benefit. USCIS procedures require that, to support an authorization of parole, the qualifying child must assert to the USCIS officer during the interview that he or she has a fear of being harmed, and the objective evidence must demonstrate that the child would face a reasonable possibility of harm if he or she remains in his or her home country.
The interviewing officer has discretion to conditionally approve parole, after consideration of the entire record, and several factors—such as the outcome of the security checks or derogatory information (which may include involvement in gangs or other criminal activity)—could lead to a denial of parole. The final decision regarding parole is made by a USCIS officer after review of medical exam results and an additional review of security checks. Once in the United States, a parolee, unlike a refugee, is not considered to have been admitted into the country, has not been conferred a lawful immigration status, and does not have the benefit of a pathway to U.S. citizenship. Parole under CAM may be authorized for a period of up to 2 years, and parolees are to file their request for re-parole no later than 90 days before the expiration of their authorized parole. Parolees may also apply for employment authorization, but the extent to which they may be eligible for other public benefits is determined in accordance with U.S. law. Parole has been the most common outcome of CAM program applications, but a lower percentage of parolees have arrived in the United States than those granted refugee status through the program. From December 2014, when the program began accepting AORs, through March 2017, USCIS received AORs for about 12,100 individuals. Most of the AORs submitted were for applicants from El Salvador (86 percent). USCIS had made final decisions on about half (6,300) of these applicants, approving 70 percent for parole and granting 29 percent refugee status. According to USCIS officials, more CAM cases receive parole because the generalized violence that applicants experience does not rise to the level of persecution or is not on account of a protected characteristic required to support a refugee determination. 
However, the officials noted that the conditions in El Salvador, Guatemala, and Honduras, and the fact that the children are living without at least one parent in their country of origin, are generally sufficient to demonstrate the fear of harm required to support a parole determination. USCIS officers determined that the remaining 1 percent of applicants did not qualify for refugee status or parole and denied the associated cases. However, a higher percentage of CAM applicants who had received refugee status had arrived in the United States, as of March 2017. Program data on applications submitted from December 2014 through March 2017 show that 63 percent (about 1,100) of all CAM-approved refugees and 33 percent (about 1,500) of CAM-approved parolees had traveled to the United States. Parolees must finance their travel to the United States and do not receive benefits upon arrival, circumstances that, according to State officials, most likely account for the difference in CAM refugee and parolee arrivals. Refugees, in contrast, have access to travel loans and must sign a promissory note to assume responsibility for repaying the cost of travel to the United States. Parolees are also responsible for paying for the costs of medical exams. The U.S. Refugee Admissions Program (USRAP) provides refugees who are of special humanitarian concern from around the world with opportunities for resettlement in the United States. The Departments of State (State) and Homeland Security (DHS) have joint responsibility for the admission of refugees to the United States. Specifically, State’s Bureau of Population, Refugees, and Migration coordinates and manages USRAP and makes decisions, along with DHS’s U.S. Citizenship and Immigration Services (USCIS), on which individuals around the world are eligible to apply for refugee status in the United States. 
Nine State-funded Resettlement Support Centers (RSCs) with distinct geographic areas of responsibility communicate directly with applicants to process their applications, collect their information, conduct a prescreening interview, and prepare applications for adjudication by USCIS. State and its partners—including USCIS—make initial determinations about whether an individual will be accepted into or excluded from USRAP (referred to as program “access”) for subsequent screening and interview by USCIS officers. State has identified three categories of individuals who are of special humanitarian concern and, therefore, can qualify for access to USRAP—Priority 1, Priority 2, and Priority 3. Table 2 describes these priority categories—including the multiple programs that comprise the Priority 2 category—and how applicants within these priorities gain access to USRAP.

In addition to the contact named above, Kathryn Bernet (Assistant Director), David Alexander, Mona Nichols Blake, Eric Erdman, Cynthia Grant, Brian Hackney, Paul Hobart, Eric Hauswirth, Susan Hsu, Thomas Lombardi, Mike McKemey, Erin McLaughlin, Thomas Melito, Clair Peachey, Mary Pitts, Elizabeth Repko, Judith Williams, and Su Jin Yon made significant contributions to this report.
Increases in the number of USRAP applicants approved for resettlement in the United States from countries where terrorists operate have raised questions about the adequacy of applicant screening. GAO was asked to review the refugee screening process. This report (1) describes what State and DHS data indicate about the characteristics and outcomes of USRAP applications, (2) analyzes the extent to which State and RSCs have policies and procedures on refugee case processing and State oversees RSC activities, (3) analyzes the extent to which USCIS has policies and procedures for adjudicating refugee applications, and (4) analyzes the extent to which State and USCIS have mechanisms in place to detect and prevent applicant fraud. GAO reviewed State and DHS policies, analyzed refugee processing data and reports, observed a nongeneralizable sample of refugee screening interviews in four countries in 2016 (selected based on application data and other factors), and interviewed State and DHS officials and RSC staff. From fiscal year 2011 through June 2016, the U.S. Refugee Admissions Program (USRAP) received about 655,000 applications and referrals—with most referrals coming from the United Nations High Commissioner for Refugees—and approximately 227,000 applicants were admitted to the United States (see figure). More than 75 percent of the applications and referrals were from refugees fleeing six countries—Iraq, Burma, Syria, Somalia, the Democratic Republic of Congo, and Bhutan. Nine Department of State (State)-funded Resettlement Support Centers (RSCs) located abroad process applications by conducting prescreening interviews and initiating security checks, among other activities. Such information is subsequently used by the Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services (USCIS), which conducts in-person interviews with applicants and assesses eligibility for refugee status to determine whether to approve or deny them for resettlement. 
a After receiving an application, USRAP partners determine whether the applicant qualifies for a U.S. Citizenship and Immigration Services (USCIS) interview. b USCIS officers may place an application on hold after their interview if they determine that additional information is needed to adjudicate the application. State and RSCs have policies and procedures for processing refugee applications, but State has not established outcome-based performance measures. For example, State's USRAP Overseas Processing Manual includes requirements for information RSCs should collect when prescreening applicants and initiating national security checks, among other things. GAO observed 27 prescreening interviews conducted by RSC caseworkers in four countries and found that they generally adhered to State requirements. Further, State has control activities in place to monitor how RSCs implement policies and procedures. However, State has not established outcome-based performance indicators for key activities—such as prescreening applicants and accurate case file preparation—or monitored RSC performance consistently across such indicators. Developing outcome-based performance indicators, and monitoring RSC performance against such indicators on a regular basis, would better position State to determine whether RSCs are processing refugee applications in accordance with their responsibilities. USCIS has policies and procedures for adjudicating applications—including how its officers are to conduct interviews, review case files, and make decisions on refugee applications—but could improve training, the process for adjudicating applicants with national security concerns, and quality assurance assessments. For example, USCIS has developed an assessment tool that officers are to use when interviewing applicants. GAO observed 29 USCIS interviews and found that officers completed all parts of the assessment. 
USCIS also provides specialized training to all officers who adjudicate applications abroad, but could provide additional training for officers who work on a temporary basis, which would better prepare them to adjudicate applications. In addition, USCIS provides guidance to help officers identify national security concerns in applications and has taken steps to address challenges with adjudicating such cases. For example, in 2016, USCIS completed a pilot that included sending officers with national security expertise overseas to support interviewing officers in some locations. USCIS determined the pilot was successful and has taken steps to formalize it. However, USCIS has not developed and implemented a plan for deploying these additional officers, whose expertise could help improve the efficiency and effectiveness of the adjudication process. Further, USCIS does not conduct regular quality assurance assessments of refugee adjudications, consistent with federal internal control standards. Conducting regular assessments of refugee adjudications would allow USCIS to target training or guidance to areas of most need. a All persons traveling to the United States by air are subject to standard U.S. government vetting practices. State and USCIS have mechanisms in place to detect and prevent applicant fraud in USRAP, such as requiring DNA testing for certain applicants, but have not jointly assessed applicant fraud risks program-wide. Applicant fraud may include document and identity fraud, among other things. USCIS officers can encounter indicators of fraud while adjudicating refugee applications, and fraud has occurred in USRAP programs in the past. 
Because the management of USRAP involves several agencies, jointly and regularly assessing fraud risks program-wide, consistent with leading fraud risk management practices and federal internal control standards, could help State and USCIS ensure that fraud detection and prevention efforts across USRAP are targeted to those areas that are of highest risk. This is a public version of a sensitive report issued in June 2017. Information that the Departments of Defense, Homeland Security, and State deemed to be sensitive is not included in this report. GAO recommends that State (1) develop outcome-based indicators to measure RSC performance and (2) monitor against these measures; USCIS (1) enhance training to temporary officers, (2) develop a plan to deploy additional officers with national security expertise, and (3) conduct regular quality assurance assessments; and State and DHS jointly conduct regular fraud risk assessments. State and DHS concurred with GAO's recommendations.
Under the Direct Loan program, Education issues three main types of loans—Subsidized Stafford, Unsubsidized Stafford, and PLUS Loans. Subsidized Stafford Loans are available only to undergraduate student borrowers with demonstrated financial need. The government subsidizes these loans by not charging borrowers for the interest that accrues while they are still in school and during a 6-month grace period after leaving school. Unsubsidized Stafford Loans are available to both undergraduate and graduate student borrowers, irrespective of financial need. Borrowers are responsible for paying all interest from disbursement to final payoff of the loan. Finally, PLUS Loans are available to graduate students and parents of dependent undergraduates, who must pay all interest on these loans as well. Borrower Rate: The interest rate charged to federal student loan recipients. Borrower rates and caps vary by loan type and borrower characteristics, as seen in table 1 below. Undergraduate student borrowers pay the lowest interest rate, with graduate student and parent borrowers paying somewhat higher rates. There are a variety of repayment plan options available to eligible student loan borrowers. Under the standard repayment plan, borrowers typically repay loans over a period of up to 10 years. Key features of other plans are extended repayment periods and repayment amounts that are linked to borrowers’ income. For instance, under the income-contingent repayment plan, repayment amounts are calculated annually based on the borrower’s adjusted gross income, family size, and total Direct Loan amount. Repayment periods under these plans may be up to 25 years. By extending their repayment periods, borrowers may lower their monthly payments, but may pay more in interest over time. The Direct Loan program has two main categories of costs: administrative costs and subsidy costs. See table 2 below for selected elements of these two types of costs. 
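The trade-off between a lower monthly payment and higher lifetime interest can be sketched with the standard fixed-payment amortization formula. The $10,000 principal and 6.8 percent rate below are invented for illustration; Education's actual repayment calculations differ in detail.

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed payment for a fully amortizing loan (standard
    amortization formula; illustrative only)."""
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # number of payments
    return principal * r / (1 - (1 + r) ** -n)

principal, rate = 10_000, 0.068          # hypothetical loan terms

for years in (10, 25):                   # standard vs. extended plan
    pmt = monthly_payment(principal, rate, years)
    interest = pmt * years * 12 - principal
    print(f"{years}-year plan: ${pmt:,.2f}/month, "
          f"${interest:,.2f} total interest")
```

Under these assumptions, the 25-year plan lowers the monthly payment by roughly 40 percent but nearly triples the total interest paid over the life of the loan.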
Some subsidy elements (like government borrowing costs) raise subsidy costs, while others (like borrower principal repayments) lower them. The funding for Direct Loan administration is generally discretionary, meaning that Congress periodically appropriates the level of funding it deems appropriate. The funding for subsidy costs, on the other hand, is mandatory, meaning that funds are not appropriated annually. Instead, Congress has enacted permanent statutory authority to appropriate funding for loans to eligible borrowers. The majority of the Direct Loan program’s administrative costs are funded by discretionary appropriations to Education’s Student Aid Administration account. The Student Aid Administration account provides funds to administer the Direct Loan program as well as other federal student aid programs including the Federal Family Education Loan Program. Administrative costs support activities such as educating students and families about how to obtain loans; processing loan applications; disbursing loans; administering existing guaranteed loans; servicing loan accounts; and taking action to prevent fraud, waste, and abuse. Costs to manage and collect payments on defaulted loans are also included in administrative costs. Administrative Costs: Loan program expenses that are excluded from subsidy cost calculations, such as costs related to processing loan applications or servicing existing loans. Education uses an activity-based costing model that tracks administrative cost by processes, such as application processing or loan servicing. In the model, Education calculates the full administrative cost of each process by first identifying its direct costs (or costs that can be tied to that specific process) and then allocating additional indirect costs, such as rent, equipment, and maintenance, to each process based on formulas intended to reflect how many indirect resources each process uses. 
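The activity-based allocation described above can be sketched in a few lines: each process carries its direct costs plus a share of an indirect pool, and the resulting full cost is divided by a unit count. All dollar figures, usage shares, and unit counts below are invented; Education's actual model and allocation formulas are more detailed.

```python
# Hypothetical direct costs tied to each process, in dollars
direct_costs = {
    "application processing": 120_000,
    "loan servicing":         300_000,
}
# Assumed fraction of indirect resources (rent, equipment,
# maintenance) each process uses
usage_share = {
    "application processing": 0.3,
    "loan servicing":         0.7,
}
indirect_pool = 100_000                  # total indirect costs to allocate

# Full cost = direct cost + allocated share of the indirect pool
full_costs = {p: direct_costs[p] + usage_share[p] * indirect_pool
              for p in direct_costs}

units = {"application processing": 50_000,   # applications processed
         "loan servicing":         40_000}   # borrowers serviced

for p, cost in full_costs.items():
    print(f"{p}: full cost ${cost:,.0f}, per unit ${cost / units[p]:.2f}")
```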
Costs that include direct and indirect costs are called full costs. Education uses both full and direct costs to generate costs per unit, such as cost per application processed or borrower serviced. As required by the Federal Credit Reform Act of 1990 (FCRA), Education estimates loan subsidy costs annually for inclusion in the President’s budget. For Direct Loans, subsidy costs represent the estimated long-term cost to the government of extending credit over the life of the loan, excluding administrative costs. Subsidy cost estimates that are recorded in a given year are calculated based on the net present value of lifetime estimated cash flows to and from the government that result from providing these loans to borrowers. For Direct Loans, cash flows from the government include loan disbursements and cash flows to the government include repayments of loan principal, interest and fee payments, and recoveries on defaulted loans. The Federal Credit Reform Act of 1990 (FCRA) was intended to improve the measurement of the budgetary costs of federal credit programs. Prior to the implementation of FCRA, credit programs were reported in the budget on a cash basis. Thus, loan guarantees appeared to be free in the budget year of the guarantee, while direct loans appeared to be as costly as grants. As a result, costs were distorted and credit programs could not be compared meaningfully with other non-credit programs and with each other. FCRA recognized that the true cost of a loan or guarantee is not captured by its cash flows in any one year, but by the net value of its cash flows over the life of the loan. This value is known as the “subsidy cost”—that is, the estimated long-term cost to the government of a direct loan or loan guarantee, calculated in current dollars, excluding administrative costs. Administrative costs remain on a cash basis and are excluded from subsidy calculations. 
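The net-present-value idea behind FCRA subsidy costs can be sketched as follows. The cash flows, horizon, and discount rate are hypothetical; Education's Student Loan Model is far more elaborate, but the structure is the same: today's disbursement minus the discounted value of everything the government expects to collect.

```python
def subsidy_cost(disbursement, yearly_inflows, discount_rate):
    """FCRA-style subsidy cost: today's outflow minus the present
    value of cash flows back to the government, discounted at the
    government's borrowing rate. Negative = net subsidy income."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(yearly_inflows, start=1))
    return disbursement - pv

# Hypothetical cohort: $100 disbursed, repaid in 10 equal annual
# collections of $14.20 (principal plus interest at the borrower
# rate), while the government borrows at an assumed 3 percent.
cost = subsidy_cost(100.0, [14.2] * 10, 0.03)
print(f"subsidy cost per $100 disbursed: {cost:+.2f}")
```

A negative result, as in this scenario, corresponds to net subsidy income; raising the assumed discount rate high enough flips the same cash flows to a positive subsidy cost. For the government to break even overall, that income would also have to cover administrative costs, which FCRA keeps outside the calculation.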
Subsidy costs are influenced by a variety of variables, including government borrowing costs, the interest rate charged to student loan recipients, how quickly those recipients repay their loans, and how many ultimately default. A positive subsidy cost estimate indicates that the government anticipates a net cost, while a negative subsidy cost estimate indicates that the government anticipates generating net subsidy income, not counting administrative costs. To determine the overall cost of the Direct Loan program, both subsidy and administrative costs must be considered. For the government to break even on Direct Loans, net subsidy income should be equal to administrative costs. Subsidy Cost: The estimated long-term cost to the government of providing a loan, expressed in current dollars, and excluding administrative costs. Education calculates subsidy costs separately for each group of loans made in a particular fiscal year—referred to as a loan cohort. To estimate subsidy costs, Education has developed a Student Loan Model that contains a variety of assumptions. These assumptions are reflected through variables such as how quickly borrowers will repay their loans (and, thus, how much interest the government will collect), how many borrowers will default, and how successful default collection activities will be. Education annually reestimates subsidy costs for each loan cohort until all loans in the cohort have been repaid, which may take decades. Reestimates take into account actual loan performance as well as changes in assumptions about future performance, such as how many borrowers will default, or how many will participate in extended repayment plans. Reestimates may result in increases or decreases in subsidy cost estimates. Loan Cohort: A group of loans made in a particular fiscal year. 
In addition to Education’s subsidy cost estimates, CBO also estimates subsidy costs to identify the financial impact of legislation and inform budget projections, among other purposes. CBO and Education both estimate subsidy costs for the Direct Loan program following the requirements of FCRA; however, they use different estimation methodologies and assumptions. Officials from both organizations pointed to a number of key areas where their assumptions differed. See table 3 for examples of key differences. Officials cited a number of reasons for differences in these particular assumptions, including differences in economic forecasts and professional judgment when developing forecasting methodologies. Because of such differences, CBO and Education’s cost estimates for the program are not comparable. The Department of Education reported that full administrative costs (i.e., costs incorporating both direct and indirect administrative costs) for the Direct Loan program increased by $550 million from fiscal year 2007 to fiscal year 2012, to total $864 million in fiscal year 2012. These Direct Loan administrative costs represent 65 percent of the $1.3 billion in new budget authority made available to the Student Aid Administration account in fiscal year 2012, while administrative costs for other loan, grant, and loan guarantee programs and activities made up the remainder. Loan servicing, which includes activities related to processing loan payments and maintaining borrower information, is the largest category of reported administrative costs, comprising 63 percent of total administrative costs in fiscal year 2012. Figure 1 below shows total Direct Loan Program costs by category from fiscal year 2007 to fiscal year 2012. Total Direct Loan administrative costs reported by the Department of Education rose from $314 million to $864 million—a 175 percent increase—from fiscal year 2007 to fiscal year 2012. 
Loan servicing costs showed the greatest dollar increase at over $300 million (152 percent) during that time period. Although other categories—application processing, school oversight and monitoring, and originations and disbursements—showed smaller dollar increases, the percentage growth of these categories ranged from about 270 percent to about 440 percent. Only default collections, including the management and collection of defaulted loans and assistance to defaulted borrowers, stayed generally flat in total dollars. Education officials stated that total administrative costs are largely driven by loan volume and the number of borrowers and, therefore, costs have increased as the number of Direct Loans has increased. The reported number of outstanding Direct Loans increased over 300 percent, from 19 million to over 88 million, from fiscal year 2007 through fiscal year 2012. As shown in figure 2 below, the largest loan volume increases were in Subsidized and Unsubsidized Stafford loans. Several factors contributed to the increase in the number of Direct Loans. For example, beginning in 2008, changes in the student loan market led numerous schools to transition from the Federal Family Education Loan program to the Direct Loan program. Additionally, the SAFRA Act (the Student Aid and Fiscal Responsibility Act) terminated the authority to make or insure new loans under the Federal Family Education Loan program after June 30, 2010, with subsequent federal student loans originated under the Direct Loan program. Education officials also stated that the economic downturn in 2008 coincided with an increase in student loan volume as individuals returned to school. While total reported administrative costs increased from fiscal year 2007 to fiscal year 2012, cost per borrower and other unit cost measures remained stable or fell. Unit costs are a measure Education uses to track costs such as cost per loan origination, cost per borrower serviced, and cost per application processed. 
See table 4 for a description of these unit cost measures. According to Education officials, increased loan volume resulted in a decrease in many unit costs. For example, total loan servicing costs for all programs supported by the Student Aid Administration account increased from fiscal year 2007 to fiscal year 2012; however, the number of borrowers serviced for these programs also increased from 9.7 million in fiscal year 2008 to 29.7 million in fiscal 2012. As a result, the annual servicing cost per borrower decreased slightly during that time, remaining at roughly $25 per borrower. While the overall flat or downward trend persisted over most of the years we studied, Education reported a slight increase in unit costs from fiscal year 2011 to 2012, attributable to such changes as lower volumes of applications and originations. Recent changes in loan servicing contracts, combined with other factors, have increased uncertainty about what servicing costs per borrower will be in coming years. Prior to 2009, all loans were serviced under a single contract, referred to as the Common Services for Borrowers (CSB) contract. Under the CSB contract, Education paid servicers based on loan volume, paying a smaller fee per borrower as the number of borrowers serviced increased. In order to accommodate increasing loan volume, Education began to contract with additional loan servicers in 2009. The new contracts use a different pricing structure to encourage servicers to keep more borrowers in a repayment status. For example, under one such contract, the servicers receive the highest rate for borrowers who are in-grace, in current repayment, or delinquent for 30 days or less, and the lowest rate for borrowers who have been delinquent for 270 days or more. Education officials stated that under the new contracts, they may pay more per borrower but may also keep more borrowers in repayment. 
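The tiered pricing structure described above can be sketched as a function of delinquency status. The tier boundaries follow the contract description (highest rate through 30 days delinquent, lowest at 270 or more days); the dollar amounts and the in-school rate are invented for illustration.

```python
def monthly_servicing_fee(days_delinquent, in_school=False):
    """Hypothetical per-borrower monthly fee; tier boundaries follow
    the contract structure described above, dollar amounts invented."""
    if in_school:
        return 1.05                  # cheaper "in-school" status
    if days_delinquent <= 30:        # in-grace, current, or <= 30 days
        return 2.85                  # highest tier rewards repayment
    if days_delinquent < 270:
        return 1.90                  # middle tier
    return 0.50                      # lowest tier: 270+ days delinquent

portfolio = [0, 0, 15, 120, 300]     # days delinquent, five borrowers
total = sum(monthly_servicing_fee(d) for d in portfolio)
print(f"monthly servicing cost for this portfolio: ${total:.2f}")
```

Because the fee falls as delinquency deepens, a servicer's revenue under this structure is highest when it keeps borrowers current, which is the incentive the new contracts were designed to create.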
Education also recently reported that the large portfolio of Direct Loans originated after the move away from the Federal Family Education Loan Program is in the process of maturing from the cheaper “in-school” servicing cost status to the more expensive “in-repayment” servicing cost status. As a result of these changing circumstances—a new servicing payment structure, new servicers collectively managing an increasing volume of loans, and the maturing of the Direct Loan portfolio—whether future servicing costs per borrower will increase or decrease is uncertain, according to Education officials. Reestimate: Annual recalculation of estimated lifetime loan subsidy costs for each cohort, incorporating updated information on actual loan performance and revised assumptions about future cash flows. As of the end of fiscal year 2013, it is estimated that the government will generate about $66 billion in subsidy income from the 2007 to 2012 loan cohorts as a group. However, current estimates for this group of loan cohorts are based predominantly on forecasted cash flow data derived from assumptions about future loan performance. As more information on actual cash flows for these loans becomes available, subsidy cost estimates will change. As a result, it is unclear whether these loan cohorts will ultimately generate subsidy income, as currently estimated, or whether they will result in subsidy costs to the government. This will not be known with certainty until all cash flows have been recorded after loans have been repaid or discharged—which may be as many as 40 years from when the loans were originally disbursed. As seen in figure 3 below, overall subsidy rates—subsidy costs as a percentage of loan disbursements—are generally estimated to decrease across the 2007 to 2012 loan cohorts. 
Moreover, later loan cohorts in this range are estimated to generate more subsidy income than the earlier loan cohorts, as indicated by the increasingly negative subsidy rates. Our analysis of how the various components of subsidy costs differ across cohorts shows that, relative to the financing costs, the defaults, fees, and all other cost components of the subsidy rate were relatively stable across the 2007 to 2012 loan cohorts. For example, these costs were estimated to vary between 1 and 2.4 percentage points across cohorts, while the financing costs were estimated to vary by almost 19 percentage points across cohorts. As seen in figure 3, the financing component of the subsidy cost is estimated to generally decrease across cohorts. Financing costs are related to the interest payments borrowers make on Direct Loans and the government’s cost of borrowing to finance its lending. Past GAO work has found that the difference, or “spread,” between the borrower interest rate and the government’s cost of borrowing was a key factor in determining whether there is a positive or negative subsidy for Direct Loans. As the spread increases, so does the difference between the interest payments Education receives from borrowers and the interest payments Education makes to Treasury. The spread between the borrower interest rate and the government’s cost of borrowing varies by loan type, because different borrower rates are established by law for subsidized Stafford, unsubsidized Stafford, and PLUS loans. Figure 4 compares borrowers’ rates for the various Direct Loan types, including the average borrower rate weighted by loan type (or “weighted average”), with the government cost of borrowing. Specifically, as seen in figure 4, Direct Loan borrower rates decreased for subsidized Stafford loans as a result of statutory changes made during this time period. 
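The weighted average borrower rate and the spread over the government's cost of borrowing can be sketched as follows. The loan-mix shares, borrower rates, and Treasury rate are all invented; they are not actual fiscal-year figures.

```python
# Invented loan mix: (share of disbursements, borrower interest rate)
loan_mix = {
    "subsidized Stafford":   (0.40, 0.045),
    "unsubsidized Stafford": (0.45, 0.068),
    "PLUS":                  (0.15, 0.079),
}

# Weighted average borrower rate across all Direct Loan types
weighted_rate = sum(share * rate for share, rate in loan_mix.values())
gov_borrowing_cost = 0.028           # assumed Treasury rate
spread = weighted_rate - gov_borrowing_cost

print(f"weighted average borrower rate: {weighted_rate:.2%}")
print(f"spread over government borrowing cost: {spread:.2%}")
```

Because the weighted borrower rate exceeds the assumed borrowing cost, this hypothetical cohort would collect more in borrower interest than it pays to Treasury, pushing the subsidy estimate toward income rather than cost.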
The spread between the government’s cost of borrowing and the weighted average borrower rate (across all Direct Loan types) increased between 2007 and 2009, when it peaked. During this time, the government’s cost of borrowing fell more sharply than borrower interest rates. In instances where the borrower rate is greater than the government’s borrowing costs, as is the case between 2007 and 2012, Education would be expected to receive more in interest payments from borrowers than what it pays in interest to Treasury, increasing the likelihood that revenues will exceed costs for the loan. Education’s estimates of lifetime loan subsidy costs have varied over time based on updated information recognized during the reestimate process. Through the reestimate process, subsidy cost estimates are updated for each loan cohort to account for information on actual loan performance and the government’s cost of borrowing. The 2007 to 2012 loan cohorts have experienced both downward and upward adjustments to the estimated subsidy costs over time due to these reestimates. Each year, the estimated lifetime subsidy cost for a cohort will change to reflect the most recent reestimate information. See textbox for more information on the reestimate process.

Reestimate Process
Reestimates reflect changes related to both interest rate assumptions and non-interest rate assumptions as actual cash flows are recorded after loans are disbursed. For example, reestimates show the difference between the estimated government borrowing costs (discount rate) when the original subsidy rate is calculated, and the final government borrowing costs that are determined when a loan cohort is substantially disbursed. OMB guidance states that government borrowing costs are updated through reestimates when a cohort is at least 90 percent disbursed. Because of the timing of disbursement for Direct Loans, two different fiscal years of interest rate data feed into the final government borrowing costs used for the reestimates. 
For example, actual data on the final government borrowing costs for the 2007 loan cohort would be available in fiscal year 2009. Once the government borrowing costs have been determined, the rate is set for the life of the cohort. Reestimates also reflect differences in actual versus estimated loan performance. For example, when data get updated on the number of loans that were cancelled due to death, the information is included in technical reestimates. The effects of changes in this variable on the overall cost estimate vary depending on how the mortality rate changes from year to year. Changes in other variables, such as default rates and borrower participation in loan repayment plans, also affect assumptions used for cost estimates. As shown in figure 5, there can be wide variations in the reestimated subsidy rates and, consequently, the estimated subsidy costs, for the same cohort over time. For example, the 2008 loan cohort was estimated to generate as much as $9.09 in subsidy income per $100 of loan disbursements based on the reestimate information published in the fiscal year 2011 President’s budget. However, in the estimates published in the fiscal year 2012 President’s budget, the same loan cohort was expected to generate a small cost of 24 cents per $100 of loan disbursements, based on updated information. This represents a swing of $9.33 per $100 of loan disbursements. Similarly, the original subsidy rate estimate for the 2009 loan cohort indicated that the government would generate a subsidy income of almost $15 per $100 of loan disbursements. However, the revised estimate published in the fiscal year 2011 President’s budget indicated that the subsidy income the government was expected to generate dropped by about 74 percent, to $4 per $100 of loan disbursements. Figure 5 shows the original subsidy rate and subsequent reestimated subsidy rates for the 2007 to 2012 loan cohorts over time. 
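The swing arithmetic above scales directly with cohort size, which is why small per-$100 revisions matter. The sketch below uses the 2008 cohort's reported rates; the $80 billion cohort size is invented to show how a per-$100 rate change translates into dollars.

```python
def subsidy_dollars(rate_per_100, disbursements):
    """Dollar subsidy cost implied by a subsidy rate expressed per
    $100 of disbursements (negative = subsidy income)."""
    return rate_per_100 / 100 * disbursements

cohort_disbursements = 80e9          # invented $80 billion cohort
early = subsidy_dollars(-9.09, cohort_disbursements)   # income estimate
later = subsidy_dollars(0.24, cohort_disbursements)    # small cost estimate
print(f"reestimate swing: ${later - early:,.0f}")
```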
Volatility in subsidy cost estimates for a given cohort is generally expected to decrease over time. An Education official explained that estimates for loan cohorts experience more volatility in early reestimates because less actual data, as opposed to forecasted data, are available to inform the estimates. As more actual data become available for a cohort, Education expects to see smaller changes in the reestimates over time. For example:

Earlier cohorts with more actual data to inform the reestimate have become less volatile: The estimates for the 2007 and 2008 cohorts initially showed large downward adjustments in the reestimated subsidy rates. In general, these cohorts currently are estimated to have a subsidy rate close to zero, and the reestimated subsidy rates have not varied much in the most recent reestimate years.

Later cohorts with less actual data to inform the reestimate are currently more volatile: The 2009 through 2011 cohorts exhibit the most volatility over the years for which reestimates are available, in that recent reestimated subsidy rates for these cohorts have exhibited larger changes than the 2007 and 2008 cohorts.

Direct Loan costs fluctuate according to changes in certain variables, with varying levels of sensitivity. In particular, Direct Loan costs are sensitive to changes in the government’s cost of borrowing, which changes for each cohort of loans depending on economic conditions and the characteristics of the cohort. This sensitivity, coupled with updated information on loan performance that results in fluctuations in the cost estimates themselves, means that total Direct Loan costs cannot be known with certainty until actual data are available at the end of the loans’ life cycle—a process which takes decades. To illustrate this, we conducted analyses that provide practical examples of how costs change in response to certain variables. 
Specifically, we tested the discount rate, which represents the cost of Education borrowing funds from Treasury to finance the Direct Loan program; the percentage of subsidized Stafford loans; the percentage of loans in income-contingent repayment plans; and the proportion of loans that Education considers to be at high risk of default. We tested these variables under favorable and less-favorable conditions, from the perspective of government costs. Specifically, favorable conditions would likely reduce government costs, while less favorable conditions would increase costs to the government. By comparing the percent change in costs to the percent change in the variable, we determined how sensitive the costs were to each variable. We refer to this ratio as the sensitivity factor (see table 6). In the table, the sensitivity factor shows the percent change in costs that would be associated with a one-percent change in the indicated variable. Of the variables we selected, Direct Loan costs were most sensitive to changes in the government’s cost of borrowing in both the favorable and less favorable scenarios, as shown by the sensitivity factors in table 6. For example, in the favorable scenario, a 1 percent increase in the government’s cost of borrowing was associated with a 19.7 percent increase in subsidy costs. In contrast, costs were less sensitive to the loan risk category, percentage of subsidized Stafford loans, and percentage of income-contingent repayment plans in both the favorable and less-favorable cases. For example, in the favorable scenario, a 1 percent decrease in income-contingent loans was associated with a 2.9 percent decrease in costs. We found that, in both the favorable and less favorable scenarios, subsidy costs were not very sensitive to changes in income-contingent repayment plans. 
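The sensitivity factor described above is simply a ratio of percent changes. Below is a minimal sketch using the figures quoted in the text; the function name is ours, not Education's.

```python
# Sketch of the sensitivity factor: the ratio of the percent change in
# subsidy costs to the percent change in a variable.

def sensitivity_factor(pct_change_in_cost, pct_change_in_variable):
    """Percent change in cost associated with a 1 percent change in the variable."""
    return pct_change_in_cost / pct_change_in_variable

# A 1 percent rise in the government's cost of borrowing associated with
# a 19.7 percent rise in subsidy costs gives a factor of 19.7.
print(sensitivity_factor(19.7, 1.0))   # 19.7

# A 1 percent fall in income-contingent loans associated with a 2.9
# percent fall in costs gives a factor of 2.9.
print(sensitivity_factor(-2.9, -1.0))  # 2.9
```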
(Table 6 summarizes sensitivity factors for government borrowing costs (discount rate), the percentage of income-contingent repayment plans, and the proportion of each loan risk group, with factors ranging from -13.0 for an increased percentage of the lowest risk category to 0.4 for an increased percentage of the highest risk category.)

Because Direct Loan costs are particularly sensitive to the government’s cost of borrowing, total costs associated with the program will vary accordingly. Further, since government borrowing costs change, the eventual spread between those costs and the borrower interest rate will not be known until the cohort is almost fully disbursed. As a result, the interest income from borrowers may or may not offset the government’s cost of borrowing at any given point in time. Since the total costs associated with Direct Loans, including administrative and subsidy costs, are in flux until actual data are recorded through the end of the loans’ life cycle, the point at which the government covers loan costs without generating additional revenue—known as the breakeven point—may also change throughout the life cycle of the loans until actual information is available. As a result, borrower interest rates that are needed to cover Direct Loan costs at one time may not cover costs at another time. If the borrower rates are set to offset the expected government costs according to initial estimates, because costs fluctuate, it is likely that the cohort will not ultimately break even over the life of the loans. For example, if costs are overestimated, borrower rates will be set too high and the Direct Loans will generate net income to the government. Likewise, if costs are underestimated, borrower rates will be set too low and the Direct Loans will generate net costs to the government.

Index: The base market rate to which student loan interest rates are pegged.
Mark-up rate: The percentage-point increase over the base rate that students are charged. 
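As a rough illustration of the spread mechanics discussed above, the sketch below nets a borrower's interest payments against the government's borrowing cost on the same balance. The balance and all rates are hypothetical, chosen only to show the direction of the effect.

```python
# Hypothetical illustration of the spread between the borrower interest
# rate and the government's cost of borrowing. The balance and rates are
# invented for illustration; they are not Education's actual figures.

def annual_interest_spread(balance, borrower_rate, treasury_rate):
    """Interest received from borrowers minus interest paid to Treasury."""
    return balance * (borrower_rate - treasury_rate)

# When the borrower rate (6.8%) exceeds the government's borrowing cost
# (3.0%), the government nets positive interest income on the balance;
# if borrowing costs rise above the borrower rate, the spread turns negative.
income = annual_interest_spread(100_000, 0.068, 0.030)
print(income)  # roughly 3800 per year on a $100,000 balance
```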
To determine whether or not the conditions that would break even for one cohort would also break even for another cohort under different circumstances, we experimented with certain aspects of the borrower interest rate for two separate cohort years. Specifically, we altered the index (the base market rate that student loan interest rates are pegged to), the mark-up rate (the percentage-point increase over the base rate that students are charged), and the differences in the mark-up rates among loan types. We looked at how these changes to the borrower rates would affect total Direct Loan costs, taking into account both administrative and subsidy costs. We identified two potential pathways to temporarily break even for the 2014 cohort over the life of the loans, though these are only effective using currently available cost estimates, which will change over time. Using the 10-year Treasury Note as the index for all Direct Loan borrowers without any additional mark-up—which means there would be no differences in the interest rate borrowers pay for different loan types—the government could approximately break even for the 2014 cohort. See figure 6 below. In this case, setting a flat interest rate for borrowers of all Direct Loan types is a notable divergence from current law, which provides lower interest rates to eligible undergraduates than to graduate students and parents. While there is no mark-up above the 10-year Treasury Note in this scenario, the government is still able to cover its estimated administrative and subsidy costs because borrower interest rates (in this case, the index) are higher than the government’s cost of borrowing. 
Using the 5-year Treasury Note as the index instead of the 10-year Treasury Note, there can be slight differences in the interest rates borrowers pay for different loan types (unlike in the previous scenario where rates were the same for all loan types), but in order to approximately break even, the differences in the mark-up for each loan type would need to be reduced by one half of the current rates set for the 2014 cohort. Additionally, undergraduate Stafford loans, which typically have the lowest interest rates, would need to be set at the same level as the 5-year Treasury Note, with no mark-up. Importantly, while changing the index and mark-up rates helped achieve a temporary breakeven point for the 2014 cohort, the same borrower rate scenarios did not yield the same results when applied to the 2019 cohort. In other words, the breakeven methodologies used for the 2014 cohort were not effective for the 2019 cohort. A difference in outcome for these two cohorts emerges because Direct Loan costs are sensitive to variables that are projected to look very different for 2019 than they did for 2014. Specifically, while interest rates for the government’s cost of borrowing were unusually low for 2014, they are projected to more than double by 2019, beginning to approximate pre-recession rates. This decreases the spread between the borrower interest rate and the government’s cost of borrowing, and therefore decreases the likelihood that the government will generate income on the loans. Indeed, when we changed the index and mark-up rates for the 2019 cohort in the same manner that approximated a breakeven point for the 2014 cohort, the resulting estimates show the government incurring costs that would not be covered by revenues from Direct Loans. See figure 6. 
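One way to see why a fixed index-and-mark-up rule cannot break even for both cohorts is to compare the borrower rate against the government's combined borrowing and administrative costs under each cohort's conditions. The sketch below is a simplified, hypothetical rendering of that comparison: all rates are invented, and they only mimic the pattern described above (borrowing costs roughly doubling between the 2014 and 2019 cohorts); the real analysis runs through Education's Student Loan Model.

```python
# Hypothetical sketch of why a fixed index-and-mark-up rule breaks even
# for one cohort but not another. All rates below are invented.

def net_rate(index, markup, borrowing_cost, admin_cost_rate):
    """Borrower rate minus combined government costs, in percentage points.
    Zero means breakeven; negative means a net cost to the government."""
    return (index + markup) - (borrowing_cost + admin_cost_rate)

# 2014-like conditions: low borrowing cost, the rule roughly breaks even.
print(net_rate(index=3.0, markup=0.0, borrowing_cost=2.5, admin_cost_rate=0.5))  # 0.0

# 2019-like conditions: higher borrowing cost, the same rule loses money.
print(net_rate(index=3.5, markup=0.0, borrowing_cost=5.0, admin_cost_rate=0.5))  # -2.0
```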
While it would appear that borrower interest rates could be reset frequently to adjust to the continually changing spread between the government’s cost of borrowing and the borrower interest rate that drives costs, the fundamental issue of not having full cost information until the end of the loan repayment period means that the breakeven point itself cannot be accurately predicted beforehand. Accordingly, in part because Direct Loan costs are sensitive to variables that change over time, borrower rates cannot be set to reliably enable the program to break even over the life of a loan cohort before those variables and cost estimates stabilize. Given the role federal student loans play in furthering access to postsecondary education, the federal government has a stake in preventing loan costs from posing an unnecessary burden to borrowers. Understanding how Direct Loan costs are estimated and change over time—and what factors drive those changes—is instructive for setting borrower interest rates. However, available information illustrates the difficulties of accurately predicting what these program costs will be, and how much borrowers should ultimately be charged to achieve a particular outcome. Specifically, fluctuations in the actual and expected costs of the student loan program over time make it impractical to establish a particular borrower interest rate that would consistently break even. In addition, the policy changes needed to influence the costs of the program could conflict with other policy goals. For example, setting a flat interest rate for borrowers of all Direct Loan types may help temporarily approximate a breakeven point, but it would be a considerable shift from current law, which provides a lower interest rate to eligible undergraduates. 
Similarly, making frequent changes to the borrower interest rate could help program costs more closely match revenue in the short term, but it may confuse potential borrowers and complicate efforts to make the program transparent to students. Moreover, it may be difficult to anticipate how any future policy changes might affect program costs, as shifting economic conditions and cost reestimates continually move the breakeven target. Understanding the uncertainties and substantial challenges associated with estimating student loan costs may help inform Congress as it considers how best to promote access to postsecondary education. We provided a draft of the report to Education for review and comment. In its comments, reproduced in appendix III, Education agreed with our findings. Education also described recent changes to federal student aid programs and noted the importance of the Direct Loan program, as well as the agency’s focus on promoting greater college affordability and access. In addition to these general comments, Education provided us with technical comments that we incorporated, as appropriate. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
This report addresses (1) how the costs of administering the Direct Loan program have varied in recent years, (2) how estimated subsidy costs associated with the Direct Loan program have varied in recent years, and (3) how changes in different variables influence the overall cost of the Direct Loan program and the borrower interest rate needed to cover those costs. To address these objectives, we reviewed relevant federal laws and guidance, as well as past GAO reports related to the Direct Loan program, administrative costs, and subsidy costs. We interviewed officials at Education and the Congressional Budget Office (CBO) regarding key issues related to Direct Loan costs. In addition, we reviewed data from Education on its Direct Loan administrative costs and analyzed data on subsidy costs for fiscal years 2007 to 2012, including data generated from Education’s Activity Based Costing (ABC) model and Education’s student loan cash flow model. We assessed the reliability of data on Direct Loan administrative costs to evaluate trends in costs by interviewing agency officials knowledgeable about the data and reviewing documentation on the ABC model. In addition to aggregate cost data, Education calculates both direct and full unit costs using the ABC model. In order to report costs that include indirect costs, we used full unit cost data. Education officials noted that, in some cases, these data cannot isolate costs specific to the Direct Loan program from costs related to other Education loan programs, because such data have not been necessary for management purposes. We determined that the data were sufficiently reliable for the limited purposes of this report. We have noted the limitations in the report where they were relevant. 
For subsidy costs, we analyzed data on Education subsidy cost estimates and reestimates for the 2007 to 2012 Direct Loan cohorts that were reported in the Federal Credit Supplement (part of the annual President’s Budget) to understand trends in cost estimates and discern key cost drivers. Additionally, we analyzed data generated by Education’s student loan cash flow model used to estimate subsidy costs (referred to in this report as the Student Loan Model) to understand trends in subsidy cost components and cost drivers. We assessed the data’s reliability by reviewing relevant documentation, comparing information to published data sources, and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. Our work did not include an assessment of the feasibility of actions that could be taken to reduce the impact of fluctuations in Direct Loan cost estimates, such as using variable borrower interest rates that change over the life of the loans, or the use of borrower rebates to offset any subsidy income generated from the loans. We conducted this performance audit from August 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Education uses the Student Loan Model to estimate future cost and revenue cash flows by loan cohort. Data from the Student Loan Model are input into OMB’s Credit Subsidy Model which, in accordance with FCRA, calculates the net present value of the annual cash flows of a given loan cohort, thereby obtaining a measure (subsidy rate) of the costs for each loan cohort. 
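Under FCRA, the subsidy rate compares the net present value of a cohort's cash flows with the amount disbursed. The sketch below is a highly simplified version of that calculation, with hypothetical cash flows and discount rate; the actual Student Loan Model and OMB Credit Subsidy Model use far more detailed inputs.

```python
# Highly simplified sketch of an FCRA-style subsidy rate: discount a
# cohort's projected annual cash receipts to present value and compare
# with the amount disbursed. All inputs here are hypothetical.

def subsidy_rate(disbursement, annual_receipts, discount_rate):
    """Net present value of government cost as a fraction of disbursements.
    A negative result means the cohort is estimated to generate income."""
    npv_receipts = sum(cf / (1 + discount_rate) ** t
                       for t, cf in enumerate(annual_receipts, start=1))
    return (disbursement - npv_receipts) / disbursement

# A $100 loan repaid in ten annual installments of $13, discounted at 3%:
rate = subsidy_rate(100, [13] * 10, 0.03)
print(round(rate, 3))  # -0.109: negative, i.e., estimated subsidy income
```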
According to an Education official, the model uses a set of over 20 assumptions, including loan volume, defaults, and discount rates. For each assumption, the model contains a table with multiple possible values. The National Student Loan Data System (NSLDS), which processes and maintains data pertaining to Title IV programs, is a major source of the data for the assumptions used in the Student Loan Model. Data are pulled from a 4-percent random sample of loans from the NSLDS, and the data are calibrated by Education to generate assumptions about future behavior. We assessed the reliability of NSLDS data by reviewing existing information about the data and the system that produced them. We assessed the reliability of the Student Loan Model by reviewing model documentation and interviewing knowledgeable agency officials about the system. We determined that the data were sufficiently reliable for the purposes of this report. Education used the Student Loan Model to run various scenarios and generate data for GAO for the sensitivity and breakeven analyses described below. We worked with internal experts and Education to develop sensitivity scenarios which altered key assumptions in the Student Loan Model to illustrate how changes in certain variables could affect the overall cost of the Direct Loan program. The four key variables we altered were: (1) discount rates, (2) percentage of subsidized Stafford loans, (3) percentage of risk category, and (4) percentage of income contingent repayment plans. We selected the first three factors because Education officials identified them as being among the major factors that affect subsidy costs. We selected the percentage of income contingent repayment plans because Education identified it as the type of repayment plan most likely to face shifts in participation over time, particularly as more people become eligible for various income contingent repayment plans.

1. Discount rate: This is the collection of interest rates used to calculate the present value of cash flows that are estimated over a period of years. This rate also represents Education’s cost of borrowing from Treasury.
2. Percentage of subsidized Stafford loans: Subsidized Stafford Loans are available only to undergraduate student borrowers with demonstrated financial need. The government subsidizes these loans by not charging student borrowers for the interest that accrues while they are still in school and during a 6-month grace period after leaving school.
3. Percentage of risk category: These categories were created using school type and academic level to determine the potential risk of default, which can lead to higher subsidy costs for the loans.
4. Percentage of income contingent repayment plans: This is the percentage of loans that are structured to prorate the repayment plan based upon the borrower’s income. In the Student Loan Model, there is one variable that encompasses all repayment plans based on income. These plans may result in increased subsidy costs.

We tested the sensitivity of costs under two types of scenarios:
1. Favorable scenario: This scenario used past variable rates that represented more favorable conditions in terms of reducing costs to the government. For example, because income contingent repayment plans have a higher subsidy rate than other types of repayment plans, we used the lowest percentage of income contingent repayment plans that existed from 2007-2012 in this scenario.
2. Less-favorable scenario: This scenario used past variable rates that represented less favorable conditions in terms of increasing cost to the government. For example, because a higher discount rate increases Education’s cost of borrowing from Treasury, we used the highest discount rate that existed from 2007-2012 in this scenario.
See table 7 for the variable rates used in each scenario. 
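The favorable and less-favorable scenarios described above amount to choosing, for each variable, the historical value that most reduces or most increases government costs. A minimal sketch of that selection follows; the 2007-2012 values are invented for illustration and are not GAO's or Education's actual inputs.

```python
# Sketch of selecting favorable and less-favorable scenario inputs from
# historical values: for each variable, take the historical value that
# reduces (favorable) or increases (less favorable) government costs.
# The 2007-2012 values below are invented for illustration.

history = {
    # higher values of both variables increase government costs
    "discount_rate": [0.049, 0.041, 0.033, 0.027, 0.025, 0.021],
    "pct_income_contingent": [0.06, 0.07, 0.08, 0.10, 0.12, 0.14],
}

favorable = {name: min(values) for name, values in history.items()}
less_favorable = {name: max(values) for name, values in history.items()}

print(favorable["discount_rate"], less_favorable["discount_rate"])  # 0.021 0.049
```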
The scenarios forecast the overall cost of the Direct Loan program for each variable alteration using the baseline of the fiscal year 2014 cohort breakeven scenario (see below for a description of the breakeven scenarios). The baseline scenario forecasted costs for the Direct Loan program over the full life of the 2014 cohort, and included cash flow projections 40 years into the future. Using this baseline, we altered the variable rates to reflect historical values between 2007 and 2014, except for the loan risk category, to see how the cost for the loan cohort was affected. We also worked with Education to change specific loan parameters in the Student Loan Model with the purpose of determining whether the government could cover Direct Loan costs without generating additional income under certain conditions. This is referred to as a breakeven analysis. The following steps were taken to conduct this analysis: First, the baseline subsidy rates for the 2014 cohort of Direct Loans were calculated, excluding loan consolidations, using the index and mark-up rates and caps as outlined by current law (see table 8). We selected 2014 because it was the next cohort to be disbursed at the time of the simulation (September 2013) and allowed us to use forecasted data for the hypothetical simulations. The agency provided a breakout by loan type (Subsidized Stafford, Unsubsidized Stafford, PLUS). An administrative cost rate is included in this analysis.

Scenario A: The interest rate mark-up was calculated that would be necessary to get the federal subsidy income for the 2014 cohort to cover estimated administrative costs (i.e., break even), still excluding consolidations. This scenario used the interest rate caps for each loan type and the 10-year Treasury Note for the index, as designated under current law. Separately, two breakeven analyses were conducted for the 2014 cohort with altered inputs. In both cases, caps were kept the same as provided under current law. 
Scenario B: 5-year Treasury Note with differentials in mark-up rates between loan type as designated by the law.
Scenario C: 5-year Treasury Note with differentials reduced by one half from the values designated in the law.
Scenario D: After finding the breakeven rates for 2014, the same treatment of mark-up rates and differentials was applied to the 2019 cohort using the 10-year Treasury Note, still accounting for administrative costs. We selected 2019 to show what the resulting costs would be in a future year with different economic conditions (i.e., different discount rates).
Scenario E: The parameters for the 2019 cohort were then changed to use the 5-year Treasury Note and reduced the differentials from current law by one half (because those differentials helped approximate a break even under the 2014 cohort estimate) to illustrate how this affects costs.
Below is a summary of the loan parameters used for each breakeven scenario.

In addition to the contact named above, Kris Nguyen, Assistant Director; Marcia Carlsen; Carol Henn; Elizabeth Gregory-Hosler; Amy Moran Lowe; Ellen Phelps Ranen; Amrita Sen; Srinidhi Vijaykumar; and Rebecca Woiwode made significant contributions to this report. Also contributing to this report were James Bennett, Deborah Bland, Jessica Botsford, Dan Concepcion, Holly Dye, Robert Dacey, Gary Engel, Cole Haase, Susan J. Irving, John Karikari, Thomas J. McCool, Sheila McCoy, Jean McSween, Brittni Milam, Mimi Nguyen, Omari Norman, Debra Prescott, and Michelle St. Pierre.
|
Federal student loans issued under the Direct Loan program play a key role in ensuring access to higher education for millions of students. The costs of the program to the government include administrative costs like loan servicing. They also include subsidy costs, which are the estimated long-term costs to the government of providing loans, such as the government’s cost of borrowing and defaults on loans. Some have questioned whether borrower interest rates can be more precisely set to cover these costs without generating excess federal income. The Bipartisan Student Loan Certainty Act of 2013 required GAO to provide information on issues related to the cost of federal student loans. This report addresses (1) how the costs of administering the Direct Loan program have varied in recent years, (2) how estimated subsidy costs have varied in recent years, and (3) how changes in different variables influence the overall cost of the program and the borrower interest rate needed to cover those costs. GAO reviewed Direct Loan administrative cost data and analyzed subsidy cost data from Education for fiscal years 2007 through 2012, which are presented in nominal dollars throughout the report. In addition, GAO worked with Education to illustrate how changes in variables such as government borrowing costs could affect Direct Loan subsidy costs. GAO also examined whether borrower rates could be set so the government could cover Direct Loan costs without generating excess revenue (known as a breakeven analysis). GAO reviewed relevant federal laws, guidance, and reports; and interviewed Education and other agency officials. GAO does not make recommendations in this report. The Department of Education agreed with our findings. Total Direct Loan administrative costs grew from $314 million to $864 million from fiscal years 2007 to 2012, but federal costs per borrower have generally remained steady or fallen. 
The increase in total administrative costs largely results from an increase of over 300 percent in the number of Direct Loans during that same time period. One key factor contributing to this loan volume increase was a law that ended student loan originations under a federally guaranteed loan program resulting in new originations being made under the Direct Loan program. Loan servicing--which includes activities like counseling borrowers on selecting repayment plans, processing payments, and collecting on loans in delinquent status--is the largest category of administrative costs, comprising 63 percent of total Direct Loan administrative costs in fiscal year 2012. While total administrative costs have increased, costs per borrower and other unit costs have remained steady or declined. For example, the servicing cost per borrower has remained roughly $25 over the six-year period we examined. However, a number of factors, including a new payment structure for loan servicing contracts to reward servicers for keeping more borrowers in repayment status, have created some uncertainty about the servicing cost per borrower in coming years. Separate from administrative costs, estimated subsidy costs vary by loan cohort--a group of loans made in a single fiscal year--and change over time. Based on the Department of Education's (Education) recent estimates, the government would generate subsidy income for the 2007 to 2012 Direct Loan cohorts as a group. However, estimates will change, because current subsidy cost estimates for these cohorts are based predominantly on assumptions about future revenue and costs. Actual subsidy costs will not be known until all cash flows have been recorded, generally after loans have been repaid. This may be as many as 40 years from when the loans were originally disbursed, because many borrowers do not begin repayment until after leaving school, and some face economic hardships that extend their payment periods. 
Subsidy cost estimates fluctuate over time due to the incorporation of updated data on actual loan performance and the government's cost of borrowing, as well as revised assumptions about future revenue and costs, through the annual reestimate process. As a result, there can be wide variations in the estimated subsidy costs for a given cohort over time. For example, the 2008 loan cohort was estimated to generate $9.09 of subsidy income per $100 of loan disbursements in one year, but in the next year that same cohort had an estimated subsidy cost of 24 cents per $100 of loan disbursements, a swing of $9.33. Volatility in subsidy cost estimates for a given cohort is generally expected to decrease over time as more actual loan performance data become available. Because Direct Loan costs fluctuate with changes in certain variables, borrower interest rates cannot be set in advance to balance government revenue with costs consistently over the life of the loans. In a simulation of how loan costs respond to changes in selected variables, the costs were highly sensitive to changes in the government's cost of borrowing. This, coupled with cost estimates regularly updated to reflect loan performance data, means the total costs associated with Direct Loans are in flux until updates are recorded through the end of the loans' life cycle, which takes several decades. Therefore, the borrower interest rates that would generate revenue to exactly cover total loan costs—known as breaking even—would change over time. To determine whether or not a set of conditions that would break even for one cohort would also break even for another cohort under different circumstances, GAO used data forecasted for future years to experiment with certain aspects of the borrower interest rate for two separate cohort years. • GAO selected cohort years 2014 and 2019 because economic conditions may be different several years apart. 
• For these cohorts, the following three aspects of the borrower interest rate were altered: the index (the base market rate to which student loan interest rates are pegged), the mark-up rate (the percentage-point increase over the base rate that students are charged), and the differences in the mark-up rates among loan types, including undergraduate, graduate student, and parent loans.
• GAO looked at how these changes to the borrower rates would affect total government costs, taking into account both administrative and subsidy costs.
• Changing the index and mark-up rates helped achieve a breakeven point based on current cost estimates for the 2014 cohort; however, cost estimates for this cohort will change as updated data become available over the life of the loans.
• When GAO applied the same index and mark-up rates that temporarily resulted in a breakeven point for the 2014 cohort to the 2019 cohort, it resulted in a net cost to the government.
• The difference in outcome for these two cohorts is because Direct Loan costs are sensitive to variables, such as government borrowing costs, that are projected to look very different for 2019 than they did for 2014.
• As illustrated in the simulation, the borrower interest rates that are needed to cover costs at one point in time may not be effective at another point in time and cannot be precisely determined in advance to enable the government to break even consistently.

Available information on Direct Loan costs illustrates the difficulties of accurately predicting what these program costs will be, and how much borrowers should ultimately be charged to achieve a particular outcome. Specifically, fluctuations in the actual and expected costs of the student loan program over time make it challenging to target a particular borrower interest rate that would consistently break even. 
Making frequent changes to the borrower interest rate could help program costs more closely match revenues in the short term, but it could confuse potential borrowers and complicate efforts to make the program transparent to students.
|
Oil and natural gas are found in a variety of geologic formations distributed across the country, such as shale and tight sandstone. Shale plays—sets of discovered or undiscovered oil and natural gas accumulations or prospects that exhibit similar geological characteristics—are located within basins, which are large-scale geological depressions, often hundreds of miles across, that also may contain other oil and gas resources. Figure 1 shows the location of shale plays and basins in the contiguous 48 states. Shale plays can contain oil, natural gas, or both. In addition, a shale gas play may contain “dry” or “wet” natural gas. Dry natural gas is a mixture of hydrocarbon compounds that exists as a gas both underground in the reservoir and during production under standard temperature and pressure conditions. Wet natural gas contains natural gas liquids, or the portion of the hydrocarbon resource that exists as a gas when in natural underground reservoir conditions but that is liquid at surface conditions. The natural gas liquids are typically propane, butane, and ethane and are separated from the produced natural gas at the surface. Operators may then sell the natural gas liquids, which may give wet shale gas plays an economic advantage over dry gas plays. According to a 2014 EIA publication, operators moved away from the development of shale plays that are primarily dry gas in favor of developing plays with higher concentrations of crude oil and natural gas liquids such as the Eagle Ford in Texas, because given natural gas prices at that time, crude oil and natural gas liquids were more valuable products. Another advantage of liquid petroleum and natural gas liquids is that they can be transported more easily through different modes of transportation than dry natural gas, which is transported almost entirely by pipelines to markets and consumers. In recent years, domestic onshore production of oil and gas has been steadily rising. 
For example, from 2007 through 2012, annual production from shale and tight sandstone formations increased more than sixfold for oil and approximately fivefold for gas (see fig. 2). Horizontal drilling and hydraulic fracturing have advanced significantly over the last decade and are largely credited with spurring the boom in oil and gas production in the United States.

Oil: Average domestic crude oil production from shale and tight sandstone formations in 2012 has increased more than sixfold compared with average production in 2007, from 0.34 million barrels per day in 2007 to 2.25 million barrels per day in 2012. To put this into context, according to EIA data, the United States consumed an average of more than 18 million barrels of petroleum products per day in 2012. According to EIA officials, oil from shale and tight sandstone formations accounts for 31 percent of total U.S. production. According to EIA, the production increases in 2012 and 2013 were the largest annual increases since the beginning of U.S. commercial crude oil production in 1859. Much of the increase in crude oil production has been from shale formations, such as the Bakken in North Dakota, the Eagle Ford in Texas, and the Niobrara in Colorado. According to EIA officials, U.S. production of crude oil is expected to continue to increase—by 48 percent from 2012 to 2019—and will remain above the 2012 level through 2040.

Natural Gas: Domestic natural gas production from shale formations in 2012 has increased about fivefold compared with production in 2007, from less than 2 trillion cubic feet in 2007 to more than 10 trillion cubic feet in 2012. To put this into context, annual domestic consumption of natural gas was just over 25 trillion cubic feet in 2012, according to EIA data.
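These growth multiples follow directly from the figures cited above; a quick arithmetic check (values are the EIA numbers quoted in the text):

```python
# Back-of-envelope check of the production growth figures cited above.
oil_2007, oil_2012 = 0.34, 2.25   # million barrels per day (shale/tight oil)
gas_2007, gas_2012 = 2.0, 10.0    # trillion cubic feet per year (shale gas)

oil_fold = oil_2012 / oil_2007    # about 6.6, i.e., "more than sixfold"
gas_fold = gas_2012 / gas_2007    # 5.0, i.e., "about fivefold"

# Shale/tight oil relative to the ~18 million barrels consumed per day in 2012:
share_of_consumption = oil_2012 / 18.0
print(round(oil_fold, 1), round(gas_fold, 1), round(share_of_consumption, 3))
```

Note that the 31 percent figure in the text is a share of total U.S. production, not of consumption; the last ratio above is computed against consumption.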
In September 2012, we found that, assuming current consumption levels without consideration of a specific market price for future gas supplies, the amount of domestic technically recoverable shale gas could provide enough natural gas to supply the nation for the next 14 to 100 years. Much of the increase in natural gas has been from shale formations, such as the Barnett, Fayetteville, Haynesville, and Marcellus formations. Multiple modes of transportation, including pipeline, rail, highways, and waterways, connect oil and gas production infrastructure (such as wells and processing plants) in shale areas to customers, which include refineries, industrial users, and individual consumers. Additionally, when products switch modes of transportation, oil-loading terminals, sometimes referred to as “transload” terminals, transfer the product from one mode to another, such as when crude oil is transferred from a truck or gathering pipeline to a train. Responsibility for maintaining these modes varies: pipelines and rail are generally privately owned, while highways and waterways are generally public. Figure 3 illustrates how various transportation modes work together to bring oil and gas from production areas to users. Approximately 2.5 million miles of pipelines transport roughly two-thirds of domestic energy supplies throughout the United States. These pipelines carry natural gas and hazardous liquids, including crude oil and natural gas liquids, from production areas to end users, such as residences and businesses. Gathering pipelines collect produced oil and gas from their source and transport these products to processing facilities and transmission pipelines. Transmission pipelines then transport these products longer distances to users such as residences and businesses. Distribution pipelines transport natural gas to consumers for use and are not within the scope of this report. Characteristics of gathering and transmission pipelines are described in table 1. The U.S.
rail network consists of about 200,000 miles of track, which runs mostly through rural areas. The railroad industry is dominated by the seven largest railroads, known as Class I railroads, which collectively accounted for more than 90 percent of annual railroad-freight revenues in 2012. Smaller regional and short-line railroads transport freight shorter distances and can help connect customers in areas not served by the larger railroads. The railroads’ national association, the Association of American Railroads (AAR), represents the interests of the industry and works with railroads and other stakeholders to develop industry standards. Crude oil travels by rail in tank cars, commonly DOT-111 tank cars, which are generally owned by shippers or third parties. The DOT-111 is a DOT-specification tank car, meaning it must be built to conform to standards specified in DOT regulation. It is a non-pressurized car that is used to transport a variety of liquid products, including hazardous, flammable materials like crude oil. Terminals, referred to as transload facilities, transfer crude oil from other transportation modes (typically trucks or gathering pipelines) to tank cars for transport by train. In addition to pipeline and rail, other modes, including barge and truck, may transport oil and gas products. For example, barges may transport oil over longer distances on major waterways, such as the Mississippi River, while trucks typically transport oil over short distances to transload facilities. While this report provides a closer look at transportation infrastructure and safety impacts of shale oil and natural gas development on pipeline and rail nationwide, we also discuss highway infrastructure and safety impacts in the four selected states we examined (see app. II for a summary of highway-related impacts).
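The 14-to-100-year supply estimate cited earlier reduces to a simple ratio of recoverable resource to annual consumption. A minimal sketch: the annual consumption figure (roughly 25 trillion cubic feet) is from the text, but the two resource estimates below are illustrative placeholders chosen to reproduce the reported range, not figures from the report.

```python
# Years of supply = technically recoverable resource / annual consumption,
# assuming flat consumption and no price response, as the report notes.
ANNUAL_CONSUMPTION_TCF = 25.0  # approximate 2012 U.S. consumption, from the text

def years_of_supply(recoverable_tcf, consumption_tcf=ANNUAL_CONSUMPTION_TCF):
    return recoverable_tcf / consumption_tcf

# Hypothetical low and high resource estimates (trillion cubic feet):
print(years_of_supply(350.0))    # 14.0 years
print(years_of_supply(2500.0))   # 100.0 years
```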
DOT is responsible for ensuring the safe transportation of people and goods through regulations, oversight, inspections, and other efforts, sometimes in partnership with states. Within DOT, PHMSA’s Office of Pipeline Safety oversees the safety of pipelines through regulation and an inspection program, which includes over 100 PHMSA inspectors, and also collects information about the location of pipelines. PHMSA also has arrangements with states, which collectively have over 300 inspectors, to assist with overseeing interstate pipelines, intrastate pipelines, or both. PHMSA’s current pipeline regulations cover all hazardous liquid (including crude oil) and natural gas transmission pipelines. In addition to minimum safety standards that all transmission pipeline operators must meet, PHMSA employs a risk-based approach to transmission pipeline regulation and requires operators to systematically identify and mitigate risks in “high-consequence areas,” which include populated and environmentally sensitive areas. PHMSA also applies this risk-based approach to gathering pipelines and regulates gathering pipelines in non-rural areas, resulting in regulation of approximately 10 percent of the nation’s gathering pipelines. Generally, PHMSA retains full responsibility for inspecting interstate pipelines for compliance with its regulations and taking enforcement actions when needed. However, states may be authorized to conduct inspections of interstate pipelines, as well as inspections and associated enforcement for intrastate pipelines. States can also promulgate regulations for intrastate pipelines, including gathering pipelines, even if these pipelines are not covered by PHMSA’s federal safety requirements. PHMSA, through its Office of Hazardous Materials Safety, also regulates shippers and railroads transporting hazardous materials like crude oil by rail and other modes.
A memorandum of agreement details how PHMSA works with the other DOT modal agencies to address hazardous-material transportation safety. DOT’s other modal administrations have responsibility for safety of their respective modes, such as the Federal Railroad Administration (FRA), which oversees rail safety. FRA enforces its own and PHMSA’s safety regulations through inspections by FRA officials and state partners in some states. PHMSA also has hazardous materials inspectors who enforce requirements for hazardous material packaging for transportation. Additionally, PHMSA’s regulations include emergency response planning requirements for pipelines and the transportation of crude oil by rail. Specifically, regulations require operators of transmission pipelines and urban gathering pipelines to prepare emergency response plans and coordinate them with emergency responders. Railroads that transport crude oil in tanks larger than 42,000 gallons are required to develop comprehensive oil-spill response plans with additional requirements for contingency planning, ensuring response resources by contract or other means, and training. Railroads are required to submit comprehensive plans to FRA for review. Otherwise, railroads are required to develop basic response plans, for which there are fewer requirements. Because PHMSA applies a risk-based approach to its transportation oversight, we believe it is appropriate to apply principles of risk-based management to assessing the agency’s efforts in this area. Risk-based management has several key characteristics that help to ensure safety, including (1) using information to identify and assess risks; (2) prioritizing risks so that resources may be allocated to address higher risks first; and (3) promoting the use of regulations, policies, and procedures to provide consistency in decision making.
Increased oil and gas production presents challenges for transportation infrastructure because some of the growth in production has been in areas with limited transportation linkages to processing facilities. According to studies and publications we reviewed, infrastructure limitations and related effects could pose environmental and safety risks and have economic implications, including lost revenue and hindered oil and gas production. Though capital investments in U.S. infrastructure for oil and gas transportation, processing, and storage have increased significantly in recent years—by 60 percent from 2008 to 2012, according to a December 2013 industry report—expansions in infrastructure have not kept pace with increased domestic oil and gas production. In the United States, most oil and nearly all natural gas are transported by pipeline. According to EIA data, U.S. refinery receipts of domestic crude oil by pipeline increased almost 25 percent from 2008 to 2012, from 1.6 billion barrels to nearly 2 billion barrels. However, according to a number of studies and publications we reviewed, including a 2013 report from the Fraser Institute, oil and natural gas production in the United States is outpacing the capacity to transport the resources through existing pipeline infrastructure. In February 2013, EIA reported that pipeline capacity to deliver crude oil to a key hub increased by about 815,000 barrels per day from 2010 through 2013; however, the increase has been inadequate to transport crude oil from production sites to refineries. In March 2014, we found that most of the system of crude oil pipelines in the United States was constructed in the 1950s, 1960s, and 1970s to accommodate the needs of the refining sector and demand centers at that time. 
We also found that, according to Department of Energy officials, this infrastructure was designed primarily to move crude oil from the South to the North, but emerging crude oil production centers in Western Canada, Texas, and North Dakota have strained the existing pipeline infrastructure. For example, according to a 2013 industry publication, oil production exceeded pipeline capacity in North Dakota by about 300,000 barrels of oil per day. The limited pipeline capacity to transport crude oil has resulted in the increased use of other transportation options, in particular rail, truck, and barge (see fig. 4).

Rail: According to a 2014 EIA report, U.S. refinery receipts of domestic crude oil by rail increased more than sevenfold from 2008 to 2012, from 4 million barrels to 30 million barrels. The increased use of rail for transporting crude oil is due to the increases in crude oil production in North Dakota, Texas, and other states, which have exceeded the capacity of existing pipelines to move oil from production areas to refineries, according to a number of studies and publications we reviewed.

Truck: According to a 2014 EIA report, U.S. refinery receipts of domestic crude oil by truck increased almost 90 percent from 2008 to 2012, from 69 million barrels to 131 million barrels. In addition, according to a North Dakota Pipeline Authority publication, some natural gas liquids are transported to market by truck.

Barge: According to a 2014 EIA report, U.S. refinery receipts of domestic crude oil by barge increased more than 200 percent from 2008 to 2012, from 48 million barrels to 151 million barrels. According to the EIA report, the increase in barge shipments may be partially explained by crude oil being transferred to barges from rail cars for the final leg of some journeys to refineries, particularly on the East Coast and along the Mississippi River.
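The mode-by-mode growth in refinery receipts can be summarized in one place; a short sketch using the 2008 and 2012 figures quoted above (million barrels per year):

```python
# Refinery receipts of domestic crude oil by mode, 2008 vs. 2012
# (million barrels per year, from the 2014 EIA figures quoted in the text).
receipts = {
    "rail":  (4, 30),     # "more than sevenfold"
    "truck": (69, 131),   # "almost 90 percent"
    "barge": (48, 151),   # "more than 200 percent"
}

for mode, (y2008, y2012) in receipts.items():
    fold = y2012 / y2008
    pct_increase = (y2012 - y2008) / y2008 * 100
    print(f"{mode}: {fold:.1f}x ({pct_increase:.0f}% increase)")
```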
According to a number of studies and publications that we reviewed, in addition to pipeline capacity limitations, rail, barges, and processing and storage facilities also face limitations. For example, a 2013 industry publication identified a U.S. backlog of nearly 60,000 tank cars needed to transport oil by rail, representing over 20 percent of the existing U.S. tank car fleet. In addition, a 2014 Congressional Research Service report states that significant development of loading and unloading facilities could be required if rail is to continue substituting for pipeline capacity. Further, a number of studies and publications identified that oil and gas production in some areas can exceed the capacity to process and store the resources. For example, state officials in North Dakota reported in 2013 that maintaining sufficient natural gas processing capacity is a challenge of increased production. A number of studies and publications we reviewed identified environmental and safety risks or economic implications from transportation infrastructure limitations. For example:

Risks to air quality: These risks can be the result of intentional flaring of associated natural gas—burning the gas produced along with oil—due to limited pipeline infrastructure, and of engine exhaust from increased truck and rail traffic. Oil and natural gas are often found together in the same reservoir. The natural gas produced from oil wells is generally classified as “associated-dissolved,” meaning that it is associated with or dissolved in crude oil. In areas where the primary purpose of drilling is to produce oil, operators may flare associated natural gas because no local market exists for the gas and transporting it to a market may not be economically feasible.
In September 2012, we found that flaring poses a risk to air quality because it emits carbon dioxide—a greenhouse gas linked to climate change—and other air pollutants that can increase ground-level ozone levels and contribute to regional haze. In January 2014, the North Dakota Industrial Commission reported that nearly 30 percent of all natural gas produced in the state is flared. According to a 2013 report from Ceres, flaring in North Dakota in 2012 resulted in greenhouse gas emissions equivalent to adding 1 million cars to the road. Increased truck and rail traffic associated with the movement of oil from well sites also creates a risk to air quality as engine exhaust, containing air pollutants such as nitrogen oxides and particulate matter that affect public health and the environment, is released into the atmosphere. Specifically, the Department of State reported in 2014 that increasing the number of unit trains transporting crude oil could increase greenhouse gases emitted directly from the combustion of diesel fuel in trains, and in 2011 we found that trucking freight produces more air pollution than other transportation modes. Air quality may also be degraded as fleets of trucks traveling on newly graded or unpaved roads increase the amount of dust released into the air—which can contribute to the formation of regional haze.

Inherent safety risks: Transporting oil and gas by any means—through pipelines, rail, truck, or barge—poses inherent safety risks. However, in January 2013, we found that pipelines are relatively safe when compared with other modes, such as rail and truck, for transporting hazardous goods because pipelines are mostly underground. For example, we found that large trucks and rail cars transporting hazardous materials, including crude oil and natural gas liquids, resulted in far more fatalities and incidents than pipelines.
Specifically, we found that from 2007 to 2011, fatalities averaged about 14 per year for all pipeline incidents reported to PHMSA, including an average of about 2 fatalities per year resulting from incidents on hazardous liquid and natural gas transmission pipelines. In comparison, in 2010, 3,675 fatalities resulted from incidents involving large trucks and 730 additional fatalities resulted from railroad incidents. Therefore, increased transport of oil and gas by rail, truck, or barge could increase safety risks. According to state officials and several publications we reviewed, increased truck traffic resulting from increased oil and gas production can present hazardous driving conditions—particularly on roads not designed to handle heavy truck traffic. Our analysis of data from PHMSA found that in recent years, the number of reported incidents involving the transport of crude oil by truck in both Texas and North Dakota has increased. Specifically, such incidents increased in Texas from 17 incidents in 2008 to 70 incidents in 2013, and in North Dakota they increased from 1 incident in 2008 to 16 incidents in 2013. Barge accidents also pose safety risks and can have associated environmental and economic effects. For example, according to the U.S. Coast Guard Polluting Incident Compendium, in 2011, a barge struck a bridge on the Lower Mississippi River causing damage to the barge and a discharge of just over 11,000 gallons of oil. In February 2014, a barge crash resulted in the spilling of about 31,500 gallons of crude oil into the Mississippi River, temporarily shutting down transportation along the river. According to a 2012 Congressional Research Service report, an oil spill from a barge can cause significant harm to marine ecosystems and individual aquatic organisms and negatively affect business activity near the spill, particularly businesses and individuals that count on the resources and reputation of the local environment. 
For instance, the local fishing and tourist industry may be affected, and in some cases, a well-publicized oil spill can weaken local or regional industries near the spill site, regardless of the actual threat to human health created by the spill.

Economic implications: According to a number of studies and publications we reviewed, infrastructure limitations and related effects could have economic implications, including lost revenue, higher energy prices, and hindered development.

Lost revenue: In addition to the risks to air quality from flaring, we found in October 2010 that flaring natural gas has economic implications, and in April 2014 the Environmental Protection Agency reported that flaring results in the destruction of a valuable resource. For example, in 2010 we found that on federal oil and gas leases, natural gas that is flared, instead of captured for sale, represents a loss of about $23 million annually in royalty revenue for the federal government. According to a 2013 report from the North Dakota Pipeline Authority, in August 2013, 2.7 percent of the total economic value and 7.2 percent of the total energy content being produced in North Dakota were lost due to flaring. In another example, a Ceres report found that in May 2013 roughly $3.6 million of revenue was lost per day, at market rates, as a result of flaring in North Dakota.

Higher energy prices: Growing shale development and the resulting increased availability and lower prices of natural gas have contributed to an increasing reliance on natural gas as a source of producing electricity in some parts of the country. However, pipeline infrastructure limitations have at times contributed to price spikes. For example, according to a paper from ICF International, pipeline limitations were a contributing factor to higher natural gas prices in the northeast in January 2014.
A cold weather pattern involving record low temperatures led to increased demand for natural gas for space heating and for generating electricity across parts of the country. With the surge in demand, several major pipeline systems became constrained and could not deliver sufficient natural gas to meet demand. According to a 2014 EIA publication, prices at the Algonquin trading point in Massachusetts, which normally are around $3 to $6 per million British thermal units (MMBtu) during unconstrained periods, reached up to $38/MMBtu in early January. These price increases for natural gas led electricity systems to use more oil-fueled generating resources during this period.

Hindered oil and gas production: A 2013 study sponsored in part by the Utah Department of Transportation found that oil and gas production from the Uinta Basin is likely to be constrained by limitations in the capacity of transportation infrastructure. Specifically, the study found that existing pipelines in the state are already at or near capacity, and by 2020, demand on the infrastructure network to transport oil and gas will exceed capacity—resulting in a loss of 12 percent of potential production over the next 30 years. Further, according to a 2013 industry report, infrastructure constraints such as pipeline limitations and bottlenecks from the Permian Basin in Texas to a key hub have contributed to discounted prices for some domestic crude oils. For example, we found in March 2014 that West Texas Intermediate crude oil—a domestic crude oil delivered to a key hub that is used as a pricing benchmark for crude oil—was priced just under $18 per barrel less in 2012 than Brent, an international benchmark crude oil from the European North Sea that has historically been about the same price as West Texas Intermediate. These discounted prices mean resource developers have received lower prices for their crude oil production.
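Two of the price figures above are easy to put in proportion. A quick sketch (the prices are from the text; the production rate used to illustrate the WTI discount is a hypothetical placeholder):

```python
# Price-spike multiple at the Algonquin trading point, January 2014.
normal_low, normal_high = 3.0, 6.0  # $/MMBtu during unconstrained periods
spike = 38.0                        # $/MMBtu reached in early January
print(spike / normal_high, spike / normal_low)  # roughly 6x to nearly 13x normal

# Daily revenue effect of the 2012 WTI-Brent discount, for illustration only.
discount_per_bbl = 18.0    # WTI priced about $18 per barrel below Brent
barrels_per_day = 100_000  # hypothetical production rate, not from the report
print(discount_per_bbl * barrels_per_day)  # 1800000.0, i.e., $1.8 million per day
```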
According to a 2013 Energy Policy Research Foundation report, discounted prices may eventually lead to production growth constraints. Gathering pipeline construction has increased significantly as a result of increased shale oil and gas development; however, the increase in pipeline mileage is unknown because neither PHMSA nor every state systematically collects data on gathering pipelines. The Interstate Natural Gas Association of America (INGAA), a trade organization representing interstate natural gas transmission pipeline companies, estimated in March 2014 that shale oil and gas development will result in approximately 14,000 miles of new gas gathering pipelines and 7,800 miles of new oil gathering pipelines added each year from 2011 through 2035. State officials in Pennsylvania, North Dakota, Texas, and West Virginia said that companies have invested significantly in gathering pipeline infrastructure. For example, according to data published by Texas state officials, 15,684 new miles of federally unregulated gathering pipelines were added in the state between 2010 and 2013. In response to the growth in gathering pipelines, Texas officials told us that their state enacted legislation to increase state regulatory authority over gathering pipelines. Similarly, North Dakota passed rule changes in 2013 to increase state regulatory authority over gathering pipelines. Texas officials told us that they plan to study and determine what parts of their rules should apply to gathering pipelines during 2014 and then issue guidance in 2015. In April 2014, North Dakota implemented regulations requiring companies to report the location and characteristics of gathering pipelines carrying any product, including natural gas, crude oil, natural gas liquids, water, and others.
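The INGAA projection above implies a large cumulative build-out; a quick extrapolation (the annual mileage rates are from the INGAA estimate quoted in the text; holding them constant over the full 2011 through 2035 window is a simplifying assumption):

```python
# Cumulative new gathering-pipeline mileage implied by the INGAA estimate,
# assuming the annual rates hold constant for 2011 through 2035 (25 years).
years = 2035 - 2011 + 1
gas_miles_per_year = 14_000
oil_miles_per_year = 7_800

total_gas = gas_miles_per_year * years
total_oil = oil_miles_per_year * years
print(total_gas, total_oil, total_gas + total_oil)  # 350000 195000 545000
```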
The National Association of Pipeline Safety Representatives, an association representing state pipeline safety officials, produced a compendium of state pipeline regulations showing that most states with delegated authority from PHMSA to conduct intrastate inspections do not have expanded regulations that cover increased oversight of gathering pipelines. As a result, companies building gathering pipelines in rural areas are generally not subject to inspection and do not have to report the location and characteristics of much of the gathering pipeline mileage being installed. Although the majority of the existing gathering pipeline network consists of traditional small pipelines, state pipeline regulators, PHMSA officials, and pipeline operators we spoke with said that some newly built gathering pipelines have larger diameters and higher operating pressures that more closely resemble transmission pipelines than traditional gathering pipelines. For example, while gathering pipelines have traditionally been 2 to 12 inches in diameter, one company operating in the Texas Eagle Ford shale region showed us plans to build 30- and 36-inch natural gas gathering pipelines, which is near the high end of diameters for regulated transmission pipelines. Historically, federally unregulated gathering pipelines were low pressure, smaller-diameter pipelines and were generally in rural areas where there was less safety risk. Now, according to PHMSA, industry, and state pipeline safety officials we spoke to, gathering pipelines of larger diameter and higher pressure are being constructed, including in areas closer to populations. Such construction could increase safety risk, since an incident occurring on one of these larger, high-pressure unregulated gathering pipelines could affect a greater area and be as serious as an incident involving a regulated transmission pipeline of similar diameter and pressure.
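One way to see why the shift to larger diameters matters: a pipe's cross-sectional area, and thus roughly the volume of product it can hold or release in an incident, grows with the square of its diameter. The 36-inch and 12-inch figures below come from the text; the area-squared framing is a simplification that ignores pressure and flow dynamics.

```python
# Cross-sectional area of a pipe scales with the square of its diameter,
# so a larger line can hold (and potentially release) far more product.
def area_ratio(d_new_inches, d_old_inches):
    return (d_new_inches / d_old_inches) ** 2

print(area_ratio(36, 12))  # 9.0: a 36-inch line vs. a traditional 12-inch line
print(area_ratio(30, 12))  # 6.25
```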
Pipeline operators and industry organizations told us that new gathering pipelines are likely safer because they are less susceptible to issues like corrosion—a common reason for failure in older pipelines. Pipeline operators also told us that some large-diameter, high-pressure gathering pipelines are built to the same specifications as regulated transmission pipelines and that these pipelines are in very rural areas with little risk to people. They also said that safety is very important to the industry and that companies understand not only the potential harm to the network, people, and environment, but also the public perception following a high-profile incident and therefore manage their assets to avoid incidents. Nonetheless, state pipeline regulators, PHMSA officials, and safety organizations expressed concern with the potential safety threat of unregulated gathering pipelines of this size. For example, a citizens’ shale development awareness group in Pennsylvania has documented construction of several unregulated gathering pipelines in the state that are 24 inches in diameter. The group argues that while these gathering pipelines are in rural areas, they are being built unnecessarily close to homes. PHMSA officials told us that the large diameter and pressure of the pipelines increase the concern for the safety of the environment and people nearby. In addition to potential increased safety risk as a result of the changing characteristics of the pipelines, some stakeholders shared concerns about the readiness of emergency responders to address potential incidents that could occur with unregulated gathering pipelines. PHMSA’s emergency response planning requirements that apply to other pipelines do not apply to rural unregulated gathering pipelines. Consequently, response planning in rural areas with unregulated gathering lines may be inadequate to address a major incident.
Transmission pipeline operators with pipelines similar in size to the new gathering pipelines are required to develop comprehensive emergency response plans and coordinate with local emergency responders. Emergency response officials whom we spoke with stated that, lacking information about the location of some gathering pipelines, responders—particularly in rural areas—may not be adequately prepared to respond to an incident. A representative from the National Association of State Fire Marshals told us that training and communication with pipeline companies are key for emergency responders’ knowledge and awareness. Additionally, emergency response officials told us that rural areas in particular lack the level of hazardous-materials response resources found in metropolitan areas, where more is known about the extent of local pipeline networks. The National Transportation Safety Board (NTSB) has also stated that emergency response planning is critical for pipeline safety and has recommended that pipeline operators help ensure adequate emergency response by providing local jurisdictions and residents with key information on the pipelines in their areas. As previously discussed, PHMSA applies a risk-based approach to regulating pipeline safety. A key principle of risk-based management is promoting the use of regulations, policies, and procedures to provide consistency in decision-making. PHMSA has acknowledged the growing potential risk of federally unregulated gathering pipelines as more are constructed and at larger diameters and higher pressures, but DOT has not proposed regulatory changes to address this risk. In August 2011, PHMSA published an Advance Notice of Proposed Rulemaking, stating that the existing regulatory framework for natural gas gathering pipelines may no longer be appropriate due to recent developments in gas production.
In the notice, PHMSA asked for comment on whether it should consider establishing new, risk-based safety requirements for large-diameter, high-pressure gas gathering pipelines in rural locations, among other potential changes to gathering pipeline regulations. The proposal also states that enforcement of current requirements has been hampered by the conflicting and ambiguous language of the current regulation that can produce multiple classifications for the same pipeline system, which means that parts of a single pipeline system can be classified as rural gathering pipelines and therefore be unregulated, while other parts of the same pipeline with the same characteristics are regulated. PHMSA officials told us they have drafted proposed regulations for both gas and hazardous liquid gathering pipelines, but as of June 2014, the agency had not issued the Notice of Proposed Rulemaking for comment. According to DOT officials, the proposed gas rule is being reviewed internally and the proposed hazardous liquid rule is with the Office of Management and Budget for review. PHMSA officials also told us they have studied existing federal and state gathering pipeline regulation to help identify where gathering pipelines are currently regulated and where gaps remain; however, this study has been in the final stages of review during the course of our work and has not yet been published. Given the lack of PHMSA regulation of rural gathering pipelines, the extent, location, and construction practices of rural gathering pipelines are largely unknown to federal, state, and local officials, and oversight to verify the construction and monitor operators’ safety practices is lacking. In 2012, we concluded that unregulated gathering pipelines also pose risks due to construction quality, maintenance practices, and limited or unknown information on pipeline integrity.
We recommended at that time that PHMSA collect data on unregulated gathering pipelines to facilitate quantitatively assessing the safety risks posed by these pipelines, which we said could assist in determining the sufficiency of safety regulations for gathering pipelines. According to DOT officials, as of July 2014, PHMSA has compiled data on existing gathering pipeline requirements, and the resulting report is under internal review. Furthermore, officials said that data collection is part of the proposed rules also under review. In 2010, the National Association of Pipeline Safety Representatives recommended that PHMSA modify federal pipeline regulations to establish requirements for gathering pipelines in rural areas that are presently not regulated. The association stated that with the advent of new production technologies, there has been rapid development of gas production from shale formations such as the Barnett, Marcellus, and Bakken, resulting in a significant amount of new gathering pipeline construction. Further, in these newer gas gathering systems, it is not uncommon to find rural gathering pipelines up to 30 inches in diameter and operating at 1,480 psi, which is at the higher end of traditional transmission operating pressure. Enhanced pipeline safety for all types of pipeline was also on NTSB’s “Most Wanted List” in 2013 and 2014. NTSB has prioritized overall pipeline safety because of the increased demand in oil and gas and the aging pipeline infrastructure. Resources are an important consideration in evaluating how to address the increased risk of gathering pipelines. According to PHMSA officials, inspection resources are limited and would be further stretched if rural gathering pipelines were regulated. If PHMSA were to receive increased staff funding in the near future, there could be a lag in ramping up the inspection workforce because inspectors would have to complete PHMSA’s 3-year pipeline-inspection training to become fully certified.
However, if PHMSA were to set minimum federal regulations for gathering pipelines, this would enable the agency to include currently federally unregulated rural gathering lines in decisions for prioritizing resources for addressing safety risks. This is in line with the principles of risk-based management, while also enabling data-driven, evidence-based decisions about the risks of rural gathering pipelines, which our previous work has shown is especially important in a time of limited resources. State regulators in all four states we spoke with acknowledged their resources could also be strained; however, the officials supported regulating rural gathering pipelines. State officials said that without rural gathering pipeline regulation, such as provisions for inspections or industry reporting, they have limited knowledge of the construction and maintenance practices of rural gathering pipeline operators, do not always know where new rural gathering pipelines are being constructed, and may not even have communication with the operators. Gas flows from shale plays into transmission pipelines have increased, but construction of new transmission pipelines has not increased dramatically as a result of increased shale development. According to PHMSA data, approximately 4,500 miles of new oil and gas transmission pipelines were built between January 2010 and December 2012. This includes about 2,000 miles each of crude oil and natural gas pipelines; the remainder is pipeline for natural gas liquids. Oil and gas pipeline industry representatives and transmission pipeline companies we spoke with stated that transmission pipeline companies have been able to accommodate increased demand in various ways. Transmission pipeline companies have repurposed existing pipelines, made operational changes—including increased compression and changes in directional flow—and added capacity to the current network through smaller-scale construction projects.
Companies have repurposed pipelines by changing them from one product to another, such as converting a natural gas pipeline to a natural gas liquid pipeline or a crude oil pipeline. Companies have also instituted operational changes such as adding compression to a line, which allows them to move more gas through the same pipeline, and changing the directional flow of a pipeline. One pipeline operator we spoke with is reversing the flow of its gas transmission pipeline. Prior to shale development, this pipeline moved natural gas from the Gulf Coast to the Northeast. By 2017, the volume of gas that once flowed north will flow south. The pipeline operator stated that changing the direction of the flow, while not as easy as flipping a switch, requires significantly less time and money than building a new pipeline. Smaller-scale construction projects, such as short pipeline extensions, help meet the demands of shale areas like the Marcellus because the natural gas being produced is in close proximity to its destination market. Therefore, it is primarily a matter of connecting new gas production into the existing transmission pipeline network. However, accommodating increased demand through new construction is a challenging proposition. Building a transmission pipeline requires a long-term commitment from producers, often a contract spanning 30 to 50 years. Major transmission pipeline projects may also face long timeframes. Pipeline companies we spoke with stated that once contracts are in place, it usually takes 2 to 4 years to complete a pipeline. This is in part a result of the permitting process, which can involve multiple federal and state agencies, as well as the need to obtain rights to build on the property of individual landowners. Timelines can be longer if the pipeline construction project is contentious. The construction timeline is also dependent on terrain and weather.
Pipeline industry representatives in West Virginia told us that constructing pipelines in mountainous terrain is much more difficult than in flat land. State officials in North Dakota said that long winters and short construction seasons cause construction projects to span several seasons before completion. Other reasons provided by state officials and industry officials for slower growth in pipeline infrastructure in North Dakota include the challenge of securing rights-of-way, as well as uncertainty in market demand. It is possible that continued shale exploration and development in other parts of the country could displace demand for Bakken shale products. Developers have proposed some future pipeline projects, and some have been approved in areas like North Dakota, but transmission pipeline mileage has not seen the same kind of rapid increase as gathering pipeline mileage. For example, Enbridge Energy Partners has proposed a transmission pipeline called Sandpiper that spans 612 miles from North Dakota through Minnesota to Wisconsin. The pipeline is proposed to be 24 inches and 30 inches in diameter in different places and will carry between 225,000 and 375,000 barrels of oil per day. WBI Energy announced planning for a 375-mile natural gas transmission line called the Dakota Pipeline in January 2014. The proposed route would start in North Dakota and continue into Minnesota. In areas of shale development without access to an established pipeline network, such as the Bakken region, lengthy timelines and high costs associated with building transmission pipeline have led producers to seek alternative methods for transporting some of the production—primarily rail. The use of rail to transport crude oil from development areas to refineries has increased dramatically. STB data show that rail moved about 236,000 carloads of crude oil in 2012, which is 24 times more than the approximately 9,700 carloads moved in 2008 (see fig. 5).
Carloads further increased in 2013, with AAR reporting that Class I freight railroads originated 407,761 carloads of crude oil that year. According to railroads, the majority of this increased movement of crude oil by rail is done using unit trains, which are trains that carry only one commodity to a single destination. Crude oil unit trains may consist of 80 to 120 tank cars, each carrying about 30,000 gallons of product, for a total of about 2.4 million to 3.6 million gallons of crude oil per train (see figure 6). This has resulted in an increase in demand for tank cars. According to AAR, nearly 50,000 tank cars were used to transport crude oil by rail as of April 2014. There are different types of crude oil, which affects where crude is refined, as refineries are configured differently to handle the various types. This, in turn, affects where crude oil is transported for refining. Increasingly, crude oil produced in the United States is “light and sweet” (lower in density and sulfur content); in contrast, a portion of new Canadian production has been “heavy and sour” crude oil (higher in density and sulfur content). We have previously reported that not all U.S. refineries can take advantage of domestic crude oils to the same extent because of configuration constraints at some refineries. Therefore, oil may travel long distances to a refinery with the matching refining configuration even if there is refinery capacity nearer to the crude oil source. According to an oil industry association in North Dakota, much of the domestic refining capacity for Bakken crude oil, which is a lighter crude, is located along the Gulf, East, and West Coasts. According to STB data, about 69 percent of crude oil transported by rail in 2012 originated in North Dakota, followed by Texas and all other states (see fig. 7). STB data show that crude oil originating in North Dakota in 2012 traveled to destinations in 19 U.S. states or Canadian provinces across North America.
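The carload growth and unit-train capacity figures above reduce to a few lines of arithmetic. The short sketch below simply recomputes them from the STB and unit-train numbers cited in this report:

```python
# Figures cited above: STB carload counts and typical unit-train makeup.
carloads_2008 = 9_700
carloads_2012 = 236_000
growth_factor = carloads_2012 / carloads_2008
print(f"growth: {growth_factor:.1f}x")  # roughly the "24 times more" cited

gallons_per_car = 30_000  # approximate load of one crude oil tank car
for cars in (80, 120):    # typical unit-train length
    print(f"{cars}-car unit train: {cars * gallons_per_car:,} gallons")
```

Running this reproduces the roughly 24-fold growth in carloads and the 2.4 million to 3.6 million gallons per unit train stated above.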
While most Bakken crude oil was shipped to destinations along the Gulf Coast, there was a large increase in oil shipped to East and West Coast destinations in 2012, signaling a shift in demand from the Gulf region to the other coasts. While pipelines generally deliver commodities to a fixed customer, rail offers the flexibility to serve different customers, allowing shippers to shift product quickly in response to market needs and price opportunities. Despite the great increase in crude oil transported by rail, the commodity remains a small percentage of railroads’ business, comprising about 1.4 percent of Class I railroads’ freight originations in 2013, according to AAR. As previously discussed, there are thousands of miles of track in the United States, providing various shipping opportunities for crude oil, as well as other commodities. Officials from Class I railroads said they have not extensively added new infrastructure to specifically accommodate the increased shipping of crude oil by rail, although officials from some railroads said they have added track infrastructure in specific areas of increased shale oil development to increase capacity. As the movement of crude oil by rail has increased, incidents, such as spills and fires involving crude oil trains, have also increased. PHMSA’s hazardous materials incident data show that rail crude oil incidents in the United States increased from 8 incidents in 2008 to 119 incidents in 2013. These data show that the majority of the 2013 incidents were small; however, two incidents in 2013, in Aliceville, Alabama, and Casselton, North Dakota, resulted in large spills and greater damage. Significant incidents have continued to occur in 2014, including an April derailment and fire in Lynchburg, Virginia.
During a presentation at an April 2014 forum on rail safety, NTSB noted that significant accidents involving crude oil have increased in recent years, with one incident occurring between 2008 and 2012 compared to eight incidents since 2012. DOT, primarily through PHMSA and FRA, sometimes jointly, has taken steps to engage the rail and oil and gas shipping industries and emergency responders to address the safety of transporting crude oil by rail, particularly in response to concerns stemming from the July 2013 Lac-Mégantic, Quebec accident: In August 2013, February 2014, and May 2014, DOT issued emergency orders to compel shippers and railroads to address safety risks by taking steps to secure unattended trains, ensure proper testing and packaging of crude oil, and notify emergency responders about crude oil shipments. DOT also issued safety advisories during this period recommending additional actions. In August 2013, PHMSA, with FRA assistance, initiated an ongoing special inspection program to examine whether crude oil rail shipments are appropriately tested and packaged. The effort consists of spot inspections, data collection, and testing crude oil samples taken from tank cars. Initial information from the program has identified deficiencies; DOT issued fines against three companies in February 2014 for not following proper crude oil packaging procedures. According to PHMSA officials, this effort inspects about 2 percent of Bakken crude oil trains. In September 2013, PHMSA issued an Advance Notice of Proposed Rulemaking seeking comments from industry and other stakeholders on improvements to standards for crude oil rail tank cars. This action was in response to the railroad industry’s 2011 petition for improved standards and recommendations from NTSB. 
In January 2014, PHMSA issued a safety alert notifying the general public, emergency responders, shippers, and carriers that Bakken crude oil may be more flammable than traditional heavy crude oil based on tests associated with PHMSA and FRA’s special inspection program. PHMSA said it planned to issue final results of the tests at a later date. In February 2014, DOT entered into a voluntary agreement with AAR to improve the safety of moving crude oil by rail, including increased track inspections, improved emergency braking capabilities, use of a risk-based routing tool to identify the safest routes, travel at lower speeds, and emergency response training and planning. Also in February 2014, PHMSA officials met with emergency responders and industry groups to discuss training and awareness related to the transport of Bakken crude oil. In July 2014, DOT issued an update on PHMSA and FRA’s joint special inspection program that includes results to date of their crude oil testing efforts and related discussion of the appropriate packaging of oil tested. Additionally, in July 2014, PHMSA, in coordination with FRA, proposed new rules that align with key areas of concern cited by stakeholders: crude oil classification, testing, and packaging; crude oil tank car design; and emergency response. Specifically, PHMSA issued a Notice of Proposed Rulemaking seeking comment on proposals for new regulations to lessen the frequency and consequences of train accidents involving large volumes of flammable liquids, such as crude oil. The proposal includes new operational requirements for certain trains transporting a large volume of flammable liquids; revisions to requirements for crude oil classification, testing, and packaging; and improvements in tank car standards.
Additionally, PHMSA, also in consultation with FRA, issued an Advance Notice of Proposed Rulemaking seeking comments in response to questions about potential revisions to current regulations for emergency response planning for crude oil transported by rail. PHMSA’s current regulations for transporting hazardous materials require that shippers classify and characterize the materials they ship to identify the materials’ characteristic properties and select an appropriate shipping package based on those properties. “Classification” refers to identifying a material’s hazard class, which could be one or more from the list of nine in PHMSA’s hazardous materials regulations, such as a flammable liquid or flammable gas. “Characterization” refers to ascertaining other characteristics of the product to determine its proper packing group, a designation based on risk that identifies acceptable packages. Specifically, PHMSA’s regulations classify crude oil as a flammable liquid and offer acceptable tank cars under three packing groups based on characteristics such as the oil’s flash point—lowest temperature at which a liquid can vaporize to form an ignitable mixture in air—and boiling point. PHMSA’s regulations provide options for the tests shippers may use to determine these characteristics. Crude oil with higher boiling and flash points is considered less risky, since it is less likely to form flammable vapor unless exposed to extreme temperatures, and more approved packaging choices exist for such oil than for oil with lower boiling and flash points that could form ignitable vapor at lower temperatures. Substances that are gases, rather than liquid, at ambient temperatures are even more flammable, and thus more stringent packaging requirements apply to flammable gases than to flammable liquids. In particular, flammable gases must be packaged in pressure tank cars, which provide additional safety in the event of an accident. 
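The packing-group selection described above can be illustrated with a short decision rule. This is a simplified sketch: the threshold values below follow the commonly cited UN criteria for flammable liquids and are included for illustration only; the authoritative criteria and approved test methods are in PHMSA's hazardous materials regulations:

```python
def packing_group(flash_point_c: float, initial_boiling_point_c: float) -> str:
    """Illustrative packing-group selection for a flammable liquid.

    Lower group number = higher risk = fewer approved packaging choices.
    Thresholds are the commonly cited UN criteria, shown for illustration;
    PHMSA's regulations define the authoritative rules and test options.
    """
    if initial_boiling_point_c <= 35:
        return "I"    # low boiling point: forms ignitable vapor readily
    if flash_point_c < 23:
        return "II"   # ignitable vapor can form near room temperature
    return "III"      # higher flash point: less likely to form vapor

# A light crude with a very low flash point but a moderate boiling point
# would fall into Packing Group II under this simplified rule:
print(packing_group(flash_point_c=-20, initial_boiling_point_c=60))  # II
```

The sketch shows why characterization matters: the same hazard class (flammable liquid) maps to different packing groups, and therefore different acceptable packages, depending on the measured flash and boiling points of a particular shipment.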
According to PHMSA officials, because crude oil is a natural resource, it has greater characteristic variability than a hazardous material manufactured under strict specifications or quality guidelines. Thus, testing may need to be done more frequently to make sure the proper packaging rules are followed, since different rules may apply depending on the characteristics of a particular oil shipment. A review by Canadian transportation safety officials determined that the crude oil involved in the Lac-Mégantic accident was packaged under less stringent packing requirements than those that should have been followed, given the flammability characteristics of the oil involved. However, as DOT officials pointed out, a different packing group would not have changed the package itself, since the type of tank car involved in the incident, the DOT-111 tank car, is allowed for crude oil transport under all three packing groups. Stakeholders we spoke to have differing views on the volatility of crude oil from the Bakken region, the area where the most crude oil is being shipped by rail. Some industry stakeholders, including the operators of Bakken crude oil rail terminals, characterized Bakken crude oil as being like any other crude oil produced in the United States, while other stakeholders said it has differences that may make it more volatile. In particular, PHMSA and AAR officials said that Bakken oil has variable composition and may sometimes contain higher than usual levels of dissolved natural gases. According to AAR officials, this can lead to flammable gases building up in a tank car during transport. AAR officials also said that the presence of natural gas makes fires more likely when crude oil tank cars are involved in an accident. Additionally, a May 2014 industry study noted that Bakken crude oil may contain higher amounts of dissolved flammable gases; however, the report states that this is not enough to warrant new regulations for crude oil rail transportation.
According to PHMSA officials, the current regulatory framework for classifying and packaging crude oil for transport by rail may need to be further examined to ascertain whether it addresses all of the risks posed by some shale crude oils that have properties unlike other typical crudes. PHMSA’s July 2014 Notice of Proposed Rulemaking calls for shippers of mined gases and liquids transported by rail, including crude oil, to develop and implement a program for sampling and testing to ensure the shippers’ materials are properly classified and characterized. The procedures must outline a frequency of sampling and testing that accounts for the potential variability of the material being tested, sampling at various points to understand the variability of the material during transportation, sampling methods that ensure a representative sample of the entire packaged mixture is collected, and testing methods used, among other requirements. The sampling and testing program must be documented in writing, retained for as long as it remains in effect, and made available to DOT for review upon request. Representatives from railroads and crude oil terminals we spoke to, as well as from the oil and gas industry, have indicated that clarification about the requirements for testing and packaging crude oil is needed. Specifically, two of the railroads and two crude oil rail terminal operators told us that PHMSA needs to clarify its crude oil testing requirements, including stating more clearly which tests should be done and with what frequency. One of the terminal operators told us that without clearer guidance, they are unsure whether they are performing the right tests and testing with sufficient frequency. They are also concerned they may be incurring unnecessary expense from over-testing.
PHMSA’s July 2014 Notice of Proposed Rulemaking does not state which tests should be performed or specifically how often, but does state that testing methods used should enable complete analysis, classification, and characterization of the material as required by PHMSA’s regulations and that the frequency of testing should account for the potential variability of the material. The notice also seeks comment on whether more or less specificity in these requirements would aid shippers and whether the proposed guidelines provide sufficient clarity for shippers to understand whether they are in compliance. PHMSA has also drafted more detailed guidance on classifying and packaging crude oil, including testing procedures, but had not released it publicly as of June 2014. Additionally, the American Petroleum Institute, an oil and gas industry association, formed a working group in 2014 to develop industry standards for testing and packaging of crude oil for transportation by rail, which the group hopes to implement by October 2014. PHMSA officials said that PHMSA scientists have been attending the group’s meetings and providing input. In its July 2014 Notice of Proposed Rulemaking, PHMSA noted that it is encouraged by the development of an industry standard and that once finalized, PHMSA may consider adopting such a standard. Under PHMSA’s current packaging regulations, a number of types of tank cars are approved for transporting crude oil. However, DOT-111 tank cars are most commonly used, according to industry and railroad representatives, and PHMSA’s regulations allow their use for all types of crude oil, regardless of packing group. NTSB has documented a history of safety concerns with the DOT-111 tank car. Specifically, NTSB has raised concerns regarding the tank car’s puncture resistance, heat tolerance, and potential for overpressurization during a fire.
In its report of a 2011 investigation of the derailment of a train hauling ethanol tank cars, NTSB noted that its 1991 safety study and four train-derailment investigations from 1992 to 2009 had identified problems with DOT-111 tank cars. The report further concluded that the car’s poor performance suggested that DOT-111 tank cars are inadequately designed to prevent punctures and breaches and that the catastrophic release of hazardous materials can be expected when derailments involve DOT-111 cars. In response to that and other rail incidents, in 2012, NTSB recommended that PHMSA upgrade its DOT-111 tank car standards to improve tank shielding and puncture resistance, an issue the industry had already begun to address. In 2011, the railroad industry petitioned PHMSA to adopt improved standards for DOT-111 tank cars and worked with tank car manufacturers and other stakeholders to develop improved industry standards that were implemented later that year. These standards called for a thicker shell to improve puncture resistance, shielding at both ends of the tank car, and protection for the top fittings of the tank car. More recently, in November 2013, following the Lac-Mégantic accident, the railroad industry called for further tank car upgrades, including a thicker shell, protection to prevent overheating, additional shielding, protection for outlet handles on the bottom of a tank, and high-capacity pressure relief valves. Figure 8 shows how these various proposed upgrades may be incorporated into a crude oil tank car. Although tank cars are generally owned by shippers or third parties, one railroad told us it intends to acquire its own fleet of tank cars built to the railroad industry’s 2013 proposed standards.
The railroad hopes to incentivize the cars’ use by shippers. However, railroads are obligated to move materials as long as they are packaged according to federal standards; railroad officials told us they cannot force customers to use upgraded cars and must accept cargo so long as it is in an allowable package, which includes older model DOT-111 cars. A wide range of stakeholders we interviewed—including those from PHMSA, NTSB, state transportation agencies, the railroad industry, and rail suppliers—told us that crude oil tank car standards need to be improved. Most shippers, railroads, and rail suppliers providing comments in response to PHMSA’s September 2013 rail safety Advance Notice of Proposed Rulemaking also stated this opinion. However, there were some differences in their views on how improvements should be implemented. Those who commented in response to PHMSA’s notice supported enacting the industry’s upgraded tank car standard into regulation and were generally supportive of proposals to strengthen tank cars’ puncture resistance through design features such as thicker tank walls, jackets, and shielding. However, stakeholders disagreed on the extent to which existing tank cars should—or even could—be retrofitted to meet higher standards. For example, shippers and rail suppliers stated that existing tank cars built to the current industry standards that already exceed the regulatory standard should be exempt from retrofitting if PHMSA were to adopt an even higher standard. Shippers also expressed concerns that retrofitting would be costly, take tank cars out of service, and put a burden on the already busy shops that also build new tank cars. Local-government and rail-industry commenters supported retrofitting existing cars.
In its July 2014 Notice of Proposed Rulemaking, PHMSA sought comment on requirements for a new DOT-117 tank car standard to replace the current DOT-111 standard for newly manufactured tank cars transporting flammable liquids, which could be one of three options: (1) a design by PHMSA and FRA that would increase puncture resistance, provide thermal protection, protect top fittings and bottom outlets, and improve braking performance; (2) the design in the railroad industry’s November 2013 proposal discussed previously; and (3) the 2011 industry-developed tank car design with some enhancements. PHMSA’s Notice of Proposed Rulemaking concluded that cars built to the option 3 standard would likely be built in the absence of a new rule, based on commitments from industry, but that options 1 and 2 would provide additional safety benefits, along with additional cost. Specifically, PHMSA estimated that the improved braking, rollover protection, and increased shell thickness under option 1 would cost $5,000 more per car than option 3. According to PHMSA, option 2 would have most of the same safety features as option 1 except rollover protection and the improved braking system, resulting in a cost of $2,000 more per car than option 3. In addition to the new DOT-117 standard, the proposal would create a performance standard for the design and construction of new tank cars equivalent to the DOT-117, subject to FRA approval. Under the proposal, existing tank cars would have to be retrofitted to comply with the performance standard. The proposal calls for phasing out use of all DOT-111 tank cars for transporting flammable liquids by October 1, 2020, although the cars could still be used to transport other materials. Although this proposed rule had previously been scheduled for release in November 2014, PHMSA accelerated its efforts to issue a proposal, resulting in the July 2014 release.
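PHMSA's per-car cost estimates can be scaled to a fleet to get a rough sense of the totals at stake. A minimal sketch, using the roughly 50,000 tank cars AAR reported in crude oil service purely as an illustrative fleet size; actual costs would depend on how many cars are newly built versus retrofitted:

```python
# Per-car cost premium over the option 3 (2011 industry) design,
# from PHMSA's July 2014 Notice of Proposed Rulemaking estimates.
premium_over_option_3 = {"option 1": 5_000, "option 2": 2_000, "option 3": 0}

def fleet_premium(option: str, fleet_size: int) -> int:
    """Incremental fleet-wide cost of an option, relative to option 3."""
    return premium_over_option_3[option] * fleet_size

# Illustration only: AAR reported ~50,000 tank cars in crude oil service.
for option in sorted(premium_over_option_3):
    print(option, f"${fleet_premium(option, 50_000):,}")
```

At that illustrative fleet size, the $5,000 and $2,000 per-car premiums translate to roughly $250 million and $100 million, which helps explain why stakeholders weighed the options' safety benefits against their costs.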
According to PHMSA officials, the agency did not issue a proposal sooner because industry and the public have had differing opinions on tank car specifications. Since 2011, PHMSA has received multiple petitions seeking changes to tank car safety standards. In the interim, the railroad industry has moved to adopt higher standards, and in a May 2014 safety advisory, PHMSA and FRA asked companies to refrain from using older tank cars if possible. Transporting a large volume of flammable liquid in one train increases the risk of a large fire or explosion in the event of a derailment, such as in the Lac-Mégantic incident. DOT has noted that the transportation of crude oil in unit trains compounds the risk of ignition, and NTSB has reported that crude oil unit trains present the potential for disastrous consequences in the event of an accident. Associations representing emergency responders told us they are particularly concerned about the risk these trains pose to rural areas, which generally have fewer resources to respond to hazardous materials incidents. They also cited concerns with the general lack of awareness about risks and the need for industry to better communicate with local responders about them. Railroad officials told us that risks from unit trains can be managed. Further, railroad officials told us that transporting crude oil in trains that carry a mixture of freight commodities could be higher risk, due to the need to sort crude oil tank cars in rail yards, and that doing so would lead to reduced efficiency by increasing the turn-around times for crude oil trains. In August 2013, AAR revised its guidance on hazardous materials operating practices so that its restrictions would apply to crude oil unit trains; these restrictions include a 50 MPH speed limit, limitations on the use of track siding, and requirements for addressing defective bearings.
However, associations representing emergency responders told us that industry should do more to prepare responders for potential incidents, such as by providing information, training, and resources. These organizations also shared concerns about rural responders lacking the resources and information to respond as effectively as responders in urban areas, particularly for a major event like an accident involving a crude oil unit train. As previously discussed, PHMSA has engaged emergency responders on crude oil transportation safety, and the voluntary commitment DOT secured from railroads included emergency response planning and training efforts. However, PHMSA’s requirements for comprehensive emergency response planning do not apply to unit trains used to transport crude oil, raising concerns about the abilities of responders and other stakeholders to effectively handle potential incidents. As currently worded, PHMSA’s regulations require comprehensive plans only for trains that haul any liquid petroleum or non-petroleum oil in a quantity greater than 42,000 gallons per package—a threshold above the roughly 30,000 gallons of crude oil typically transported in a single tank car—even though a unit train of 100 cars could carry about 3 million gallons of crude oil. Instead, PHMSA requires railroads to have a basic plan that includes information about the maximum potential discharge, response plans, and identification of (but not coordination with) private response personnel and the appropriate people and agencies to contact in the event of an incident. Federal regulations require that comprehensive emergency response plans include a written plan outlining contingency planning, an identified central coordinating official during an incident, private personnel secured by contract or other means to respond to a worst-case incident, training, equipment, and response actions, and that such plans be subject to review by FRA.
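The regulatory gap described above is a matter of simple arithmetic: the comprehensive-plan threshold applies per package, and each tank car falls below it even though the train as a whole carries far more oil. A minimal sketch of the per-package test:

```python
THRESHOLD_GALLONS = 42_000      # comprehensive-plan trigger, per package
GALLONS_PER_TANK_CAR = 30_000   # typical crude oil tank car load

def comprehensive_plan_required(gallons_per_package: int) -> bool:
    """Per-package test described in PHMSA's current regulations."""
    return gallons_per_package > THRESHOLD_GALLONS

# Each car is below the threshold, so no comprehensive plan is triggered,
# even though a 100-car unit train carries about 3 million gallons total:
print(comprehensive_plan_required(GALLONS_PER_TANK_CAR))  # False
print(f"train total: {100 * GALLONS_PER_TANK_CAR:,} gallons")
```

This is why changing the threshold, as PHMSA's July 2014 Advance Notice of Proposed Rulemaking contemplates, would bring crude oil unit trains under the comprehensive planning requirement.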
Without a comprehensive plan, PHMSA does not have assurance that railroads have taken steps to plan for response needs and have identified and coordinated with the appropriate responders. PHMSA’s July 2014 Advance Notice of Proposed Rulemaking seeks comment on several possible ways of expanding the comprehensive planning requirement to include crude oil unit trains by changing the threshold under which such plans are required. Although this review focuses on the packaging and movement of crude oil in tank cars, our prior work has found that while DOT has taken actions in this area, other safety issues are also relevant to the safety of transporting crude oil by rail. Specifically, in a December 2013 report, we found that FRA has developed a risk-based approach to direct its inspection efforts, but the agency had been slow to implement broader risk reduction planning. As required by the Rail Safety Improvement Act of 2008, FRA was tasked with overseeing railroads’ development of risk reduction plans. Specifically, FRA was required to issue a final rule by October 2012 directing railroads to develop these plans, but our report found that FRA had not yet issued the final rule. Our report described safety challenges that railroads face, some of which can contribute to derailments. Other actions have been taken subsequently. FRA issued a final rule in January 2014 revising track inspection requirements to increase the standard for track used to transport hazardous materials. In addition, as discussed, railroads entered into a voluntary agreement with DOT in February 2014 to improve the safety of crude oil trains. In their comments in response to PHMSA’s September 2013 Advance Notice of Proposed Rulemaking on rail safety, several shippers noted that recent incidents have generally been caused by defective track, railroad equipment, or operational issues, and supported improvements in these areas.
NTSB’s accident report for the aforementioned 2011 derailment of ethanol tank cars noted that although problems with tank cars were a contributing factor, the probable cause of the accident was a broken rail. PHMSA has also noted that addressing the causes of derailments, not just upgrading tank cars, is important for improving the safety of transporting crude oil by rail. According to PHMSA officials, the severity of a derailment may present a wide range of forces for any particular tank car to withstand, and therefore, even an enhanced tank car may have variable performance and may not always perform better in a given derailment. PHMSA’s July 2014 Notice of Proposed Rulemaking to address safety of transporting crude oil by rail includes a number of other provisions in addition to those already discussed for trains carrying 20 or more tank carloads of flammable liquids, including a routing analysis, enhanced braking, and codifying the May 2014 emergency order requiring notification to emergency responders about crude oil shipments. The advent of new oil and gas production technologies has created a new energy boom for the United States. However, with this increase in production comes the responsibility to move those flammable, hazardous materials safely. While the Department of Transportation has worked to identify and address risks, its regulation has not kept pace with the changing oil and gas transportation environment. Gathering pipeline construction has increased, but some of these new pipelines in rural areas fall outside the current safety framework, despite operating at the size and pressure (and therefore similar risk) as federally regulated transmission lines. DOT began a rulemaking to address this issue in 2011 but did not issue proposed rules. 
Subsequently, new gathering pipeline infrastructure has continued to grow, with industry predicting such growth will continue for the foreseeable future, raising concerns where such pipelines are not subject to safety regulations. The growth in the use of rail to move crude oil has likewise revealed risks not fully addressed by the current safety framework, particularly in ensuring that oil is properly tested and packaged for shipping. Emergency responders also need to be adequately prepared in the event that incidents occur, both for pipeline and for rail. Recent transportation incidents, such as the July 2013 train accident in Lac-Mégantic, Quebec, have highlighted the need for risk-based federal safety oversight. Since the Lac-Mégantic accident, much emphasis has been placed on the need to upgrade standards for tank cars that carry crude oil, but attention to tank cars alone is not sufficient to address safety, a sentiment shared by some railroads and shippers, as well as DOT. Oil and gas shippers, railroads and pipeline operators, emergency responders, and government all have a role to play. Shippers, in particular, play an important role in making sure that hazardous materials like crude oil are properly packaged for safe transport. This underscores the importance of DOT’s role in assessing the risk such oil poses and providing clear guidance for handling it safely. Without timely action to address safety risks posed by increased transport of oil and gas by pipeline and rail, additional accidents that could have been prevented or mitigated may endanger the public and call into question the readiness of transportation networks in the new oil and gas environment. DOT’s recent proposed rulemakings to address concerns about transporting crude oil by rail signal the department’s commitment to addressing these important safety issues. Because of the ongoing rail safety rulemakings, we are not making recommendations related to rail at this time. 
To address the increased risk posed by new gathering pipeline construction in shale development areas, we recommend that the Secretary of Transportation, in conjunction with the Administrator of PHMSA, move forward with a Notice of Proposed Rulemaking to address gathering pipeline safety that addresses the risks of larger-diameter, higher-pressure gathering pipelines, including subjecting such pipelines to emergency response planning requirements that currently do not apply. We provided a draft of this report to DOT for comment. We received written comments from DOT’s Assistant Secretary for Administration, which are reproduced in appendix III. These comments stated that PHMSA generally concurred with our recommendation to move forward with a rulemaking to address risks posed by gathering pipelines. Further, the letter stated that PHMSA is developing a rulemaking to revise its pipeline safety regulations and is examining the need to adopt safety requirements for gas gathering pipelines that are not currently subject to regulations. Additionally, the letter stated that proposed regulations are under development to ensure the safety of natural gas and hazardous liquid gathering pipelines that include collecting new information about gathering pipelines to better understand the risk they pose. In the version of the draft report we sent to DOT for comment, we had also recommended that PHMSA develop and publish additional guidance on testing, classification and packaging of crude oil for transport by rail and that PHMSA address emergency response planning regulations for transporting oil by rail so that they include shipments of crude oil by unit trains. DOT’s written response stated that PHMSA generally concurred with these recommendations and was taking steps to address them. Subsequently, on July 23, 2014, PHMSA, in coordination with FRA, issued rulemaking proposals that, if implemented, would likely address these concerns. 
Therefore, we are no longer making those recommendations in this report and we have added language to the report describing the objectives of the proposals. We also received technical comments from DOT, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and to the Secretary of Transportation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Susan Fleming at (202) 512-2834 or flemings@gao.gov or Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. This report addresses (1) challenges, if any, that increased domestic oil and gas production poses for U.S. transportation infrastructure and examples of associated risks and implications; (2) how pipeline infrastructure has changed as a result of increased oil and gas production, the key related safety risks, and to what extent the U.S. Department of Transportation (DOT) has addressed these risks; and (3) how rail infrastructure has changed as a result of increased oil production, the key related safety risks, and to what extent DOT has addressed these risks. To identify challenges increased domestic oil and gas production poses for U.S. transportation infrastructure and examples of the associated risks and implications, we reviewed and synthesized information from 36 studies and other publications from federal, state, and tribal government agencies; industry; academics; and other organizations. 
We identified these studies and publications by conducting a search of web-based databases and resources—including Transport Research International Documentation, ProQuest, and FirstSearch—containing general academic articles, government resources, and “gray literature.” Studies and publications were limited to those focused on domestic onshore oil and gas production and published in the years 2008 through 2013. In addition, we reviewed prior work we have conducted. We included examples of known transportation infrastructure limitations and associated effects from these studies and publications in this report. We believe the studies and publications identified through our literature search and included in our review have identified key examples of known transportation infrastructure limitations and associated effects. In addition, we analyzed data from the U.S. Department of Energy’s Energy Information Administration (EIA) to identify oil and gas produced from 2007 to 2012. To assess the reliability of these data, we examined EIA’s published methodology for collecting this information and found the data sufficiently reliable for the purposes of this report. To determine how pipeline infrastructure has changed as a result of increased oil and gas production, we analyzed data from DOT’s Pipeline and Hazardous Materials Safety Administration (PHMSA) on pipeline construction from January 1, 2010 through December 31, 2012 and interviewed PHMSA officials and representatives of pipeline industry associations and operators. We assessed the reliability of the data on pipeline construction by reviewing documentation about the database, interviewing agency officials about how the data are collected, comparing the data to similar information from EIA on completed pipeline projects, and reviewing the agency’s related internal controls. We determined that the data were sufficiently reliable for describing new pipeline construction projects. 
We identified key pipeline safety risks by reviewing documents provided by and interviewing officials from PHMSA, pipeline industry associations and operators, and safety organizations. To examine trends in pipeline incidents, we analyzed PHMSA’s pipeline incident data from January 1, 2008 through December 31, 2013. This analysis only examined transmission pipeline incidents, since many gathering pipelines are not regulated and therefore the data may not include potential gathering pipeline incidents. We assessed the reliability of these data by reviewing documentation on the collection of these data, interviewing agency officials about how the data are collected and whether there are potential limitations for using the data as we intended, and reviewing the agency’s related internal controls. We determined that these data were sufficiently reliable for identifying trends in pipeline incidents. We also examined transportation infrastructure changes and safety risks specific to key shale development areas in four states selected because they are located above shale plays in different parts of the country with generally the highest levels of oil and gas production from 2007 through 2011, according to EIA data. The states and corresponding shale plays were Pennsylvania and West Virginia (Marcellus shale play), North Dakota (Bakken shale play), and Texas (Eagle Ford shale play). In these states, we spoke with state oil and gas regulatory and transportation agencies, oil and gas industry associations, oil and gas companies, railroads, and crude oil rail terminal operators, as well as a community advocacy organization in Pennsylvania. We also reviewed documents provided by these organizations. 
To determine how rail infrastructure has changed, we analyzed Surface Transportation Board (STB) data on crude oil shipments by rail for calendar years 2008 through 2012 and interviewed DOT officials from PHMSA and the Federal Railroad Administration and industry representatives, including railroads. We assessed the reliability of the STB data by reviewing documentation about the data, interviewing agency officials about how the data were collected and ways they could be analyzed, and reviewing the agency’s related internal controls. We determined that the data were sufficiently reliable for describing trends in the movement of crude oil. To identify the key safety risks related to changes in rail infrastructure, we analyzed PHMSA’s data on rail hazardous-materials incidents from January 1, 2008 through December 31, 2013, reviewed documents submitted to a DOT rulemaking proceeding on rail safety, and interviewed DOT officials and representatives from safety organizations and industry. We assessed the reliability of PHMSA’s incident data by reviewing documentation about the data, interviewing agency officials about how the data were collected, testing the data for inconsistencies, and reviewing the agency’s related internal controls. We concluded that the data were sufficiently reliable for discussing trends in rail hazardous-materials incidents. Additionally, to examine infrastructure impacts and safety issues closely associated with shale areas, we interviewed officials from state oil and gas regulatory and transportation agencies, industry associations, and oil and gas transportation companies in the four states mentioned previously: North Dakota, Pennsylvania, Texas, and West Virginia. To evaluate to what extent DOT has addressed safety risks, we reviewed federal laws and regulations, DOT emergency orders and guidance, and interviewed DOT officials. National and state-level stakeholders we interviewed are listed in tables 2 and 3.
We conducted this performance audit from August 2013 to August 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Pipeline and rail are used to transport shale oil and gas long distances; however, truck transportation via local roads and highways also plays an important role. Trucks transport these goods from production areas to pipeline and rail as well as haul most materials needed to develop oil and gas, such as water, sand, and equipment used during drilling and hydraulic fracturing. State agencies within the four states we examined (North Dakota, Pennsylvania, Texas, and West Virginia) have noted significant local road and highway impacts and safety concerns as a result of shale oil and gas development and have taken steps to address these impacts. States have reported a significant increase in truck traffic as a result of shale oil and gas development, the effects of which are particularly acute in rural areas unused to this level of road congestion. State officials provided estimates of the number of truckloads required to drill and fracture a shale gas well ranging from about 1,200 to about 3,000. Although the number of trucks is greatest during the initial drilling and fracturing phases, significant truck volume may return if a well is re-fractured and, in the case of oil wells, trucks can be used to remove oil if the wells are not connected by pipeline. State officials told us that roads in many of these areas prior to development were built for light local and farm use and were not built for the additional thousands of heavy truck loads associated with oil and gas development, leading to deterioration.
North Dakota officials told us that many of the roads the oil industry is using were built to handle approximately 600 loads a day and now these roads can see thousands of heavy truck loads per day. Officials also told us that state-wide, truck traffic accounts for approximately 18 percent of all traffic, but in development areas truck traffic can account for 35 to 50 percent. As a result, it has been much more difficult to predict where the biggest road deterioration is going to happen because it depends on the location and intensity of shale development. In Pennsylvania, officials told us the increased volume of trucks has shortened the roads’ normal life cycle, leading to accelerated deterioration and significant damage. The costs of shoring up and rebuilding roads to address these impacts are significant. The Texas Department of Transportation estimated an annual impact to farm-to-market roads, state highways, and local roads in the Eagle Ford area of about $4 billion. Road deterioration and increased truck volumes have created safety concerns in these states. Reported highway incidents involving crude oil have increased in recent years in North Dakota and Texas, the two states we examined with significant shale oil development. In Texas, for example, the Texas Department of Transportation reported an increase in highway crashes in the Eagle Ford shale and Permian Basin areas, with the Permian area seeing a 13 percent increase in roadway fatalities between 2012 and 2013. The following actions are examples of ways state officials said they have addressed highway infrastructure and safety concerns: Extraction taxes. Officials in North Dakota told us the state uses taxes on extracted mineral resources to pay for road improvements. In 2013, the Texas state legislature voted to transfer a portion of the state’s oil and gas severance tax to pay for road maintenance, a measure that will go before Texas voters for approval in November 2014. Use agreements. 
In Pennsylvania and West Virginia, the states have entered into road-use agreements with energy companies that make the companies responsible for the damage they cause to roads and for maintaining them. Companies must also pay a bond as part of the agreement. Officials told us these agreements have helped make companies more responsible for their impact. For example, in Pennsylvania, officials told us industry had invested over $750 million in roadway infrastructure improvements. Public awareness. Texas launched a public education campaign to alert drivers to the need to use caution when driving through energy-related work zones. Although much of the impact has been on rural, nonfederal roads, the Federal Highway Administration has been involved in helping states to coordinate information sharing. For example, in 2011, the Federal Highway Administration hosted an information-sharing meeting between officials in Pennsylvania and West Virginia, who told us the session was beneficial. In addition to the individuals named above, Karla Springer (Assistant Director), Sara Vermillion (Assistant Director), Melissa Bodeau, Lorraine Ettaro, Quindi Franco, David Hooper, Andrew Huddleston, John Mingus, Joshua Ormond, James Russell, Holly Sasso, Jay Spaan, Jack Wang, Amy Ward-Meier, and Jade Winfree made key contributions to this report.
Technology advancements such as horizontal drilling and hydraulic fracturing (pumping water, sand, and chemicals into wells to fracture underground rock formations and allow oil or gas to flow) have allowed companies to extract oil and gas from shale and other tight geological formations. As a result, oil and gas production has increased more than fivefold from 2007 through 2012. DOT oversees the safety of the U.S. transportation system. GAO was asked to review oil and gas transportation infrastructure issues. This report examines (1) overall challenges that increased oil and gas production may pose for transportation infrastructure, (2) specific pipeline safety risks and how DOT is addressing them, and (3) specific rail safety risks and how DOT is addressing them. GAO analyzed federal transportation infrastructure and safety data generally from 2008 to 2012 or 2013 (as available), reviewed documents, and interviewed agency, industry, and safety stakeholders, as well as state and industry officials in states with large-scale shale oil and gas development. Increased oil and gas production presents challenges for transportation infrastructure because some of this increase is in areas with limited transportation linkages. For example, insufficient pipeline capacity to transport crude oil has resulted in the increased use of rail, truck, and barge to move oil to refineries, according to government and industry studies and publications GAO reviewed. These transportation limitations and related effects could pose environmental risks and have economic implications. For instance, natural gas produced as a byproduct of oil is burned—a process called flaring—by operators due, in part, to insufficient pipelines in production areas. In a 2012 report, GAO found that flaring poses a risk to air quality as it emits carbon dioxide, a greenhouse gas linked to climate change, and other air pollutants. 
In addition, flaring results in the loss of a valuable resource and royalty revenue. Due to the increased oil and gas production, construction of larger, higher-pressure gathering pipelines (pipelines that transport products to processing facilities and other long-distance pipelines) has increased. However, these pipelines, if located in rural areas, are generally not subject to U.S. Department of Transportation (DOT) safety regulations that apply to other pipelines, including emergency response requirements. Historically, gathering pipelines were smaller and operated at lower pressure and thus posed less risk than long-distance pipelines. But the recent increase in their size and pressure raises safety concerns because they could affect a greater area in the event of an incident. In 2011, DOT began a regulatory proceeding to address the safety risks of gathering pipelines, but it has not proposed new regulations. Although states may regulate gathering pipelines, a report on state pipeline oversight by an association of state pipeline regulators shows that most states do not currently regulate gathering pipelines in rural areas. The number of crude oil carloads moved by rail in 2012 was 24 times the number moved in 2008. Such an increase raises specific concerns about testing and packaging of crude oil, use of unit trains (trains of about 80 to 120 crude oil cars), and emergency response preparedness. Crude oil shippers are required to identify their product's hazardous properties, including flammability, before packaging the oil in an authorized tank car. DOT has issued safety alerts on the importance of proper testing and packaging of crude oil. However, industry stakeholders said that DOT's guidance on this issue is vague and that clarity about the type and frequency of testing is needed. In July 2014, DOT proposed new regulations for crude oil shippers to develop a product-testing program subject to DOT's review.
Additionally, unit trains, which can carry 3 million or more gallons of crude oil and travel to various locations throughout the country, are not covered under DOT's comprehensive emergency response planning requirements for transporting crude oil by rail because the requirements currently apply only to individual tank cars and not unit trains. This raises concerns about the adequacy of emergency response preparedness, especially in rural areas where there may be fewer resources to respond to a serious incident. Also in July 2014, DOT sought public comment on potential options for addressing this gap in emergency response planning requirements for transporting crude oil by rail. DOT should move forward with a proposed rulemaking to address safety risks—including emergency response planning—from newer gathering pipelines. DOT generally concurred with the recommendation and stated that it is developing a rulemaking to revise its pipeline safety regulations.
The goal of SNAP is to help low-income individuals and households obtain a more nutritious diet by supplementing their income with benefits to purchase allowable food items. The overarching rules governing SNAP are set at the federal level. Accordingly, FNS is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states, or in some cases counties, administer the program by determining whether households meet the program’s eligibility requirements, calculating monthly benefits for qualified households, and issuing benefits to participants on an electronic benefits transfer card. States are also allowed flexibility in establishing some state-specific policy modifications, such as through state options. One financial criterion for SNAP eligibility and benefit amount involves household income, which can come from various sources, including earned income, such as wages and salaries, and unearned income, such as payments from other government programs (see table 1). Generally, under federal law, a household’s gross income cannot exceed 130 percent of the federal poverty level ($2,628 per month for a family of four for most states in fiscal year 2016). The household’s net income, which is determined by deducting certain expenses from gross income, such as certain dependent care and shelter costs, cannot exceed 100 percent of the federal poverty level ($2,021 per month for a family of four for most states in fiscal year 2016). Net income is used in determining the household’s benefit amount, subject to maximum benefit limits. Generally, eligibility is based on various household circumstances, including income. After eligibility is established, households are certified to receive SNAP for periods ranging from 1 to 24 months depending upon household circumstances and state-selected policy options. 
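The two income tests described above can be sketched as follows. This is an illustrative simplification using the fiscal year 2016 figures for a family of four in most states; the function name and structure are ours, not an actual FNS implementation, and real determinations involve many additional rules.

```python
# Illustrative sketch only; figures are the FY2016 limits for a family of
# four in most states, as cited in the report. Not actual FNS policy code.
GROSS_LIMIT = 2628  # 130 percent of the federal poverty level, monthly
NET_LIMIT = 2021    # 100 percent of the federal poverty level, monthly

def passes_income_tests(gross_income: float, deductions: float) -> bool:
    """Generally, gross income must not exceed 130% FPL and net income
    (gross minus certain deductions such as dependent care and shelter
    costs) must not exceed 100% FPL."""
    net_income = gross_income - deductions
    return gross_income <= GROSS_LIMIT and net_income <= NET_LIMIT

print(passes_income_tests(2500, 600))  # True: gross 2500 <= 2628, net 1900 <= 2021
print(passes_income_tests(2500, 300))  # False: net 2200 exceeds 2021
```

Note that net income also drives the benefit amount itself, subject to maximum benefit limits, which this sketch does not attempt to model.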
Households are required to report certain changes during the certification period which can affect their eligibility and benefit amounts. Once the certification period ends, there is a recertification process whereby households reapply for benefits and eligibility and benefit levels are redetermined. SNAP agencies use data matching to obtain information about households’ income, verify information that households provide when they initially apply or recertify for benefits, or identify potential discrepancies. For households that already receive SNAP benefits, data matching can provide information about changes in income that affect households’ eligibility or benefit levels (see fig. 1). In certain cases, data matching can take the place of traditional forms of verifying information provided by applicants, such as requiring households to submit documentation (e.g. pay stubs or a child support agreement) or making collateral contacts, such as a phone call to an employer. Certain federal policies that help protect individuals from inappropriately losing benefits affect whether SNAP agencies need to take additional steps to verify information from a data match. Federal law generally requires that government agencies administering benefits using matching programs verify information from matches before reducing or terminating benefits unless specified government entities have determined that there is a high degree of confidence that the information is accurate. For SNAP, FNS defines some data matches as “verified upon receipt,” if the match is with a primary or original source of the data (such as information on a government benefit provided by the administering agency) and is not questionable. An example is a match with SSA that provides information on the amount of Old-Age, Survivors, and Disability Insurance (OASDI) benefits a household receives. 
Eligibility workers can use this information for eligibility determinations without taking additional steps to verify that the data are accurate, according to FNS guidance. In contrast, data from a secondary source (not verified upon receipt) require additional verification before they can be used in eligibility determinations. For instance, state quarterly wage data are considered a secondary source because they are a compilation of earnings data submitted by employers to the state workforce agency. Accordingly, the SNAP caseworker must take additional steps, such as contacting an employer or requesting paystubs from the client, to verify that information from that match is accurate before using it to change a household’s eligibility status or benefit amount. Failing to verify information from secondary sources can cause eligible households to lose benefits if caseworkers act on inaccurate information. Alternatively, if caseworkers do not follow up to verify information from secondary sources, they may miss opportunities to reduce or prevent improper payments. Regardless of whether the data are from primary or secondary sources, SNAP agencies are required to notify the household of the actions they intend to take and provide the household with an opportunity to request a fair hearing prior to any adverse action. Federal law and regulations require states to conduct certain data matches for SNAP eligibility, including three matches that provide non-income-related information on people who may be incarcerated, deceased, or otherwise disqualified from receiving SNAP benefits (see table 2). Most recently, the Agricultural Act of 2014 required SNAP agencies to conduct a match with the National Directory of New Hires (NDNH) to verify employment data. In response to our survey, all states reported conducting multiple data matches for income information that they use to determine SNAP eligibility.
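The verification rule described above amounts to a simple branch on the data source. The sketch below is illustrative only; the source labels, function name, and wording are ours, not FNS policy code.

```python
# Hedged sketch of the verification rule described above. Source labels
# and return strings are hypothetical, not actual FNS terminology.

# Primary sources (e.g., benefit amounts provided by SSA, the
# administering agency) are "verified upon receipt."
PRIMARY_SOURCES = {"SSA_OASDI", "SSA_SSI"}
# Secondary sources (e.g., state quarterly wage data compiled from
# employer reports) require follow-up before use.
SECONDARY_SOURCES = {"STATE_QUARTERLY_WAGES"}

def next_step(source: str) -> str:
    if source in PRIMARY_SOURCES:
        # Usable without additional verification, though the household
        # must still be notified before any adverse action.
        return "use in eligibility determination; notify household before adverse action"
    # Secondary source: verify first (employer contact, pay stubs) so
    # caseworkers do not act on inaccurate information.
    return "verify with employer or household before acting"

print(next_step("SSA_OASDI"))
print(next_step("STATE_QUARTERLY_WAGES"))
```

Either branch ends with the same due-process step noted in the report: notice to the household and an opportunity to request a fair hearing before benefits are reduced or terminated.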
These matches gather information from federal, state, and commercial data sources on earned income from employment or self-employment, or unearned income from other government benefit programs (see table 3). As of September 2016, 39 state SNAP agencies conduct the federally required match with the NDNH New Hire data (see sidebar), according to our survey and information from HHS/ACF’s Office of Child Support Enforcement (OCSE) on states’ progress in implementing the match. FNS required states to implement this match for SNAP eligibility purposes by September 21, 2014; however, some states have experienced implementation delays, and FNS officials said that each of the 12 remaining states is working towards implementing the match. States that experienced implementation delays had not yet developed the capacity to conduct the NDNH New Hire match or faced competing priorities for resources, according to FNS, OCSE, and state officials. For example, two of the remaining states said they will implement the match in coordination with efforts to upgrade or replace SNAP eligibility systems. Two other states said in their survey responses that a lack of resources has delayed their implementation of the match. FNS monitors states’ progress in implementing the required NDNH New Hire match and promotes states’ compliance by reviewing monthly reports on states’ matching activities and following up with states as needed, according to FNS officials. Specifically, FNS officials reported that the national office in Washington, D.C., reviews monthly reports from OCSE on states’ matching activities and forwards information to FNS regional offices for follow-up as needed.
Data matches with primary sources, those that allow for real-time access to data, and those that provide up-to-date information are useful because they enable efficient and accurate SNAP eligibility determinations, and matches that combine these characteristics are particularly useful, according to our analysis of state survey responses. Officials from SNAP agencies we interviewed explained that these data matching characteristics streamlined the eligibility determination process for caseworkers and households. Data matches with primary data sources are more useful because caseworkers can use the information without having to conduct additional verification, according to officials from each of the six state SNAP agencies we interviewed. Matches that can be accessed in real-time, or immediately upon request, can help caseworkers make determinations more quickly or eliminate the need to revisit eligibility or benefit determinations multiple times, according to officials we interviewed in five of the six states. Real-time access allows caseworkers to verify information provided by applicants and discuss apparent discrepancies during interviews rather than contact households again later. It can be difficult for caseworkers to contact households, according to officials we spoke with in four of the six states, and the need to do so can create delays in processing applications, a key FNS performance measure for SNAP. FNS regulations generally require recent information on income for SNAP eligibility and benefit determinations. Generally, FNS directs states to use income received in the last 30 days as the basis for determining SNAP benefit amounts. Accordingly, matches that provide current information can be used to accurately calculate or adjust benefits whereas older data may introduce errors if not updated. Officials from each of the six states we interviewed emphasized the importance of up-to-date information. 
Four data matches for unearned income have each of the useful characteristics described above, while the data matches for earned income lack one or more of these characteristics. The four data matches most states reported using and finding very or extremely useful are with primary sources including state or federal agencies that administer benefit programs that provide households’ unearned income (see fig. 2). Matches with SSA can provide real-time access to current SSI and OASDI program benefits that a household may receive. Matches with state-level unemployment insurance and child support enforcement agencies can provide up-to-date and real-time access to information on households’ receipt of benefits or support payments, depending on states’ matching capabilities. The data source for earned income that the most states reported as very or extremely useful was the commercial verification service The Work Number®, which is owned by Equifax Inc. and stores employment and earnings information gathered from participating employers’ payroll systems. Equifax representatives told us they estimate that The Work Number covers about 35-40 percent of the working population at any given time. Additionally, FNS officials told us that while The Work Number is a secondary source of information for SNAP eligibility purposes, SNAP agencies can use it to verify earned income reported by households or to identify potential earnings that were not reported. Similar to our findings in a recent report on the use of commercial data services to help identify fraud and improper payments, officials from five of the six states where we interviewed SNAP agency officials said The Work Number can improve program integrity or program efficiency by providing real-time access to accurate and up-to-date information. 
Also, use of The Work Number can reduce the reporting burden on households and employers because caseworkers can use it in lieu of collecting pay stubs from households or contacting employers to confirm reported earnings, according to state or local officials in these five states and Equifax representatives we interviewed. Data matches that do not have the characteristics that states found particularly useful can be used as leads to detect income that households may not have reported. For example, although state new hire directories and the NDNH New Hire data do not include information on the amount of employee earnings, caseworkers can use them to identify new employment that may not be reflected in households’ case files and follow up with households or employers to verify earnings. Eligibility workers at one county office we visited said that although not all data matches they used had complete or up-to-date information, they were still useful because they prompted action and helped prevent potential improper payments. In our survey, we asked states about overall challenges for the income- related data matches they use, and the issue the most states found very or extremely challenging was following up to verify information provided by those matches. Other issues related to the data, such as the recency of information, were also noted as challenges (see fig. 3). SNAP agency officials we interviewed in each of the six states said that following up on data matches to ensure information is accurate and up-to-date can be time intensive and difficult to achieve. As discussed earlier, the need to follow up on data matches stems from limitations in using the data for SNAP eligibility purposes. According to FNS guidance, before taking certain actions that affect households’ benefits, SNAP agencies should confirm income information that is not considered verified. 
FNS does not consider information that is questionable or that comes from a secondary rather than a primary source of information to be verified. These secondary data sources are often collected for multiple purposes and thus may not be sufficiently recent, accurate, or complete for SNAP eligibility determinations. NDNH files, for example, contain data used for many purposes. They are used to help state child support agencies locate parents and enforce child support orders. They also assist with the administration of SNAP, TANF, and child and family services programs, among others. However, some information, such as that in NDNH's Quarterly Wage file, may not be sufficiently recent to serve as the basis for SNAP eligibility determinations without potentially leading to improper payments. It can take months to compile this information from employers at state and national levels—states have four months after the end of a calendar quarter to transmit quarterly wage data to NDNH. Thus, it may be difficult to accurately calculate SNAP benefits from quarterly earnings if employees experienced changes in employment, hours worked, or wage rates. In addition, officials we spoke with from four of the six states said that a new hire match may not indicate that someone actually earned wages, such as when someone signs up with a temporary employment agency but has not yet been placed in a job. Working families make up a growing share of SNAP households, yet earned income information can be particularly difficult to obtain through data matches. Data matches that provide earned income information do not come from primary sources and have unique advantages and disadvantages in how recent or comprehensive their data are for determining SNAP eligibility (see table 4). Unlike several data matches for unearned income, all data matches for earned income lack one or more characteristics that are useful for determining SNAP eligibility.
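The staleness problem with quarterly wage data described above can be illustrated with a simple hypothetical calculation. The dollar amounts and the straight three-way proration below are illustrative only, not drawn from FNS policy:

```python
def monthly_income_from_quarterly(quarterly_wages):
    """Prorate a quarterly wage total to an average monthly figure."""
    return quarterly_wages / 3

# Hypothetical household: $4,500 earned in the most recent reported
# quarter, but the worker's hours were cut afterward, so income actually
# received in the last 30 days is only $900.
prorated = monthly_income_from_quarterly(4500)  # 1500.0
actual_last_30_days = 900

# A benefit calculation based on the stale quarterly figure would
# overstate current monthly income by $600 in this example.
overstatement = prorated - actual_last_30_days
print(overstatement)  # 600.0
```

Because income received in the last 30 days is generally the basis for benefit amounts, a gap like this is one reason quarterly wage matches serve better as leads for follow-up than as direct inputs to benefit calculations.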
Earned income data sources such as the PARIS federal file, The Work Number, and various state sources do not include all employers, whereas more comprehensive sources such as NDNH or Internal Revenue Service earnings and tax data are more than 30 days old, lack relevant details, and are not available in real time. In addition, states reported that the security and privacy safeguards needed to use tax data can be particularly challenging. Accordingly, SNAP agencies reported that they rely on multiple data matches for earned income information, which they should confirm is correct and relevant before using it for eligibility or benefit determinations. Similarly, several data matches provide information on whether households received unearned income in the form of benefits from programs in other states or receive benefits from the same program in multiple states at the same time, but following up to confirm matches across states can be difficult. Data matches on benefits in other states can help SNAP agencies identify when a household or household member moves from one state to another but has not reported the move, as well as detect possible instances of intentional fraud. Data matches such as the PARIS Interstate file and NDNH Unemployment Insurance file aggregate income information across states and can provide useful leads for follow-up by states. However, 13 states reported in our survey that lack of access to income information from other states was very or extremely challenging. In addition, officials we interviewed in four of six states said that following up across states is time consuming, both when obtaining verifications from other states and when providing verifications to other states upon request.
For example, an official from one state noted that the state receives approximately 500 inquiries a month from other states following up on matches from the PARIS Interstate file, and the state must respond to each one in writing. Officials from another state said that out-of-state benefits were the most cumbersome form of income to verify because the process varies by state or even by county, and other agencies may be slow to provide information, place verification requesters on hold for extended periods of time, or frequently change their points of contact. States reported efforts to target and streamline data matching activities in order to reduce some of the manual work associated with data matching, including worker follow-up. SNAP agency officials we interviewed in three of the six states reported initiatives underway to improve data matching efficiency by better prioritizing which matches need follow-up. For instance, in response to earlier challenges it encountered with data matching processes, Massachusetts has gone through a yearlong effort to better distinguish patterns of information from its quarterly wage and new hire matches that indicate whether follow-up is worthwhile, according to state officials. By developing more sophisticated data matching algorithms, state officials hope to be able to filter out new hire matches that are not likely to affect SNAP benefits, such as one-day employment as an election worker or college students' income from federal work-study programs, which are not counted when determining SNAP eligibility and benefits. Massachusetts officials said that efforts such as these to use data matches more strategically can improve program integrity and service to households, as well as administrative efficiency.
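The kind of rule-based filtering Massachusetts officials described might be sketched as follows. This is an illustrative sketch, not the state's actual system; the category names and match records are hypothetical placeholders:

```python
# Income categories that, per the examples above, do not count toward SNAP
# eligibility. Category names here are hypothetical placeholders.
EXCLUDED_CATEGORIES = {"election_worker", "federal_work_study"}

def needs_follow_up(match):
    """Return True if a new hire match is worth caseworker follow-up."""
    return match["category"] not in EXCLUDED_CATEGORIES

matches = [
    {"case_id": "A", "category": "retail"},
    {"case_id": "B", "category": "election_worker"},
    {"case_id": "C", "category": "federal_work_study"},
]

# Only the retail hire survives the filter and is queued for follow-up;
# the excluded-income matches are screened out before reaching a worker.
queued = [m["case_id"] for m in matches if needs_follow_up(m)]
print(queued)  # ['A']
```

Even a simple screen like this reduces the number of matches a caseworker must manually investigate, which is the administrative efficiency the state officials described.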
Also, among states we interviewed, Texas, Virginia, and Washington have implemented information systems that enable caseworkers to look up available client information from multiple state and federal data systems with one search rather than conducting each data match individually. In addition, five states responded to our survey that they have systems to automatically import information from most or all matches into their eligibility systems without additional manual data entry. Similarly, SNAP agency officials in two of the six states we interviewed (Massachusetts and Washington) said they were involved in state-initiated efforts to address the need for faster, more complete cross-state data. For example, Massachusetts officials described a direct match that their state does with New York to provide one another data on households receiving benefits. Massachusetts officials said they can take action on exact matches showing household members receiving benefits from both states, and follow up on leads as appropriate if matches are not exact. Washington state officials told us that they provide neighboring states with certain read-only access to an online benefits verification system to allow those states to more easily confirm whether applicants already receive benefits in Washington. Washington officials told us this online system is easier than faxing or phoning the information to other states, and that they are pursuing access to other states' information through similar online systems.

States reported ways they have integrated SNAP eligibility processes with other programs, particularly TANF and Medicaid, but also reported that limitations on using the same data matches for multiple programs were challenging. States sometimes integrate aspects of the SNAP eligibility process with those of other programs, such as through combined applications, common eligibility workers, or integrated or linked eligibility systems.
According to our survey, SNAP eligibility processes were most commonly integrated with state TANF cash assistance programs, as well as with state Medicaid programs, although to a somewhat lesser degree. However, 21 states reported that they found it very or extremely challenging that Medicaid and other programs use income information from matches differently than SNAP because, for example, processes for verifying income information or definitions of household income differ across programs. One way state Medicaid programs verify income information differently than SNAP is by accessing data sources through CMS’s federal data services hub (the Hub). As we have previously reported, CMS created the Hub to implement provisions of the Patient Protection and Affordable Care Act (PPACA). The Hub provides a single access point for state agencies to gather information from various data sources used to verify eligibility determinations for Medicaid and related insurance affordability programs. More specifically, the Hub provides access to OASDI data from SSA and earnings data from Equifax’s Work Number, among other data, according to our prior work and additional information from CMS, SSA, and Equifax. Although these data are also used for SNAP, if state agencies access this information from the Hub, they cannot use it for SNAP eligibility determinations, even for a household that applies for or receives both Medicaid and SNAP benefits, according to FNS guidance. This is because data use agreements between CMS and federal agencies and the contract between CMS and Equifax do not authorize states to also use data accessed through the Hub for SNAP, according to officials from CMS and SSA. 
Due to these restrictions, when a state uses the Hub for Medicaid eligibility and a caseworker would like to use information on OASDI benefits or earnings available from The Work Number to determine SNAP eligibility, the caseworker would need to access this information independently, in effect conducting duplicative data matches to verify some of the same information for the same household (see fig. 4). Officials we interviewed in four of the six states, as well as officials from six additional states responding to open-ended survey questions, reported that they would like to use the Hub for SNAP determinations or that not being able to do so was challenging. During interviews, officials from two of these states noted that this was challenging given that households that participate in SNAP are often also enrolled in Medicaid. For instance, an official from one state we interviewed expressed frustration with the duplicative work caused by not being able to use data accessed through the Hub for Medicaid eligibility verifications to also verify SNAP or TANF eligibility, when applicable. An official from another state told us that having different processes to verify income for SNAP and Medicaid posed challenges to integrating program operations. Specifically, the official said that it was difficult to train eligibility workers to verify income in different ways for Medicaid and SNAP, and that it is also more difficult to integrate eligibility systems and combine households' records across these programs when income information from data matches can be used for Medicaid but not for SNAP. Concerns regarding program inefficiencies, duplicative work, or additional costs were echoed by representatives we interviewed among five of seven stakeholder groups. Costs may also be a factor in states' ability to access data.
About one-third of states reported in our survey that upfront and ongoing costs associated with conducting data matches were very or extremely challenging overall (see fig. 5), and states reported costs as particularly challenging with respect to use of The Work Number. Upfront costs are those associated with establishing a new data match, such as developing data sharing agreements, updating information systems, or adapting business processes to use the match. Ongoing costs include those charged by the source agency or the commercial provider for ongoing use of the match, such as a per-match charge. Additionally, there are costs associated with maintaining data sharing agreements. More states reported that costs associated with The Work Number were very or extremely challenging than they did for other national data sources. Of the 45 states that used The Work Number match, upfront costs were a challenge for 17 states, and ongoing costs were a challenge for 19. Some states limit their use of The Work Number or do not use it at all due to costs, according to interviews with state and FNS officials and comments from states in response to our survey. For instance, officials from one state we interviewed said they would like to be able to use The Work Number for every application and that applicants ask them to verify earnings through The Work Number, but because of the cost of these matches, caseworkers are asked to conduct these matches only as a last resort when other forms of verification are not available. Even with these limitations, the state contracts for a limited number of matches per month and frequently loses access to The Work Number when that limit is met before the end of the month, according to officials in this state and Equifax representatives. Such challenges can create inefficiencies for eligibility workers and increase the burden for households.
FNS has initiated several demonstration projects or pilots aimed at using data matching to improve client access, program integrity, or program efficiency for SNAP (see table 5). Two of these initiatives, the Combined Application Project and the Elderly Simplified Application Project, are designed to help vulnerable populations, particularly the elderly, obtain benefits more easily. Currently, states are able to participate in these projects by requesting waivers from FNS. However, based on the success of these demonstrations in increasing elderly participation in SNAP, FNS recently proposed creating a state option (instead of the use of waivers) that would allow states to adopt a set of policies to streamline and simplify SNAP application, reporting requirements, and recertification for low-income elderly individuals. In contrast, based on the results of another demonstration that explored the use of quarterly wage data matches, FNS recently concluded that it could not recommend the use of these data, as doing so affected the accuracy of benefit amounts. A fourth FNS-sponsored project, the National Accuracy Clearinghouse (NAC) pilot, created a new data sharing system in 2013 that enabled five pilot states affected by Hurricane Katrina (Alabama, Florida, Georgia, Louisiana, and Mississippi) to share information across states about the receipt of SNAP benefits. The 2015 evaluation of the NAC reported that the five participating states were able to identify or prevent duplicate receipt of SNAP benefits among these states. Unlike the PARIS data match, which identifies potential duplicate receipt a number of months after it has occurred, the NAC's real-time data could help states prevent duplicate receipt of benefits. These sentiments were echoed by Florida officials we interviewed, who told us that the NAC is more useful than PARIS because it is updated daily rather than quarterly.
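A real-time duplicate check of the kind the NAC enables could work roughly as follows. This is a sketch under assumed data structures, not the NAC's actual design; the client identifiers and registry are hypothetical:

```python
# Hypothetical shared registry of active SNAP cases contributed by member
# states, keyed by a client identifier.
active_cases = {"client-123": "LA"}

def duplicate_check(client_id, applying_state):
    """Return the other member state where the client already receives
    SNAP benefits, or None if no duplicate is found."""
    current_state = active_cases.get(client_id)
    if current_state is not None and current_state != applying_state:
        return current_state
    return None

print(duplicate_check("client-123", "FL"))  # 'LA': flag before certifying
print(duplicate_check("client-999", "FL"))  # None: no duplicate found
```

Because the check runs at certification rather than months afterward, the duplicate can be resolved before a second state begins issuing benefits, which is the key advantage over the quarterly PARIS match.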
The evaluation also found that net cost savings were achieved in each of the five participating states, although to varying degrees.

Funding Available for Health and Human Services Integration
The federal government funds 90 percent of the qualifying costs of information technology (IT) improvements for investments in the design and development of state eligibility-determination and enrollment systems for Medicaid and the Children's Health Insurance Program. Under normal cost allocation rules, human services programs that benefit from these systems improvements would be required to pay their share of associated costs. However, under a waiver of these rules through 2018, state human services agencies do not need to pay their share of costs for certain systems improvements that benefit both health and human services programs.

Along with several SNAP-focused demonstration projects and pilots, FNS has also worked with HHS on initiatives that affect SNAP and its integration and interoperability (the ability to exchange information across systems) with other health and human services programs. These efforts have largely been spurred by time-limited funding opportunities aimed at promoting more streamlined and efficient enrollment and eligibility systems across state health and human services programs, according to information from HHS and FNS. Specifically, to help states meet requirements under the PPACA, CMS increased the level of federal funding for information technology (IT) modernization for health systems improvements. Human services programs can benefit from these improvements at little or no additional cost, under a time-limited waiver of normal cost allocation rules (see sidebar). FNS has worked with CMS and ACF to promote states' awareness and use of this funding. For example, these three agencies sent a joint letter to states in 2011 that announced the cost allocation waiver and another in 2015 that announced an extension of the waiver through 2018.
According to FNS, 38 states had availed themselves of the cost allocation waiver to fund health systems improvements that also include improvements to SNAP eligibility systems at the time of this review, and according to our survey, almost three-fourths of states (29 of 40) that rated the usefulness of this funding said that it was very or extremely helpful. Since 2013, FNS has also been involved in CMS-led interagency efforts to address the duplication and program integration challenges, previously discussed, related to restrictions on data accessed through the Hub for joint Medicaid cases that include SNAP or TANF. Officials from FNS, CMS, and ACF (the federal agency that oversees TANF), as well as SSA, told us that they have been working for several years to develop policy options and modified data sharing agreements to allow SSA data accessed through the Hub for Medicaid verifications to also be used for SNAP and TANF in joint cases. Officials told us that progress in addressing these issues has been slow due to the complexity of various legal requirements, including those under the Privacy Act of 1974. However, CMS officials told us that they have the support of multiple agencies, including SSA and the Office of Management and Budget (the federal agency that provides guidance on the Privacy Act), to allow TANF and SNAP agencies' use of Hub data, and they hope to resolve these issues by the end of the calendar year. In our prior work on data sharing across state and local human services programs, we noted that strong leadership support across agencies is a key factor for success in facilitating data sharing. These interagency efforts to improve efficiencies across programs for Hub-accessed data are in line with several efforts aimed at better coordination across government.
In our extensive work on duplication, fragmentation, and overlap in the federal government, we defined duplication as occurring when two or more agencies or programs are engaged in the same activities or providing the same service to the same beneficiaries. We have reported that federal agencies should identify opportunities to reduce duplication, which could enable better management of program administration or result in potential cost savings. In the case of the Hub, because some duplicative data matching processes related to eligibility determinations occur due to the current Hub restrictions, reducing these duplicative processes may result in improved administrative efficiencies. Additionally, the interagency efforts related to the Hub are also in line with HHS recommendations, developed in response to requirements under the PPACA, which called on federal agencies to promote greater interoperability across health and human services programs to streamline program administration, including the use of data for eligibility determinations among several programs when possible. Similarly, officials from CMS, FNS, and ACF indicated that their current efforts in facilitating use of data from the Hub across Medicaid, SNAP, and TANF were part of a larger vision to promote health and human services interoperability and integration overall, and were related to current funding opportunities for such integration, discussed earlier in the report. In a tri-agency letter to states, these agencies stated that they were “committed to a strong partnership with states and our federal stakeholders as we work together to implement our shared vision of interoperable, integrated and consumer-focused health and human services systems.” FNS also has begun to explore ways to help states address some of the cost challenges associated with use of The Work Number discussed earlier in the report. 
Equifax representatives told us that each state SNAP agency that conducts data matches with The Work Number contracts independently with Equifax, and states pay different prices to use the service due to various factors, including the volume of matches that each state purchases. In contrast, CMS has negotiated a single contract with Equifax that allows states to access information from The Work Number through the Hub to determine eligibility for Medicaid and related insurance affordability programs. Due to volume pricing, Equifax representatives told us that a single contract covering all state SNAP agencies' use of The Work Number would likely lead to lower costs per match than what most state SNAP agencies pay now through individual contracts. It would also help states by eliminating the need for each state to negotiate separate contracts. FNS and CMS officials told us that they have discussed opportunities to expand use of The Work Number through the Hub for SNAP verifications when CMS's current contract with Equifax expires in 2018 and the service is reprocured. In addition, FNS officials told us that FNS could also consider negotiating a separate, single contract for The Work Number that would allow all state SNAP agencies to access the match, but would first explore possibilities that involve access through the Hub. Various options have the potential to result in cost savings, such as through volume-pricing discounts or enabling the use of the match across programs. Despite FNS' current efforts, 32 states reported that more information from FNS on promising data matching practices, such as on the use of different data matches or on ways to filter data to streamline worker follow-up, would be extremely or very useful, according to our survey. However, information on such practices available on FNS websites is limited, based on our review.
FNS uses both its public website and its PartnerWeb site (a web portal available to state agencies and others) to publish guidance, share information on state practices, and communicate other information to SNAP agencies. Officials at one FNS regional office said that, while states may engage in more informal communication with other states in their region, the PartnerWeb site serves as a broader source of information that can facilitate additional communication across states about relevant practices. Through these websites, FNS has provided detailed information on what states are doing to modernize their SNAP eligibility systems and processes in areas such as states' use of call centers, document imaging, or alternative methods for managing SNAP caseloads. However, based on our review of FNS' websites, we found few documents with specific information on data matching strategies, such as data brokering (allowing access to various data sources in a centralized portal), data filtering (using data analytics and information on program rules to better identify discrepancies and prioritize follow-up), or automatically populating the eligibility determination system (information obtained from reliable matches is automatically added to a client's SNAP case file). For example, an FNS document on one of the websites listed states that used data brokering, but did not include information that described the practice in detail or its implementation. Additionally, although FNS has been engaged in recent efforts to test new data matching practices described earlier, such as the NAC, it has not yet widely disseminated findings from the NAC evaluation to states. FNS officials told us that they submitted these findings to Congress and relevant stakeholders in May 2016 and intend to more broadly disseminate the information once a determination has been made regarding the pilot's future expansion.
In our survey, 28 states said that information from FNS on cross-state data matching or sharing would be extremely or very useful. Accordingly, timely dissemination of findings from the NAC evaluation would be useful for states, even while decisions regarding its expansion are pending. Regarding other state practices, FNS officials told us that they would typically consider disseminating information on data matching practices considered effective based on evidence, and not necessarily on state practices that have not been evaluated. While supporting and sharing information on practices that have been proven effective is vital, information on various implementation and program administration issues is also useful for agencies operating programs. Likewise, officials we interviewed from three of the six states said that it would be useful for FNS to facilitate additional information sharing on state practices, such as implementation issues, so they could be aware of how other states were implementing data matches:

Officials from one state said FNS could provide information about whether other states were effectively using data matches that they found challenging to implement in their state.

An official from a second state said that FNS could serve as a clearinghouse for state practices on data matching. While this official acknowledged that FNS' guidance on data matches needed to be relatively high level, this official said that more specific information on what other states were doing to implement data matches would be useful and that FNS could play a more active role in facilitating this type of information sharing.

An official from a third state said it would be helpful to get additional information about state practices so states would not have to "reinvent the wheel" each time and could learn from each other.
Our prior work on collaboration practices has shown that agencies can enhance and sustain collaborative efforts and identify and address needs by leveraging resources, such as through information sharing. In other work on human services integration, we have found that federal agencies can help states improve their information systems by acting as a facilitator to help states work together and share their models with other states. Additionally, federal standards for internal controls call for agencies to communicate necessary quality information to external parties in order to achieve the agency's objectives. If FNS took additional steps to facilitate information sharing on promising practices, state SNAP agencies would be better positioned to improve program integrity, experience greater administrative efficiencies, and place fewer burdens on SNAP applicants and recipients. States would also be better able to leverage existing knowledge and resources and avoid duplicative efforts. Further, although FNS is beginning to explore ways to reduce the costs of The Work Number for state SNAP agencies by working with CMS to expand its use through the Hub, it has not yet systematically analyzed spending and SNAP data needs for this service and more thoroughly considered how various factors would affect costs. For example:

Not all states currently access The Work Number for Medicaid eligibility through the Hub, according to CMS officials. Although there is potential for expanding Hub use among states, it is unclear how much cost savings would result if some states do not use the Hub to access The Work Number, and FNS officials indicated that they had not yet gathered information from states to assess this.

CMS and FNS officials told us they are currently considering expanded use of The Work Number through the Hub for joint cases involving both Medicaid and SNAP benefits.
Thus, states may still need to maintain individual contracts for The Work Number for SNAP cases that do not include Medicaid or related health programs, and it is unclear what subsequent cost implications there might be for these state-level contracts. FNS officials indicated that they had not yet obtained information on the extent to which states would need to maintain individual contracts for SNAP-only cases and analyzed how state spending for the service would be affected.

CMS and FNS officials told us that there was the potential for cost savings in facilitating use of the match across programs (i.e., a state agency would not need to run a query twice for a case that includes both SNAP and Medicaid). However, CMS officials did indicate that the price CMS pays Equifax to allow states access to The Work Number through the Hub would likely increase if they were to expand its use to other programs and said they needed to work through the cost implications of possible scenarios.

Both GAO and the Office of Management and Budget have emphasized opportunities for agencies to lower prices, reduce duplication, and reduce administrative costs by leveraging the government's buying power through practices such as strategic sourcing and category management. For instance, in 2013, we found that leading commercial companies achieve savings by pooling purchases and by using various practices, including analyzing spending on purchased services across entities, understanding the needs or requirements of services across various users, and promoting transparency about acquisitions within the organization to help identify inefficiencies. It is possible that the options FNS is exploring with CMS could help lower the cost of The Work Number across multiple government users by leveraging the government's buying power.
However, FNS will not be able to identify the best ways to do this without conducting a more thorough analysis of SNAP data needs and spending across state and federal-level contracts and in relation to other programs. Data matching is a tool that has the potential to help SNAP agencies increase program integrity, improve administrative efficiency, and reduce household burden. The extent to which these benefits occur, however, may sometimes depend on the characteristics of the match or the use of promising data matching practices. Although FNS has efforts under way to promote the use of data matching to improve SNAP, it has not yet widely disseminated information on the results of all of these efforts or taken steps to more broadly facilitate information sharing about other promising data matching practices that states employ. With wider, timely dissemination of promising practices, state SNAP agencies will be better positioned to be aware of potentially useful ways to help them address implementation issues and improve the effectiveness of their data matching processes. Additionally, we applaud the interagency work done by CMS, FNS, and other agencies to try to reduce duplicative work across programs and promote the use of technology advances to facilitate program integration overall. These efforts are important, as is sustained attention to make further progress toward effective solutions. FNS has also taken initial steps to explore ways to reduce costs related to the use of commercial data services, such as through cross-program efficiencies. However, FNS, working with CMS as appropriate, has not yet systematically analyzed spending and taken steps to understand data needs for these services. Without such an approach, FNS will not be able to identify the best ways to leverage the government’s buying power through strategic sourcing practices and potentially reduce costs and improve performance. We recommend that the Secretary of Agriculture: 1.
Take additional steps to collect and disseminate information on promising practices that could help improve data matching processes among state SNAP agencies, including broad and timely dissemination of information on results of recent relevant pilots or demonstrations. 2. Work with HHS (as appropriate) to analyze spending and understand data needs for SNAP across federal and state contracts and in relation to other programs as FNS explores ways to potentially reduce the costs of using commercial data services. We provided a draft of our report to USDA, HHS, and SSA for review and comment. On September 21, 2016, FNS officials from SNAP’s Program Development Division and SNAP’s Program Accountability and Administration Division provided us with the agency’s oral comments. FNS officials told us that they agreed with the recommendations in the report. They noted that they have been moving in the general direction of these recommendations and would build on current efforts to address them. FNS and SSA also provided technical comments, which were incorporated into the report as appropriate. HHS did not have any comments on the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretaries of Agriculture and Health and Human Services, the Commissioner of the Social Security Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in Appendix I. Kay E. Brown, 202-512-7215, brownke@gao.gov. 
In addition to the contact above, Gale Harris (Assistant Director), Theresa Lo (Analyst-in-Charge), David Reed, and Russell Voth made key contributions to this report. Also contributing to this report were Holly Dye, Alexander Galuten, David Lin, Jean McSween, Mimi Nguyen, and Jerome Sandau.
|
During fiscal year 2015, state SNAP agencies provided about 46 million low-income individuals approximately $70 billion in federally funded benefits, and an additional $7.6 billion in federal and state funds was spent in administering the program in fiscal year 2014, according to the most recent data. SNAP agencies use data matching to verify eligibility information about applicant or recipient households, including their incomes, as well as to help detect improper payments. GAO was asked to review issues related to data matching in administering SNAP. This report examines (1) the extent to which states use data matching to obtain income information and find these matches useful for SNAP eligibility, (2) challenges states experience using data matching, and (3) actions FNS has taken to promote data matching for SNAP. GAO surveyed all state SNAP directors for a 100 percent response rate and interviewed state officials in six states that varied in caseload size, geography, and other criteria, and visited local offices in three of these states. GAO also reviewed relevant federal laws, regulations, and agency documents and interviewed agency officials. In administering the Supplemental Nutrition Assistance Program (SNAP), all state SNAP agencies verify household income by conducting multiple data matches, which they find useful for detecting potential discrepancies related to SNAP eligibility (see figure below), according to GAO's survey of all state SNAP directors. Most states reported that particularly useful data matches provided current information, can be accessed in real-time (i.e., immediately), and are from original sources. Some data sources for unearned income, including from the Social Security Administration, have all these characteristics. Data matches for earned income lacked one or more of these useful characteristics, but can be used as leads to follow up on with households or employers. 
States identified challenges with following up on data matches and with the costs of data matching. The issue states cited most often in GAO's survey as very or extremely challenging was the need to conduct follow-up for data that are not sufficiently recent, accurate, or complete, which can be cumbersome and time-consuming. Officials GAO interviewed in several states were implementing ways to manage follow-up. Over one-third of states also reported that costs associated with accessing certain commercial data to verify earnings were very or extremely challenging, with some states limiting their use of these data due to costs. The Department of Agriculture's (USDA) Food and Nutrition Service (FNS), which oversees the SNAP program, has efforts underway to promote data matching to improve program administration, but may be missing some opportunities. For example, FNS has initiated pilot or demonstration projects to improve program integrity or service to households. However, FNS has not actively collected or disseminated information on promising data matching practices, as called for by federal internal control standards. Further, 32 states reported in GAO's survey that more information from FNS on promising data matching practices would be extremely or very useful. With more information, states will have increased awareness of other potentially useful or cost-effective practices. In addition, FNS has begun to explore ways to help states reduce the cost of using commercial data, but has not systematically analyzed spending and SNAP needs for these data to consider how to best leverage government buying power through strategic sourcing practices. Without this analysis, FNS may not be able to identify the best ways to lower data matching costs. GAO recommends that FNS disseminate information on promising practices to state SNAP agencies, and analyze spending and data needs as it explores ways to reduce costs of using commercial data. FNS agreed with these recommendations.
|
Despite restrictions in law and regulation on private letter delivery, the U.S. Postal Service faces increasing competition from private firms. Moreover, growing demands are being made to open even more of the Service’s mail stream to competition. At the same time, some mail is reported to have been diverted to electronic communications, such as facsimiles and electronic mail. Although the Service’s overall mail volume continues to grow, the Service is concerned that customers increasingly are turning to its competitors. In light of this competition, the former Chairman of the Subcommittee on Federal Services, Post Office and Civil Service, and now Ranking Minority Member of the Subcommittee on Post Office and Civil Service, Senate Committee on Governmental Affairs, requested that we review aspects of the Private Express Statutes. To address his questions and related concerns, our objectives were to (1) determine the historical and current basis for restricting the private delivery of letters, including the Service’s efforts to administer and enforce the restrictions; (2) document changes in private sector capacity for letter delivery since 1970, including specific letter mail services for which the Service competes; and (3) estimate how the Service’s revenues, costs, and postage rates might change if current restrictions on private delivery of letters were to be changed. We also obtained information on whether selected other countries require the provision of universal mail service and if such countries restrict private letter delivery. “. . . the post office, however restructured, must be, first of all, responsive to the historic public need for, and reliance upon, a secure, swift, dependable, and inexpensive communications system.” S. Rep. No. 912, 91st Cong., 2d Sess. 2 (1970). “The Postal Service is—first, last and always—a public service.” H.R. Rep. No. 1104, 91st Cong., 2d Sess. 19 (1970).
The legislative history of the 1970 Act shows that Congress also was concerned about balancing the Postal Service’s public service mission with the expectation that postal managers would maintain and operate an efficient service. In the House Report quoted above, the Committee stated that “The Postal Service is a public service but there is no reason why it cannot be conducted in a business like way and every reason why it should be.” H.R. Rep. No. 1104 at pp. 11-12. To this end, Congress removed the Service from the political arena by making it an independent establishment; giving sole power to a Board of Governors to appoint and remove the Postmaster General and his Deputy; and making the Postal Service exempt from many, but not all, laws that apply to federal agencies. For over 200 years, the Postal Service and its predecessors have operated with a statutory monopoly imposed by the Private Express Statutes, which restrict the private delivery of most letters. Over the years, Congress has reaffirmed the need for the monopoly many times. However, the scope of the monopoly has been both broadened and reduced at various times through statutory and regulatory changes. The monopoly was created by Congress as a revenue protection measure for the Postal Service’s predecessor to enable it to fulfill its mission. Its purpose is to prevent private competitors from engaging in an activity known as “cream-skimming,” i.e., offering service on low-cost routes at prices below those of the Postal Service while leaving the Service with high-cost routes. Those who favor retention of the Statutes continue to cite the threat of cream-skimming as their principal economic justification. The letter monopoly was not changed under the Postal Reorganization Act of 1970. Rather, Congress adopted then-existing restrictions on private letter delivery with little debate.
When it passed the 1970 Act, Congress directed the Board of Governors to evaluate the need to modernize the monopoly and report any recommendations to the president and Congress. In response, the Board recommended in its report (“Restrictions on the Private Carriage of Mail: A Report of the Board of Governors of the United States Postal Service,” dated June 29, 1973) that the Statutes remain intact. But the Board recommended that the Postal Service suspend the Statutes by administrative action for certain items, including intracompany and data processing communications, as well as newspapers, periodicals, checks, and financial instruments that historically were deemed outside the definition of a letter. The Service adopted most of the Board’s recommended suspensions by issuing regulations in 1974. The basic restrictions on private delivery of letter mail are in seven sections of the federal criminal statutes (18 U.S.C. 1693-1699). These Statutes generally prohibit anyone from establishing, operating, or using a private company to carry letters for compensation on regular trips or at stated periods over postal routes or between places where U.S. mail regularly is carried. Violators are subject to fines or, in some cases, imprisonment. The current maximum fines are $5,000 for individuals and $10,000 for organizations, and the maximum term of imprisonment is 6 months. The 1970 Act also contains provisions (39 U.S.C. 601-606) dealing with private delivery of letters. Along with the statutory restrictions on mail delivery, Congress passed a law in 1934 to restrict access to mailboxes (18 U.S.C. 1725). This law prohibits anyone from intentionally placing mailable matter without postage into any mailbox. The legislative history of the 1934 law shows the purposes of the mailbox restrictions were twofold. 
First, the law was designed to stop the loss of postal revenue resulting largely from public utilities using special messengers to deliver customer bills to mailboxes without paying postage. Second, Congress sought to decrease the quantity of extraneous matter being placed in mailboxes. Violators are subject to the maximum fines of $5,000 for individuals and $10,000 for organizations, but not imprisonment. Congress did not define what constitutes a letter. Rather, the Service has issued regulations to define a letter for the purpose of administering the Statutes. These regulations define a letter broadly as “a message directed to a specific person or address, and recorded in or on a tangible object.” (39 CFR 310.1 (a)) However, the regulations also exclude a number of items from that definition and suspend the Statutes for other letters, notably “extremely urgent,” i.e., overnight, letters and outbound U.S. international letters. The Postal Service has six major classes of mail: First-Class, which consists mainly of correspondence (business and personal) and transactions, greeting cards, postcards, and some small packages; second-class, which includes newspapers and magazines; third-class, sometimes called “bulk business mail,” which consists primarily of advertising matter and nonprofit fund solicitations; fourth-class, which includes parcels, library materials, and bound printed matter; Express Mail, which includes expedited, overnight letters and packages; and international mail, which includes all letters and packages mailed between the United States and other countries. The Service also maintains certain subclasses of mail, such as Priority Mail, which consists of heavier (more than 11 ounces) First-Class letters and packages. Under the current mail classification scheme, domestic letters subject to the Statutes fall primarily into First-Class (including Priority) and third-class mail.
These classes and subclasses represented about 93 percent of the Service’s total mail volume, which totaled over 180 billion pieces, in fiscal year 1995. These same classes and subclasses accounted for almost 91 percent of the Service’s total 1995 mail revenue of $52.5 billion. According to the Postal Rate Commission, an estimated 83 percent of the Service’s total mail volumes and about 82 percent of its revenues are protected under the Statutes. In July 1996, as a result of a mail reclassification decision, the names of some mail classes used in this report changed. While current First-Class and Priority Mail designations remain the same, Express Mail changed to “Expedited,” second-class to “Periodicals,” and third-class and fourth-class to “Standard Mail.” During 1995 and early 1996, both the Senate and House postal oversight subcommittees held several hearings in which they focused in part or in whole on the need for changes in the 1970 Act. A report issued in December 1995 by the House Committee on Government Reform and Oversight entitled “Voices for Change” summarized the results of 10 hearings held by the Subcommittee on the Postal Service in 1995. Many witnesses at those hearings agreed that the Service faces challenges because of new technology and competition in the overall communications environment. In the December 1995 report, four key issues that emerged during the oversight hearings were identified: the mail monopoly, labor-management relations, ratemaking, and new postal products. However, there was little consensus on specific solutions among the more than 36 witnesses who testified at the hearings. A final hearing on November 15, 1995, “The Postal Reorganization Act 25 Years Later: Time for Change?” set the stage for the Subcommittee’s 1996 agenda. One legislative proposal (H.R. 210, 104th Cong., 1st Sess. 
(1995)) discussed during the House Subcommittee’s November 1995 hearing would turn the Postal Service over to its employees under an employee stock ownership program. This proposal provides for the continuation of the Private Express Statutes only during the first 5 years of the newly formed corporation’s existence. In a January 1996 hearing, the Senate and House postal oversight Subcommittees jointly continued to assess the need for Postal Service reform. At that time, the Subcommittees heard testimony on the postal reform experiences of some other countries. Representatives of postal administrations in four other countries (Australia, Canada, New Zealand, and Sweden) described major changes in those countries allowing the mail systems to operate with greater commercial freedom. In June 1996, the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, introduced legislation (H.R. 3717) to reform the Postal Service. Under this bill, delivery of letter mail priced at less than $2.00 would be restricted to the Postal Service. According to the Subcommittee’s analysis, more than 80 percent of the Service’s letter mail volume would still be protected by law if H.R. 3717 as introduced is enacted. The former Chairman of the Subcommittee on Federal Services, Post Office and Civil Service, and now Ranking Minority Member of the Subcommittee on Post Office and Civil Service, Senate Committee on Governmental Affairs, requested that we review aspects of the Private Express Statutes. He requested the review after legislation (S. 1541, 103d Cong., 1st Sess. (1993)) was introduced in the 103rd Congress to curtail the Postal Service’s authority to enforce the Private Express Statutes.
Our objectives were to (1) determine the historical and current basis for restricting the private delivery of letters, including the Service’s efforts to administer and enforce the restrictions; (2) document changes in private sector capacity for letter delivery since 1970, including specific letter mail services for which the Service competes; and (3) estimate how the Service’s revenues, costs, and postage rates might change if current restrictions on private delivery of letters were to be changed. Because of the Postal Service’s interest—and, subsequently, House and Senate postal oversight Subcommittee interest—in how other countries had reformed their postal administrations, we obtained information on whether selected countries require universal mail service to be provided, as this country does, and if such countries restrict private letter delivery. To review the Statutes’ history, current basis, and enforcement, we (1) examined the legislative history of the Statutes and related laws from 1782 to 1995 and their implementation through Postal Service regulations; (2) interviewed Postal Service officials at headquarters and at Service field offices in California, Colorado, Florida, Georgia, Illinois, and Texas that we selected primarily because of their involvement in specific enforcement actions; and (3) reviewed relevant Postal Service data and reports. We also examined records summarizing Postal Inspection Service audits, completed from 1989 to 1994, of mailers’ compliance with the Statutes. We interviewed representatives of selected companies in Georgia and Alabama that had been audited by the Inspection Service, including one whose experience is related to proposed legislation (S. 1541, 103rd Congress) that served as the impetus for this review and another that the Inspection Service suggested because of the complexity of the case and the magnitude of the resulting settlement.
To document changes in private letter delivery capacity, we interviewed representatives of private delivery firms, major trade associations and mailer groups, knowledgeable industry observers, and Postal Service and other government officials; reviewed available literature; and analyzed relevant Postal Service and industry data. Specifically, we discussed private letter delivery activities with representatives of (1) each of the five major expedited mail and parcel delivery companies identified by the Postal Service as its principal competitors, at locations in the District of Columbia and in California, Pennsylvania, and Washington; (2) four national alternate delivery alliances located in Washington, DC, and in Georgia, Michigan, and New Jersey that were identified through a nationwide alternate delivery directory; (3) 15 alternate delivery firms located in California, Georgia, Michigan, Nevada, New Jersey, New York, Pennsylvania, Oklahoma, Texas, and Washington that were selected on a judgmental basis to ensure a broad range of geographic locations and population levels, large as well as small companies, and both independently and newspaper-owned firms; and (4) 25 trade associations and industry experts representing carriers as well as the majority of the Service’s commercial and nonprofit customers located in the District of Columbia, New Jersey, New York, and Virginia. To estimate the possible financial effects of changing the Statutes, we assessed such effects in the following two ways: • First, we estimated the relative risk (high, medium, and low) of the Service’s letter mail stream, by class and subclass, from direct competition by private delivery firms (as distinguished from electronic communications media). • Second, we estimated the extent to which the Service’s revenue and postage rates might have been affected if its estimated fiscal year 1995 letter mail volumes, by class and subclass, had been reduced by various percentages. 
To assess the relative risk of direct competition, we used data obtained in our interviews, mentioned above, with representatives of the private delivery industry and mailer associations. We structured the interviews to obtain insight into the ability of private delivery firms to deliver letter mail now protected by the Statutes and the interest of mailers in using such firms. Along with these interview results, we compiled, but were unable to verify, various shipment data on private firms in order to estimate existing private delivery capacity and compare the magnitude of private mail delivery to that of the Postal Service. To estimate the effects of mail volume losses on the Service’s revenues, costs, and postage rates, we used estimated mail volumes and other data that the Postal Service and Postal Rate Commission used in a recent rate case (Docket No. R94-1). We assumed that for fiscal year 1995, the estimated number of First-Class, Priority Mail, and third-class mail pieces had been reduced in 5-percent increments from 5 to 25 percent. We used those percentages not to predict what would happen, but rather to show the potential effects on postal revenues, costs, and rates if the Service had lost these volumes of mail. We assumed that the financial effects of losing mail volumes would result in changes to postage rates, not reductions in the levels of service offered and not federal appropriations to offset revenue losses. At our request and with the Service’s approval, we also used a Price Waterhouse LLP model, developed under contract with the Service, to provide additional estimates for 10 future years of changes in the Service’s revenues, costs, and rates as a result of assumed future mail volume losses. Additional details on the methods and assumptions used to estimate the effects of mail volume losses on the Service’s revenues, costs, and rates are included in appendix I, volume II, of this report. 
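The scenario mechanics described above (uniform percentage volume losses applied by mail class) can be sketched briefly. The volumes and revenue-per-piece figures below are placeholders for illustration only, not the actual Docket No. R94-1 estimates used in this analysis, and the sketch omits the report's further step of translating revenue losses into postage rate changes.

```python
# Sketch of the volume-loss scenarios described above. All figures are
# illustrative placeholders, NOT the actual Docket No. R94-1 estimates.

def revenue_by_class(volumes, rev_per_piece, loss_pct):
    """Revenue for each class after a uniform percentage volume loss."""
    return {cls: volumes[cls] * (1 - loss_pct / 100.0) * rev_per_piece[cls]
            for cls in volumes}

# Placeholder inputs: pieces per class and average revenue per piece.
volumes = {"first_class": 95.0e9, "priority": 0.9e9, "third_class": 71.0e9}
rev_per_piece = {"first_class": 0.32, "priority": 3.00, "third_class": 0.18}

baseline = sum(revenue_by_class(volumes, rev_per_piece, 0).values())
for pct in range(5, 30, 5):  # the 5-percent increments, from 5 to 25 percent
    remaining = sum(revenue_by_class(volumes, rev_per_piece, pct).values())
    print(f"{pct:2d}% volume loss -> ${(baseline - remaining) / 1e9:5.1f} billion forgone")
```

Because the loss percentage is applied uniformly across classes in this sketch, forgone revenue is simply proportional to baseline revenue; the actual estimates assumed the resulting shortfalls would be recovered through changes to postage rates rather than service reductions or appropriations.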
To obtain information on postal administrations in other countries, we reviewed several reports done by other U.S. organizations, including a February 1995 report prepared by Price Waterhouse for the Postal Service. We interviewed officials of several other postal administrations, visited the Canadian postal administration—Canada Post Corporation—in Ottawa, and reviewed annual reports and various other documents provided by foreign postal administrations. The information in this report concerning the postal laws of other countries does not reflect our independent analysis of those laws; rather, it rests primarily on the views and analysis provided to us by officials of those governments and other secondary sources. We requested written comments on this report from the Postal Service and the Postal Rate Commission. The Postal Service responded by letter and an enclosure that presented its technical evaluation of the estimated financial effects of changing the Statutes discussed in our report. We have reprinted the letter and enclosure in appendix II, volume II. Our overall evaluation of the Service’s comments is included in volume I, and we provide additional comments on the technical evaluation in chapter 6, volume II. The Commission chose not to provide written comments, but Commission officials suggested several changes to volumes I and II of our report to improve its technical accuracy and completeness, which we made where appropriate. We also arranged for several knowledgeable parties, many of whom provided information for our report, to review and comment on our draft report, volumes I and II. We made changes as appropriate to the report on the basis of all comments we received. They were provided by Mr. 
Murray Comarow, Executive Director of the former Kappel Commission; and representatives of (1) Price Waterhouse LLP, (2) the Advertising Mail Marketing Association, (3) the National Association of Presort Mailers, (4) Federal Express, (5) United Parcel Service, and (6) Haldi Associates, Inc. (a consulting firm that has studied Postal Service mailing costs). Our review was conducted primarily between May 1994 and February 1996. It was performed in accordance with generally accepted government auditing standards. The Private Express Statutes play a fundamental role in determining how mail service is provided to the general public. However, compared to 1970, providing mail delivery as a public service today is far more difficult, and the Statutes have come to play a lesser role in protecting the Service’s revenue. Some of the Service’s largest customers and competitors have questioned the need for and the Service’s enforcement of the Statutes. Responding to pressures to allow more private letter delivery, the Service suspended portions of the Statutes and has virtually stopped enforcing them. According to the legislative history and current Postal Service policy, the purpose of the Statutes has long been to ensure adequate revenue to permit the government to meet various public service objectives, including universal mail service to all communities. The Postal Service believes that any change in the Statutes could jeopardize its ability to meet such public service mandates. The 1970 Act contains various public service objectives, such as (1) requiring uniformity of certain rates, (2) providing criteria for ensuring public access to services, (3) specifying how costs are to be allocated and postage rates are to be set, and (4) providing free or reduced rates to certain categories of mailers. As discussed below, these public services have changed over the years and differ in some cases from what was anticipated in 1970.
The rate uniformity requirement, which is stated at 39 U.S.C. 3623(d), requires that the rates charged by the Service for at least one class of mail that is sealed against inspection must be uniform. The Service provides a uniform rate for First-Class letters delivered anywhere in the United States, its territories, and possessions. However, the Service’s rates are more complex today than in 1970. The Service has adopted a broader range of rates over the years that more closely reflect its processing and delivery costs. To illustrate, in 1970, there were only two rates for 1-ounce First-Class letters—an 8-cent rate for regular letters and an 11-cent rate for air mail. For third-class mail, there were three rates—23 cents per pound for circulars and 17 cents per pound for books, with a minimum rate per piece of 4 cents. No discounts were offered to mailers who presorted their mail or performed other steps that reduced the Service’s processing time and costs. In contrast to the 1970 rates, current postage rates, which became effective in January 1995, include a variety of First-Class and third-class rates. The rate for a single-piece, 1-ounce First-Class letter is 32 cents. First-Class mailers who perform certain worksharing functions that reduce the Service’s processing costs are charged lower rates. The eight worksharing rates for First-Class letters weighing 1 ounce or less range from 25.4 cents to 30.5 cents. Similarly, the rates for letter-size third-class mail are different today from what they were in 1970. Currently, such rates vary depending on the extent of transportation and preparation (i.e., presorting and prebarcoding) by mailers and the weight of the piece. For example, the rate for one piece of regular, bulk, third-class letter-size mail with no mailer transportation or preparation is 22.6 cents, but it is reduced to 11.7 cents if the mailer presorts the mail into the order in which it is to be delivered and transports it to the postal unit responsible for delivery.
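The worksharing discounts cited above amount to simple per-piece arithmetic. The sketch below uses the January 1995 rates quoted in the text; the one-million-piece mailing is a hypothetical example of our own, not a figure from the rate case.

```python
# Worksharing discount arithmetic using the January 1995 rates cited in
# the text. The one-million-piece mailing is a hypothetical example.

SINGLE_PIECE_FIRST_CLASS = 32.0   # cents, single-piece 1-ounce First-Class letter
DEEPEST_FIRST_CLASS_SHARE = 25.4  # cents, deepest First-Class worksharing rate
THIRD_CLASS_BASIC = 22.6          # cents, no mailer transportation or preparation
THIRD_CLASS_FULL_SHARE = 11.7     # cents, presorted to delivery order and transported

def discount_per_piece(full_rate_cents, workshared_rate_cents):
    """Per-piece savings, in cents, from performing worksharing."""
    return full_rate_cents - workshared_rate_cents

# Savings for a hypothetical mailer presorting and transporting one
# million third-class letters, converted from cents to dollars.
pieces = 1_000_000
savings_dollars = discount_per_piece(THIRD_CLASS_BASIC, THIRD_CLASS_FULL_SHARE) * pieces / 100
print(f"${savings_dollars:,.0f}")  # $109,000
```

The same function applied to the First-Class rates gives a maximum worksharing discount of 6.6 cents per piece, which illustrates why high-volume business mailers have a strong incentive to presort and drop-ship.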
Further, the volume of First-Class mail subject to the uniform rate requirement has declined, as a percentage of total mail volume, since 1971. In 1971, First-Class mail made up 59.2 percent of the total volume, compared to 53.3 percent in 1995. In contrast, third-class mail increased from 23.6 percent of the total volume in 1971 to 39.3 percent in 1995. Currently, a relatively small percentage of the overall mail volume is generated by residential customers. In 1994, according to the Service’s studies, the volume of household-generated mail represented 10 percent of the total volume; the volume of household-to-household mail was an even smaller part—only 3.6 percent of the total. About 55 percent of the total mail stream was sent from business to households, and about 35 percent was sent from business to business. The Postal Service did not have similar data for 1970 or other years immediately following the 1970 Act on business and residential mail volumes. Under the 1970 Act, the Service must provide access to the U.S. mail system through post offices and other means. However, the Service may not close a post office solely because it is operating at a deficit, even though a more cost-effective means of providing access may be available. Rather, under the 1970 Act, a number of criteria must be considered. Further, any person whose service is affected by any proposed post office closing may appeal the closing to the Postal Rate Commission (PRC). Postal Service officials told us that the number of post offices not producing sufficient revenue to cover the related operating costs has grown since 1970. This trend has occurred for three basic reasons. First, the make-up of mail has changed, with far more business mail and far less residential mail.
Second, worksharing postage rates introduced since 1970 encourage mailers to (1) bypass local post offices and “drop ship” mail closer to the mail’s delivery destination; and (2) deposit large volumes of mail, already sorted and barcoded, at the Service’s mail processing plants rather than local post offices. Third, in earlier years, postage stamps could be purchased only from a post office or rural letter carrier. Today, they may be obtained from a variety of sources, including grocery and other retail stores, vending machines, and mail carriers. Stamps may also be ordered by mail. The Service is exploring various ways, in addition to the traditional post office, of providing ready customer access to all “retail” postal services, such as placing “Postal Express” units in private retail stores located in shopping centers and opening postal stations in shopping malls. The Service is constrained, however, in upgrading its post office infrastructure, which remains largely the same as in 1970. According to Service data, of the 39,149 post offices it operated in fiscal year 1995, 17,702 (about 45 percent) reported annual revenues that, in the aggregate, fell about $1.1 billion short of their expenses for the same year. Under the 1970 Act, as amended in 1976, the Service is required to follow specific procedures and criteria for closing post offices. These procedures include responding to appeals that could be filed by any person whose service may be affected by the proposed post office closing. Under the 1970 Act, the Commission has 120 days to make a decision on each such appeal. Of the 239 proposed post office closings in fiscal year 1995, 22 were appealed to the Commission. Fourteen closings were upheld, 4 appeals were withdrawn, and 4 were sent back to the Service for further review, i.e., remanded. According to Commission data, the time necessary to complete the appeals process was less than 120 days in each of the 22 cases.
In some cases, however, the total time taken to close a small post office ranged up to several years from the date the Service began working with the affected community to the date of the closing. For example, in consolidating a post office in Clarkia, Idaho, the Service began the process in October 1993, and about 30 months later, in March 1996, PRC issued a final document on the case stating that the Service had met statutory requirements. In line with its public service mission and restrictions on private letter delivery in the 1970 Act, the Service and PRC must allow the public, including competitors, to review and comment on proposed changes to domestic postage rates and mail classifications. Changes to international mail rates are not subject to review outside the Postal Service. The 1970 Act includes specific criteria and requirements for allocating costs among classes of mail and for achieving various public service objectives. Achieving those objectives, while also recognizing the impact of the competitive markets in which the Service must operate, involves some trade-offs. For example, in setting rates under the 1970 Act, the Service and PRC must balance a number of criteria, including the relative demand for the various classes of mail and the need to be fair and equitable to all its customers. Achieving this balance has generated much debate and disagreement among the Service, PRC, and many parties who participate in ratemaking or are affected by the resulting rates. The Service must follow certain procedures and criteria prescribed in the 1970 Act for bargaining with unions and resolving disputes over working conditions, including employee pay and benefits, that are unique to the Service. Unlike other federal and private organizations, the Service is also required by law to consult with postmasters and supervisors before making any changes to pay and certain other matters affecting these employees.
Postal Service employees do not have the right to strike, nor does postal management have the right to lock out striking employees or hire replacements. Instead, the 1970 Act provides for binding arbitration to resolve bargaining deadlocks. When adopting this provision in 1970, Congress emphasized that the parties were to make every attempt to reach agreement bilaterally through earnest, good faith negotiations. Arbitration was to be used only as a last resort. However, contract negotiations between postal management and most of the major unions often have resulted in impasses that were settled by an arbitrator. The Postal Service believes that the Private Express Statutes must remain intact if it is to carry out its current public service mission in accordance with the various requirements and constraints of the 1970 Act, some of which we discussed previously. No private sector organization that competes with the Service has similar requirements or constraints. The Service believes, for example, that its “double postage rule” for private delivery of extremely urgent letters is necessary to maintain adequate revenue for operation of the current system. This rule can result in additional cost to some mailers who choose to use private carriers for nonurgent mail delivery and, in turn, agree to pay required postage to the Service under “alternate postage agreements,” which we discuss later in this chapter. The rule also sets a minimum price that any customer of a private delivery firm must pay for certain letters. The Service believes that the protection provided under the double postage rule is necessary for meeting its public service obligations.
For example, the Postmaster General, as part of his testimony in 1995 before the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, said that “If the double postage rule for extremely urgent letter mail was suspended, all Postal Service letter and flat mail potentially could be diverted.” He said that eliminating the rule would jeopardize the Service’s mandate to provide universal service at uniform rates because private carriers would resort to “cream skimming.” These practices were described in the Board of Governors’ 1973 report on the Statutes and are discussed more fully below. While the Service views the Private Express Statutes as essential to executing its public service mission, the administration and enforcement of the Statutes have been challenged by mailers and competitors. These challenges have come in the form of questions regarding the underlying economic theory cited by the Board of Governors in 1973 for the Statutes and requests for the Service to suspend the Statutes for certain letter mail. Responding to pressures from mailers and competitors, the Service suspended the operation of the Statutes for certain letters in 1979 and 1986, only to have its authority to make such suspensions questioned by competitors and other parties. Moreover, Postal Inspection Service officials told us they have stopped direct enforcement of the Statutes because of pressures from mailers, competitors, and some Members of Congress. The restrictions on private delivery contained in the Statutes have been defended by a number of parties, including the Kappel Commission, the Board of Governors in its 1973 recommendation to Congress, and some experts on the economics of postal services. These parties usually offer one or more of three basic justifications: • A single provider, currently the Postal Service, can operate at a lower total cost to the nation than multiple providers. 
• Without restrictions on private delivery, cream-skimming by private competitors in the most profitable postal markets would undermine the ability of the Service to provide universal service at reasonable, uniform rates. • Postal services, historically, have been viewed to be of such importance to binding the nation together that they should be essentially immune to disruption by labor disputes, bankruptcy, and other difficulties that private businesses face, regardless of whether this minimizes the cost to hard-to-serve customers, or to the nation as a whole. Whether the Postal Service does or can achieve these objectives more effectively than if additional providers are allowed to participate freely in letter mail delivery has been studied and debated often since the Board’s 1973 report. Some of the Service’s largest customers and competitors, PRC, the Department of Justice, and many economists have questioned the need for and economic justifications of the Statutes in today’s environment. A complete analysis of all economic perspectives on the Statutes was not within the scope of our review. However, we did examine two proposals for changing the Statutes made by some of the Service’s largest third-class customers. The proposals and the Service’s responses, discussed below, were predicated on discussions of economic theory concerning the letter mail monopoly published since the Board of Governors’ 1973 report. Some of the Service’s largest customers requested that the Service take steps to allow certain letters to be delivered by private carriers. Primarily because of concern for maintaining its revenue base, the Service declined to allow such delivery. In March 1988, the Third Class Mail Association (TCMA), a trade association representing more than 300 companies and other organizations engaged in “distributed” advertising, requested that the Postal Service suspend the Statutes for third-class mail. 
TCMA, which is now the Advertising Mail Marketing Association, believed that the suspension would serve the public interest because the Service’s definition of a letter was not equitable. For example, advertisers who mailed catalogs of 24 pages or more could use private carriers, but those whose catalogs were under 24 pages could not. TCMA also argued that the advertising industry did not receive benefits from the Service that were commensurate with the rates charged, and it should be allowed to use alternative delivery services. Finally, TCMA was concerned that the advertising industry would bear an even heavier share of the Service’s cost in the future because of attempts to “balance” the perceived value of advertising mail with other mail, such as business and personal communication. The Postmaster General responded that the requested suspension would not be in the public interest. The Service’s principal argument against the suspension was that third-class mail, which ranked second to First-Class in revenue and volume, was too important to the Postal Service as a whole. The Service also pointed out that certain items, such as books, were already excluded from the Statutes. The Service also disagreed with TCMA’s view that third-class mail had been assigned overhead costs disproportionately in comparison to other classes. Finally, the Service emphasized that all of the mail it delivers is valuable and expressed concern about the public’s perception regarding the nature of third-class mail, which is sometimes referred to by the general public as “junk mail,” and its contribution to the Service’s financial well-being. In October 1988, TCMA submitted a complaint to the Commission to compel the Postal Service to suspend the Statutes (under authority in 39 U.S.C. 601) for addressed, third-class mail. The complaint did not result in suspension. However, PRC requested a written compilation of theoretical views regarding economic justifications for the monopoly. 
PRC held hearings and published its proceedings but did not reach any conclusions or take any action on the complaint or the related views it received. At the same time, PRC reported that relevant economic theory had advanced since 1970. It said, “New cost and pricing concepts have been developed that can provide theoretical insights into both justifications for, and challenges to, a statutory monopoly.” Since the PRC monopoly inquiry, a number of papers, articles, and books have been published concerning the economic reasons for and against the postal monopoly. Debates about the economic justification for the Statutes often focus on whether postal functions fit the economic model of a natural monopoly. One argument is that among the various postal functions, namely, collection, sorting, transportation, and delivery, those that most closely resemble a natural monopoly are collection and delivery. Generally, this is because the economies of scale and scope associated with such functions are believed to favor economic and efficient provision by a single supplier. In this regard, postal services frequently have been compared to telecommunications services. Until passage of the Telecommunications Act of 1996, local telephone companies maintained networks for call origination and termination that have been compared to postal collection and delivery functions. Long-distance carriers provided services that more closely resembled postal sorting and transportation functions, the intermediate steps that occur between pickup and delivery. Various competing economic theories, and their relationship to current statutory restrictions on postal services, have been offered and debated. Some maintain that the provision of the most efficient, universal, and affordable postal services requires maintenance of the postal monopoly. Others argue that postal customers will be best served only under free and open competition. 
Our research of literature on the issue (see selected bibliography) revealed that none of the materials that we reviewed indicated a need to expand the scope of the Statutes—a conclusion also reached by the Board of Governors in 1973. The vast majority of the research results and opinions that we reviewed indicated that (1) economic theory supporting the Statutes has changed since they were last reviewed in 1973, and (2) much more is known today about the Service’s operating costs than in earlier years. For example, in testimony before Congress in 1995, a United Parcel Service (UPS) representative presented a paper prepared by two economists in which they challenged the economic basis for the Statutes. The authors of that paper, subsequently published in book form, asserted that there appeared to be “no intellectually defensible argument that the Postal Service’s statutory monopoly under the Private Express Statutes flows directly from a natural monopoly that it purports to possess over mail delivery.” Instead, they said that private firms had proven mail markets to be “demonstrably competitive” and called for repeal of the Statutes in order to “encourage the entry of private firms into mail services currently monopolized by the federal government.” The authors also concluded that universal service and geographic uniformity of rates no longer depended on “public provision of the full range of postal services.” Rather, they argued that competitive provision of letter mail service not only would ensure universal service, but likely would “increase . . . the integrity and efficiency of the mail stream because of the superior incentive structures . . . in private firms.” In January 1995, a group of the Service’s customers and competitors called the Coalition for the Relaxation of the Private Express Statutes petitioned the Postal Service to initiate a rulemaking to suspend the Statutes for all or certain categories of third-class mail.
The Coalition said its members included “private carriers of mail that would like to be able to compete more broadly with the USPS and users of USPS third-class mail that would like the opportunity to enjoy the benefits of such competition.” Coalition participants included the nation’s largest alternate delivery networks and the industry’s recognized trade association, as well as the largest third-class mailers’ associations. In other words, the Coalition acted on behalf of organizations representing the vast majority of those who mail third-class material and who deliver it outside the U.S. mail system. In its petition, the Coalition said that the world had changed markedly since the Service examined the Statutes in 1973. For example, the Coalition cited such changes as (1) the Service’s “de facto relaxation of its monopoly over the transportation of mail;” (2) increased competition in the telecommunications industry, which had been used as a public service monopoly model to justify continuing the letter mail monopoly; (3) better understanding of the Service’s mail delivery costs and the consequences of mail volume and revenue losses; and (4) changes in economic thinking regarding the application of natural monopoly theory to postal services. In response to the Coalition’s petition, the Service’s General Counsel declined to initiate the requested rulemaking procedure. She said that for the most part, the developments discussed in the petition predated the 1988 request discussed above. She also argued that to consider the issues raised by the Coalition “in a piecemeal fashion” by focusing only on private express matters and one particular class of mail would not allow the Service to address broader issues, such as infrastructure and labor costs and pricing. Currently, private delivery of addressed, third-class letter mail is prohibited under the Statutes and implementing Postal Service regulations. 
According to the Service’s mail stream breakouts, the vast majority of third-class mail meets its definition of a letter. Third-class mail represented 71.1 billion pieces, or nearly 40 percent of the Service’s total mail volume, and $11.8 billion, or almost 23 percent, of its total revenues in fiscal year 1995. As a result of requests made primarily by private delivery companies, the Service has issued regulations to suspend the Statutes for certain letters. Included in the suspensions are extremely urgent letters and international letters originating in the United States for delivery in other countries. However, some parties have questioned the Service’s authority to make such suspensions. In 1979, the Postal Service suspended the Statutes for extremely urgent letters (39 C.F.R. 320.6). The 1979 suspension allows letters to be sent by private carrier without payment of postage if the letters are deemed extremely urgent. The regulations specify criteria that must be met for a letter to qualify as extremely urgent. Relying on that suspension, some private companies began a practice called “international remailing,” wherein nonurgent letters mailed in this country were transported by private carriers to other countries for distribution and delivery there. For several years afterwards, the Service tried to stop international remailing. Believing this practice was a misuse of the urgent-letter suspension, the Service proposed to modify the 1979 suspension to clarify that it did not allow for international remailing. However, U.S. mailers’ comments on the proposed clarification were overwhelmingly negative. Because of requests from mailers and private carriers to continue the practice, the Service issued regulations in 1986 to exempt all outbound international letters from the Statutes.
To suspend the Statutes, the Postal Service cited a provision of the 1970 Act that (1) set forth certain circumstances in which private delivery of letters is permitted (39 U.S.C. 601(a)) and (2) allows the Postal Service to “suspend the operation of any part of this section upon any mail route where the public interest requires the suspension” (39 U.S.C. 601(b)). Some parties in both government and the private sector have questioned whether Congress intended that the latter provision (601(b)) be used to permit greater use of private carriers to deliver letter mail. They have argued that the purpose of 39 U.S.C. 601(b) was to provide authority to the Postal Service to stop, not facilitate, private delivery of letter mail. In 1973, when the Postal Service proposed regulations to suspend the Statutes for certain items, PRC’s legal staff reviewed the regulations and concluded that use of the suspension authority would violate the original legislative intent to stop private carriage of letters. In 1976, after the regulations were adopted, PRC decided that it did not have jurisdiction over the Service’s proposed changes to the Statutes and therefore elected not to comment further on the proposed rulemaking. Subsequently, the Service proposed to suspend the Statutes for extremely urgent letters in 1979 after representatives of the private delivery companies urged Congress to exclude such letters from the Statutes. Industry representatives, including ACCA, also contended that the Postal Service’s use of the suspension authority in 39 U.S.C. 601(b) violated its legislative intent. They were concerned that if the Service could unilaterally suspend the Statutes, it could similarly revoke the suspension, and they contended that this would create havoc for the existing private delivery companies and jeopardize their financial stability. 
They said the industry had not taken the issue to court because resolving the matter through litigation likely would be expensive and protracted. In 1988, the President’s Commission on Privatization reported that “. . . there is a legal issue as to whether the Postal Service has the authority to issue regulations (as in 39 C.F.R., Part 320 above) suspending the criminal code. If it does not, then all the private express couriers are in violation of criminal law under Title 18.” A Postal Service official told us that the language of the Statutes is broad enough to cover suspensions intended either to stop existing private delivery of letters or to allow additional private delivery. In addition, the Postal Service believes that Congress concurred in the Service’s interpretation of the Statutes and use of the suspension authority when Congress was reviewing the proposed suspension for extremely urgent letters during hearings held in 1979. Despite criminal sanctions for violations, Postal Inspection Service officials told us that direct enforcement of the Statutes rarely occurs and has proven difficult for a number of reasons. They include past objections to enforcement by mailers, competitors, and some Members of Congress. Consequently, compliance with the laws and regulations is largely voluntary. Enforcing the Statutes is difficult because violations can occur at any household or business in the United States where letters originate. The difficulty of enforcing the Statutes is compounded by statutory exceptions and regulatory suspensions that permit private delivery of some letters, but not others. For example, under the suspension for extremely urgent letters, mailers determine whether their letters meet the urgency criteria. Consequently, nonurgent letters may also be mailed privately without any easy means of detection. 
Further, when the Service tries to enforce the Statutes, it finds itself in an adversarial, and possibly self-defeating, position of investigating and prosecuting its own customers. Separately, private carriers told us that they do not examine the contents of sealed envelopes and packages tendered by their customers for overnight delivery. Rather, they suggested that primary responsibility for compliance with the Statutes rests with mailers, not carriers. However, carriers also bear certain responsibilities under Service regulations. Postal Inspection Service data show that the Inspection Service completed compliance audits and follow-ups at 62 business and government entities between October 1988 and June 1994. Of these 62 entities, 39 (63 percent) had violated the Statutes, according to the Inspection Service. None were prosecuted, nor were any fines or penalties assessed. Of those 39 entities, 22 said they had stopped sending nonurgent letters via private carriers. Another 13 chose to continue using private carriers to deliver nonurgent letters and, through March 1995, had paid the Service about $1.2 million under “alternate postage agreements.” Of the $1.2 million, about $989,000 (81 percent) was paid by one company. Our review of the Inspection Service audit reports showed that mailers generally wanted to use private carriers because they charged lower rates and provided more dependable delivery services than the Postal Service. Two examples follow. BellSouth Services, Inc. (BSI), in Birmingham, Alabama, is an affiliate of BellSouth Corporation, headquartered in Atlanta. BSI provides mailing and other services for BellSouth’s regional telephone companies—Southern Bell and South Central Bell. BSI initiated a cost-cutting move in the mid-1980s, whereby Southern Bell stopped using its own employees to carry intracompany mail. Instead, it began using a trucking service, already under contract to transport supplies, to also carry the mail at no additional cost.
On their own initiative, BellSouth corporate officials determined that this arrangement was in violation of the Statutes and that postage should have been paid for letters sent by contract carrier. In August 1989, Southern Bell’s Jacksonville, Florida, unit found that it owed about $5,300 in postage for that month and paid that amount to the local postmaster. Subsequently, the Postal Inspection Service initiated an audit at the Jacksonville unit in September 1989 and determined that the postage due from BellSouth on letters sent to and from Jacksonville by the contract trucking service amounted to over $69,000 per year. BSI officials elected to continue using the trucking service and signed an alternate postage agreement to pay the Service for ongoing postage. BellSouth officials in Atlanta asked the Inspection Service to audit other Southern Bell and South Central Bell operating units, and it found violations at all but one unit. In total, postal inspectors conducted 23 audits, including follow-up visits, at BellSouth units between fiscal years 1990 and 1994. As of March 1995, the Inspection Service reported total collections of about $989,000 from BSI under various alternate postage agreements. BellSouth officials told us that the arrangement with the Postal Service was satisfactory to them because they were still saving postage costs. However, company officials also indicated they did not like having to pay the Postal Service for services it was not providing. Equifax, Inc., a credit reporting company in Atlanta, was audited by the Inspection Service on the basis of a March 1991 lead from a Postal Service employee. Equifax initially denied the Inspection Service access to company mailing records. However, after the Inspection Service submitted a written request to the company’s president, Equifax agreed to cooperate. 
In order to determine the amount of postage due to the Postal Service, the Inspection Service analyzed mail sent by the company’s primary private carrier between June 1991 and March 1992 and conducted a 2-week survey of mail sent by a secondary private carrier. The Inspection Service reported that Equifax used private carriers to deliver nonurgent letters without required postage, thereby violating Service regulations. While the private carriers offered lower rates than the Postal Service for some, but not all, zones, Equifax’s decision to use private carriers appeared to be based primarily on service rather than cost considerations. Consequently, Equifax signed an alternate postage agreement for the 1-year period that ended in September 1992 and agreed to pay $32,682 on letters sent by private carriers. Equifax and the Postal Service did not continue the agreement beyond that year because Equifax said that it had changed its policy on the use of private carriers. The Inspection Service did follow-up work in September 1993, determined that Equifax was in compliance, and closed the case. Equifax officials told us that they viewed the audit experience as “counterproductive.” The audit resulted in payments to the Postal Service totaling less than $33,000, compared with total postage expenses of about $8 million that officials said the company pays annually. In 1993 and 1994, mailers and competitors questioned the Service’s authority to audit mailers’ compliance with the Statutes and to collect postage on letters sent by private carriers. By June 1994, the Postmaster General had deemphasized the Postal Inspection Service role in ensuring compliance with the Statutes by shifting that responsibility from the Chief Postal Inspector to the Senior Vice President for Marketing. This change was made after concerns were raised in Congress regarding the Service’s audits of various mailers. A bill (S. 1541, 103d Cong., 1st Sess. 
(1993)) was introduced in October 1993 to limit the Service’s authority to fine or otherwise penalize mailers who used private carriers. The Inspection Service has not initiated any new compliance audits since February 1994. Currently, the Postal Service tries to promote compliance by educating mailers and private carriers about the Statutes. It believes that the Statutes act as a deterrent to illegal delivery of letters by private carriers. The education efforts include the use of postmasters and other postal employees to apprise the public of the Statutes’ requirements and discussions by Service officials at various symposia and conferences attended by mailers. In 1994, the Service established an office in Chicago with primary responsibility for educational efforts regarding the Statutes. The office, which had two employees at the time of our review, reviews allegations of possible violations of the Statutes coming into the Postal Service. When warranted, the office can forward apparent violations to the Postal Service’s General Counsel and request audit support from the Postal Inspection Service. Examples of educational activities conducted by the Chicago office included participation in several national and regional conferences with postal customers, presentations on the Statutes to postal employees at various locations around the country, coordination with Postal Service account managers assigned to work with major commercial mailers, administration of existing alternate postage agreements, and conduct of compliance reviews at mailers’ facilities. In 1971, the Postal Service faced little competition for delivery of letter mail. Its competition has grown substantially since that time, partly as a result of the Service’s regulatory suspension of the Private Express Statutes for certain letters.
Although the bulk of the Service’s mail volumes has remained under the protection of the Statutes, numerous national and local mail delivery firms exist; both their numbers and the volume and variety of services they offer are increasing. Generally, private delivery firms that we reviewed delivered (1) expedited (or overnight) and 2-day and 3-day (also called deferred) letters and parcels or (2) unaddressed advertising circulars or periodicals. These firms compete on a local, national, or international basis for portions of delivery markets previously served largely or exclusively by the Postal Service. In 1971, the newly organized Postal Service faced limited competition from two private carriers, the Railway Express Agency (REA) and United Parcel Service (UPS). This competition was largely confined to the surface delivery of packages, although UPS did offer a limited, second-day air package service beginning in 1971. REA never posed a strong competitive threat to the Postal Service. Its business had dropped off steadily since the late 1940s and, in 1975, REA filed for bankruptcy and terminated all operations. Although UPS was a growing business, it trailed well behind the Postal Service. In 1971, UPS’ surface deliveries totaled about 547 million packages. By comparison, the Postal Service delivered approximately 968 million pieces of fourth-class mail in 1971. The Postal Service held an even greater edge in second-day air services, delivering about 197 million Priority Mail pieces in 1971, compared to 11 million second-day UPS air shipments. In 1971, when the Postal Service introduced an experimental Express Mail service, Federal Express (FedEx) was not yet operating. FedEx began overnight delivery operations in April 1973. FedEx discloses limited information on its delivery volumes but did report handling an average of 35,000 packages per night at its Memphis, TN, hub in 1978. 
If FedEx maintained this rate for 250 business days, its package volume would have totaled nearly 8.8 million pieces in 1978. During that same year, the Postal Service delivered approximately 8 million Express Mail pieces. Thus, on the basis of these data, it appears that FedEx may have surpassed the Postal Service as the leading overnight delivery carrier in less than 5 years. The limited competitive environment of the early 1970s was indicated in a study of the Statutes mandated by Congress in the 1970 Act and conducted by the Postal Service’s Board of Governors in 1973. In that study, a Postal Service contractor, McKinsey & Company, was to review the threat of private sector delivery to First-Class (letter) mail. However, to conduct the study, McKinsey had to construct two hypothetical firms because it found that “no comparable real ventures” existed. In 1992, a small number of large, private carriers dominated the expedited letter and package delivery markets. The Postal Service competes for business in those markets through its Express Mail and parcel post services, respectively. Most of those private carriers also have made substantial gains in the deferred delivery market by offering 2-day as well as 3-day air shipments in competition with the Service’s Priority Mail. After the Postal Service suspended the Statutes for delivery of extremely urgent letters in 1979, several private carriers joined the Postal Service, UPS, and FedEx in the expedited letter and package delivery markets. In 1979, Airborne Freight Corporation, a Seattle-based company, began offering expedited letter and package deliveries through its subsidiary, Airborne Express. In 1983, DHL Airways, an established international air courier, entered the domestic expedited mail market. Roadway Package System (RPS) entered the ground package delivery market in 1985. The Postal Service identified FedEx, UPS, Airborne, DHL, and RPS as its chief competitors for expedited mail and package deliveries. 
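The FedEx volume extrapolation above is straightforward to verify; a quick arithmetic sketch of the comparison, using only the figures reported in the text (the 250-business-day year is the report's own assumption):

```python
# Figures reported in the text; the 250-day year is the report's assumption.
packages_per_night = 35_000          # FedEx average at its Memphis hub, 1978
business_days = 250
fedex_annual = packages_per_night * business_days

usps_express_mail_1978 = 8_000_000   # approximate USPS Express Mail volume, 1978

print(f"FedEx (extrapolated): {fedex_annual:,} pieces")   # 8,750,000 ("nearly 8.8 million")
print(fedex_annual > usps_express_mail_1978)              # True
```

The margin is modest, which is why the text hedges that FedEx "may have" surpassed the Postal Service on the basis of these data.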
To indicate their importance as competitors, we compared the revenues of the Service’s domestic Express Mail, Priority Mail, and fourth-class business mail with the total domestic revenues reported by the five firms for all of their expedited and ground parcel services. The aggregate domestic revenues for these services were about $31.7 billion in 1994. Of that total, the Postal Service’s share was just under 15 percent. Our analysis also showed that the Postal Service’s share of total revenue in the expedited letter and package markets was less than the shares of either UPS or FedEx but greater than those of Airborne, RPS, and DHL. (See fig. 3.1.) Postal Service officials estimated that the Service’s share of the expedited delivery market was 18 percent in 1994 and declining. A Postal Service marketing official attributed the loss of expedited mail volumes to the following four factors:
• the Postal Service’s inability to engage in carrier “price wars” common to the highly competitive, expedited mail market, and the Service’s inability to offer discounted postal rates to high-volume customers;
• competitors’ greater capacities to provide shipment-related information services, such as automated package tracking and tracing;
• the less extensive geographic “reach” of the Service’s dedicated Eagle air transportation network, which limits the number of locations where the Service consistently has been able to match or exceed its competitors’ “next-day, morning” delivery performance; and
• public perceptions that private express carriers offer more dependable service than the Postal Service.
Conversely, Service officials estimated that in 1994 the Service’s share of the ground parcel delivery market was 15 percent and growing. They attributed this growth primarily to three factors.
First, because of higher per-piece delivery costs to residential areas, UPS and RPS have imposed surcharges for residential ground parcel deliveries in recent years, thereby making the Service the low-cost provider in the more cost-sensitive, residential segment of the parcel market. Second, the Service improved its performance for parcels drop-shipped for delivery within geographic areas covered by the Service’s bulk mailing centers. Third, the Service improved its service by forwarding packages to recipients’ work locations or leaving packages at residences when no one was home, so customers did not need to make a trip to the delivery post office during specified service hours to retrieve the package. Although the Service has lost market share in the expedited letter and parcel markets over the years, its overall mail volume has continued to grow since 1970. This growth was attributable primarily to increases in mail volumes and revenues for those mail classes largely protected by the Statutes, First-Class and third-class. By comparison, volumes and revenues in those mail classes and subclasses subject to full or significant competition have shown relatively little growth, as shown in figures 3.2 and 3.3. We recently reported that compared to private delivery firms, the Postal Service’s competitive position in the international mail market also eroded after it suspended the Statutes for outbound international letters in 1986. Since that time, private carriers have come to dominate this market, notably FedEx and DHL. Together, these two firms accounted for more than half of the $3.5 billion in total international mail revenues in 1992. The Service’s share of the international mail market declined because private firms offered more competitive services and prices.
All but one of the Postal Service’s principal competitors for expedited letter and parcel delivery services—DHL—also offered deferred (2-day and 3-day) package delivery services, and most were adding other services at the time of our review. None of those five firms disclosed detailed operating data by product line or type of service. However, we were able to compare publicly available data on the kinds of services offered with similar services offered by the Postal Service, as shown in table 3.1. Of the services listed in table 3.1, only deferred letters are covered by the Private Express Statutes. As indicated, four of the five private carriers offered deferred package service. Only one of the five carriers publicly offered a deferred letter service. If the Statutes were relaxed to permit private carriers to deliver deferred letters, it appears the remaining firms easily could add letters to their deferred delivery services for the localities they now serve. Thus, whether private firms deliver larger volumes of letter mail in the future appears to depend less on private delivery capacity than on statutory restrictions or the profitability of such deliveries. Although the Postal Service considers the five carriers discussed above to be its most prominent competitors, many other private firms also offer expedited mail delivery services. As one indication of the number, the Air Courier Conference of America, a trade association whose member firms compete with the Postal Service, reported that it had 78 members in 1995, including most of the Postal Service’s principal competitors. Similarly, the Express Carriers Association listed 64 ground package express carriers in its 1995-96 directory. Most of these carriers serve regional or local markets as an “alternative to the regular common carrier.” As indicated above, private carriers have continued to dominate the expedited letter and parcel markets since our 1992 report. 
At that time, private carriers reported that they were eager to expand their deferred delivery business and to compete for a greater share of the Service’s Priority Mail volumes. Our current review showed continued strong interest in second-day and third-day mail delivery. Some of the carriers that offered deferred letter or package delivery services (Airborne, FedEx, RPS, and UPS) acknowledged that such services had been among their fastest growing business segments in recent years, though none were willing to provide specific data. They generally predicted continued strong growth in the deferred delivery market. Priority Mail represents the majority (about $3 billion, or nearly 58 percent) of the combined revenues from the Postal Service’s Express, Priority, and fourth-class mail services in fiscal year 1995. Priority Mail is a category of First-Class mail that the Service has marketed as a competitively priced, 2- or 3-day delivery service available throughout its domestic service areas. Postal Service officials estimated that as much as 70 percent of approximately 869 million Priority Mail pieces handled in fiscal year 1995 were letters covered by the Private Express Statutes. Notwithstanding statutory restrictions on letter mail delivery, about 375 predominantly local and mostly small delivery firms operate in 47 states and compete in a fast-growing advertising mail market, a subscriber publication delivery market, or both. Known collectively as the “alternate delivery industry,” these firms compete with the Postal Service for the delivery of third-class advertising mail that does not fall within the definition of a letter and for second-class publications mail. According to an industry group, the Association of Alternate Postal Systems (AAPS), these firms “provide delivery and distribution of circulars, tabloids, magazines, catalogues, directories, flyers, samples and other printed material and advertising outside of the U.S.P.S. 
mail stream.” The number of these firms grew rapidly in the late 1980s and early 1990s. More recently, many have entered into nationwide alliances to market their services to national advertisers and publishers. Collectively, they represent a significant and growing source of additional private sector competition for mail delivery. From the information we obtained, we determined that the contemporary alternate delivery industry began to take shape in the 1970s and grew rapidly in the 1980s and 1990s. From 1982 to 1994, the number of alternate delivery firms increased from 108 to 387, before declining slightly to 375 in 1995. Most new start-ups (226 of the 375 firms) occurred over roughly a 5-year period, from 1988 to 1993. Several events stimulated expansion and transformation of the industry in the 1980s. One was the growth of computerized database marketing programs that allowed advertisers to target direct mail advertising to specific geographic and demographic groups. This led to a proliferation of highly customized household mailing lists, which also contributed to significant growth in third-class advertising mail delivered by the Postal Service. In addition, three significant increases in the Service’s third-class postage rates that became effective in 1988, 1991, and 1995 prompted some advertisers to seek lower cost delivery alternatives. Another event, and one that many industry experts believe was of greatest significance, was the growing participation of newspapers and other publishers in alternate delivery ownership and operations. Newspaper publishers owned about three-fourths of the firms that entered the market between 1988 and 1994. These publishers, who account for the largest share of the overall advertising market, traditionally relied heavily on revenue from advertisements printed within a newspaper’s pages, known in the trade as “run of press” advertising. However, many advertisers have shifted to less expensive advertising inserts.
As this change took place, newspaper publishers found themselves increasingly in direct competition with the Postal Service. For example, in discussions with us, newspaper industry representatives were outspoken about losing revenue to the Postal Service and the implications of any such losses to the financial health of the newspaper industry. The developments highlighted above both stimulated new entrants to the advertising delivery market and contributed to an unprecedented growth in the Postal Service’s advertising mail deliveries, despite rate increases. As previously illustrated in figure 3.2, the Postal Service delivered about 20 billion pieces of third-class mail in 1971 compared to about 71 billion pieces in 1995. As indicated previously in figure 3.3, this growth rate far exceeded that of the larger First-Class category and significantly narrowed the gap between First- and third-class mail. According to Postal Service regulations, advertising matter under 24 pages and addressed to a specific person or occupant is “letter mail” and thus subject to the Statutes. However, the firms we studied and the Postal Service have found innovative ways of targeting and delivering advertisements to households without using an addressed envelope. Alternate delivery firms we interviewed used a variety of delivery strategies. Many firms made “saturation” deliveries once a week to households to deliver such items as advertising flyers, local government notices, product samples, unpaid community newspapers, and telephone directories. Typically, items were placed in plastic bags and hung on a door knob, placed on a front porch, hung from a hook placed on the mailbox post, placed in a delivery tube attached either to its own stand or a mailbox post, or tossed onto driveways or walkways. For alternate delivery firms owned by newspapers, the core delivery product is what they call a “total market coverage” (TMC) package. 
Generally, a TMC package includes at least one item, such as a free community newspaper or a weekly entertainment supplement, that would qualify for second-class postage rates on the basis of its editorial content if sent through the mail. TMC packages also include advertisements identical or similar to newspaper advertising inserts. The primary purpose of TMC deliveries is to ensure distribution of advertisers’ messages beyond the newspaper’s subscriber base alone. Because the TMC packages also include an item qualifying as a second-class publication, alternate delivery firms do not consider TMC packages to be covered by the Statutes and the Postal Service agrees. Most newspaper-owned alternate delivery firms also deliver one or more of the following products, either concurrently or separately from TMC deliveries: saturation advertising and product samples, weekly newspapers and shoppers’ guides not otherwise contained in TMC packages, and consumer magazines and catalogs. Private delivery of the latter, however, has declined since 1994. The Postal Service also delivers some third-class advertising mail to occupants and boxholders at specific addresses without an address label on the mail itself. For example, ADVO, Inc., one of the Postal Service’s largest customers, specializes in delivering “marriage mail” by combining pieces from several advertisers in a single mailing. An address card, which is separate from the advertising pieces, is sent on the same day. Many product samples are delivered in the same manner. However, the Postal Service describes this mail as “detached label” and still considers it to be “addressed.” Overall, about 17 percent of the Service’s regular third-class advertising mail was addressed to occupants and boxholders in fiscal year 1994, regardless of whether labels were affixed or detached. More recently, in 1995, the Postal Service announced it was planning to implement an experimental “Neighborhood Mail” program. 
In this program, the Service proposed to allow mailers to send advertising materials only to “Neighbor” or “Postal Patron” and to eliminate the requirement that it bear a specific street or box address. The purpose of the neighborhood mail program was to provide lower advertising delivery rates to small, local businesses for unaddressed, saturation mail that did not require significant handling and processing by postal employees. The planned program encountered strong opposition, primarily from the newspaper industry, advertising mailers and companies that provide support services to mailers (such as address lists and labels), and the alternate delivery industry. Consequently, the Service deferred the test and later announced that it would not be done at all. Some newspaper and alternate delivery executives perceived the proposed neighborhood delivery program as an attempt by the Service to take business from them. They questioned the Postal Service’s choice of sites, such as Baton Rouge and New Orleans, Louisiana; Rochester, New York; and Sacramento, California, where alternate delivery firms were already operating. These executives were most concerned about the Service’s choice of Rochester. That city’s major newspaper had discontinued a delivery operation in 1995, reportedly after failing to make a profit. Subsequently, Publishers Express (PubX), a national alternate delivery network with which the newspaper had been affiliated, established its own alternate delivery operation in March 1995. Rochester was the only location nationwide where the firm performed its own deliveries instead of working through a local affiliate; a company official believed PubX had been targeted for harassment by local postal officials and workers. A common characteristic of the alternate delivery firms we reviewed was the goal of increasing the volume and variety of items delivered in order to develop and sustain profitable delivery operations. 
One strategy used to accomplish this objective was the formation of national delivery alliances. Several such organizations have been established, some of which are discussed below. To the extent that several large publishers were involved as founders of or investors in such organizations, they were primarily motivated by a desire to reduce mailing costs and improve delivery service. Locally owned delivery companies affiliated with national networks in order to benefit from the collective marketing of members’ delivery services to national advertisers or publishers. Of about 261 newspapers engaged in alternate delivery that responded to a 1994 survey conducted by the Newspaper Association of America (NAA), about one-quarter indicated they were affiliated with national delivery networks. All of the firms included in our review that were owned or operated by newspaper publishers had contract or licensing agreements with one of two national alternate delivery marketing organizations. One of these organizations, Alternate Postal Delivery (APD), Inc., is headquartered in Grand Rapids, Michigan. Originally formed in 1978, APD issued its first stock offering in 1995 and now is publicly traded. APD has a network of about 40 private delivery affiliates capable of delivering address-specific items to about 10 million households and saturation materials to about 30 million households. The second organization, PubX, was established in 1989 by a group of equity partners led by Time, Inc., and included other magazine publishers, catalogers, printers, and paper companies. PubX, which was headquartered in Marietta, Georgia, built a national network through licensing agreements with predominantly newspaper-owned alternate delivery firms. The number of PubX licensees peaked at 32 in 1994; collectively, they delivered about 60 million pieces that year. 
However, the number of PubX licensees declined to around 25 by mid-1995, and some of the remaining licensees cut back on second-class magazine deliveries, which constituted the core of PubX’s business. Both APD and PubX tried several marketing strategies to increase the volume of pieces delivered. Officials of both firms said that the ability to deliver magazines profitably, and at rates lower than the Postal Service’s second-class postage rates, depended on developing a market for so-called “ride-along” advertising, i.e., normally third-class mail pieces that may be delivered to specific addresses when included as inserts with second-class publications. However, ride-along advertising did not develop as fully as anticipated, and many newspaper publishers reduced or terminated magazine deliveries arranged through APD or PubX. Largely as a result of the decline in private magazine deliveries, the combined number of APD and PubX affiliates dropped from 82 in 1993 to 47 by the end of 1995. APD and PubX also have sought to increase the use of alternate delivery by mail order catalog publishers. Toward that end, some of these publishers participated in a 1994 catalog delivery test coordinated by the Direct Marketing Association (DMA). Overall, however, DMA found that Postal Service delivery resulted in more orders and higher dollar sales than private delivery of the same catalogs. Separately, a representative of the Mail Order Association of America said that for large nationwide catalogers, any delivery cost savings associated with alternate delivery were not great enough, given the industry’s relatively limited geographic reach when compared to the Postal Service, to justify shifting portions of their deliveries to private carriers and bearing the additional administrative costs of using multiple service providers. Nonetheless, the JC Penney Company, one of the nation’s largest catalogers, had distributed a portion of its catalogs through former PubX licensees.
In January 1996, it terminated its agreement with PubX and reverted to Postal Service delivery in those markets. In February 1996, PubX’s board of directors voted to discontinue all operations. The board cited a number of factors for the decision, including a period of stable postal rates, the “historically low” increase in second-class rates that became effective in January 1995, improved service by the Postal Service, and the Service’s strong financial results in fiscal year 1995. The board said that “the recent improvements within the Postal Service have diminished the need for a hard copy delivery alternative. However, if the USPS cost trends revert to prior levels, hard copy delivery alternatives will once again develop.” “We ran them out of business by improving service and keeping costs low! I can’t say that I am sorry to see them go. But they taught us two valuable lessons. First, if we don’t do our jobs, somebody else will. And second, when we get our act together, we can be one hell of a competitor.” APD and PubX also pursued, unsuccessfully, changes in Postal Service regulations to permit greater competition with the Service for the delivery of advertising letter mail. As previously noted, they and other parties were members of the Coalition for the Relaxation of the Private Express Statutes that petitioned the Postal Service to suspend the letter mail monopoly for some or all third-class mail in January 1995. The Coalition founder told us that its principal target was catalogs of less than 24 pages. Despite the strong interest expressed by alternate delivery firms in expanding their business, industry leaders have acknowledged that they have a long way to go to increase capacity to levels that would represent a significant competitive threat to the Postal Service. 
In its September 1995 stock prospectus, APD said that “to present a viable alternative to USPS delivery and to attract a substantial number of national customers, [APD] must expand the scope of its delivery services to additional ZIP Codes across the United States, including additional major metropolitan areas.” As an indication of APD’s aggressiveness in this regard, it announced in February 1995 that it had signed letters of intent to add 12 former PubX licensees to its affiliate network. We obtained information on two national organizations that primarily delivered publications—the National Delivery Service (NDS), of Princeton, New Jersey; and Nationwide Alternate Delivery Alliance (NADA), which is co-located in Washington, DC, and New York. Both organizations compete with the Postal Service for the delivery of second-class mail and had plans to expand their delivery operations. NDS is a subsidiary of Dow Jones and Company, which publishes The Wall Street Journal and Barron’s. Dow Jones began testing alternate delivery of the Journal to business subscribers in the 1970s because the publisher did not believe that the Postal Service could meet its subscribers’ demands for timely delivery, i.e., by the start of the business day. NDS was established as a separate organizational division of Dow Jones in 1981. NDS did not deliver advertisements and had no plans to do so, according to an NDS official. NDS initially delivered the Journal primarily to businesses but expanded deliveries to residential subscribers. By July 1995, Dow Jones had shifted about two-thirds of the Journal’s estimated 1.55 million daily domestic subscriptions from the Postal Service to NDS, about 60 percent of which NDS delivered to businesses and the balance to residences. NDS also delivered about 1.3 million copies of Barron’s each year to business and residential subscribers. NDS has begun to sell delivery services to other publishers but has limited the service to business publications.
On the basis of information provided by an NDS official, we estimated that NDS delivered roughly 280 million second-class publication pieces in 1995. In 1995, NDS’s work force included about 3,500 carriers, over 90 percent of whom were part-time workers. NDS also relied on some independent contractors and, increasingly, affiliate newspapers to make deliveries. An NDS official said that under the affiliate program, participating newspapers’ carriers deliver the Journal at the same time they deliver the local daily newspaper. Eventually, he said this will allow NDS to shift most of the remaining Journal subscriptions from the Postal Service to private delivery. The other national delivery organization, NADA, was formed in 1990 to market the collective delivery capabilities of its members to national business publishers. In July 1995, NADA membership included 74 independent newspaper and publication distributors operating in 55 metropolitan markets. According to NADA’s president, affiliates deliver only second-class items, not third-class advertising material. Affiliates in larger markets typically deliver roughly equal numbers of newspapers and business publications primarily to businesses. Affiliates in smaller markets typically deliver about 75 percent newspapers and 25 percent business publications. He said that if the Statutes were changed to permit more private delivery, some affiliates might expand into third-class delivery. Collectively, NADA affiliates delivered about 5 million pieces every week, or about 260 million pieces annually. Although the Postal Service delivers the vast majority of advertisements and periodicals, the volume of these items delivered by private firms could grow in future years if the Statutes were to be relaxed or repealed. 
We obtained from an industry official estimates of about $500 million in annual private advertising delivery revenue, which reportedly includes “both local delivery markets throughout the United States and the smaller national delivery market.” This revenue, when combined with the Postal Service’s total revenues from advertising mail of about $12.7 billion for fiscal year 1994, indicates that the industry’s portion of the total advertising delivery market was about 4 percent. By combining volume estimates provided to us by selected national firms, we developed an indication of the magnitude of private periodical deliveries. The combined annual volume estimates provided by NDS and NADA were about 540 million pieces. (Due to the discontinuation of PubX operations in 1996, and the general decline in alternate delivery of consumer magazines to residential subscribers, we excluded APD and PubX second-class delivery volumes from our analysis.) The Postal Service delivered about 10.2 billion pieces of second-class mail, mostly periodicals, in fiscal year 1995. When compared with the NDS-NADA volumes, the Postal Service delivered about 95 percent and those organizations and their affiliates about 5 percent of the total. The Postal Service could face greater competition for delivery of advertisements and periodicals if the Statutes were relaxed or repealed because relatively minimal investment is required to provide these services. A Service official also told us that with greater competitive freedom, mail presort bureaus could enter the private delivery market by expanding current operations or forming alliances with alternate delivery firms. Because presort bureaus already regularly receive and sort letters for many mailers, the official suggested that acquisitions, mergers, alliances, or other business arrangements between these bureaus and alternate delivery firms could be quite profitable ventures. 
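The two market-share figures above follow directly from the reported totals; a short sketch reproducing the arithmetic (revenue in billions of dollars, volume in pieces):

```python
# Advertising delivery: industry revenue estimate vs. USPS FY1994 advertising mail revenue
private_ad_revenue = 0.5    # $ billions (industry official's estimate)
usps_ad_revenue = 12.7      # $ billions
ad_share = private_ad_revenue / (private_ad_revenue + usps_ad_revenue)
print(f"Private share of advertising delivery: {ad_share:.1%}")   # about 4 percent

# Periodical delivery: NDS + NADA volume estimates vs. USPS FY1995 second-class volume
nds_nada_pieces = 540e6     # combined NDS and NADA annual estimates
usps_second_class = 10.2e9
pub_share = nds_nada_pieces / (nds_nada_pieces + usps_second_class)
print(f"Private share of periodical delivery: {pub_share:.1%}")   # about 5 percent
```

In both cases the share is computed against the combined private-plus-Postal Service total, which is how the percentages in the text were derived.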
Primarily on the basis of (1) existing private mail delivery capacity, (2) private firms’ actions and stated interests regarding expansion of mail delivery services, and (3) interviews with mailers, we determined that a greater percentage of Priority Mail volumes than other classes of letter mail would be at immediate risk if the Private Express Statutes were to be relaxed or repealed. Lower percentages of First-Class and third-class letters also could be diverted to private delivery, but probably not as quickly or to the same extent as Priority Mail. On the basis of the Service’s revenue and cost data, the financial effects of volume losses would vary greatly among those classes of mail largely comprised of letters. A loss of most or all Priority Mail would have a lesser effect on postage rates than a smaller loss, such as 5 to 10 percent, of First-Class letter volume. Similarly, a loss of 25 percent of the protected third-class mail would have about the same effect on the price of the First-Class stamp as a 5-percent loss of First-Class letter volume. However, a range of factors could increase or decrease the Service’s future mail volumes and postage rates. This makes it difficult to estimate how a change in the Statutes might affect the Service’s finances and postage rates. On the basis of our interviews with the Postal Service’s competitors and mailers as well as analysis of various Service revenue, cost, and postage rate data, we assessed (1) the relative risk of volume losses of First-Class, Priority (a subclass of First-Class), and third-class mail, which make up most of the Service’s letter mail protected by the Statutes; and (2) the estimated effects of such losses on postage rates, particularly the basic First-Class letter mail rate, which is currently 32 cents. Our assessment showed that the risk of loss and the likely impact of such loss at the time of our review and in the near term would vary among the three segments of the Service’s mail stream. 
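The rate-sensitivity comparisons above turn on each class's per-piece contribution to the Service's overhead (institutional) costs: the added cents on the First-Class stamp needed to replace a volume loss are roughly the lost contribution divided by the remaining First-Class volume. The sketch below illustrates the mechanics only; the volumes and per-piece contributions are hypothetical round numbers chosen for the example, not the Commission's ratemaking figures:

```python
def added_cents_per_stamp(lost_pieces, contribution_cents, remaining_first_class_pieces):
    """Cents that would have to be added to the basic First-Class rate to
    recover the overhead contribution lost with the diverted volume
    (a deliberately simplified model of the ratemaking arithmetic)."""
    return lost_pieces * contribution_cents / remaining_first_class_pieces

# Hypothetical illustration only -- not actual Postal Service data.
fc_volume, fc_contribution = 90e9, 8.0   # First-Class: pieces, cents of overhead per piece
tc_volume, tc_contribution = 45e9, 3.2   # protected third-class: pieces, cents per piece

fc_loss = added_cents_per_stamp(0.05 * fc_volume, fc_contribution, 0.95 * fc_volume)
tc_loss = added_cents_per_stamp(0.25 * tc_volume, tc_contribution, fc_volume)
print(f"5% First-Class loss:  +{fc_loss:.2f} cents on the stamp")
print(f"25% third-class loss: +{tc_loss:.2f} cents on the stamp")
```

With these hypothetical inputs the two losses land within a few hundredths of a cent of each other, mirroring the report's observation that a 25-percent loss of protected third-class mail and a 5-percent loss of First-Class letters can have about the same effect on the stamp price, because third-class pieces contribute less overhead per piece.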
(See table 4.1.) Included in our analysis of the risk of loss were structured interviews with private, nationwide express and parcel carriers identified by the Postal Service as its chief competitors (Airborne, DHL, FedEx, RPS, and UPS); a judgmental sample of alternate delivery carriers of varying sizes located throughout the United States, including both newspaper and independently owned firms; and various organizations, mostly nonprofit associations, who collectively represented most of the nation’s commercial and institutional mailers. We also assessed the sensitivity of such losses by using revenue, cost, and postage rate data provided to us by the Commission, which it had used for setting the current 32-cent basic letter mail rate and other new postal rates that became effective in January 1995. We supplemented our analysis of historical ratemaking data by using a financial forecasting model that presented estimates for 10 future years, which was developed by Price Waterhouse LLP (Price Waterhouse) under contract with the Postal Service. As discussed below, the risk of loss varied among the mail stream segments because of (1) differences in delivery capacity, prices, and past competitive actions of private firms that might deliver certain letters that now are protected under the Statutes; (2) the extent to which mailers indicated that they were satisfied with current service and rates; and (3) differences among mailer representatives relative to actions they might take in response to changes in the Statutes. The financial effects of volume losses also would vary, by mail class, due to differences in the overall contribution to the Postal Service’s overhead costs among the various classes. Private delivery firms already have the capacity to deliver a significant portion of those letters designated as Priority Mail, currently covered by the Statutes. Priority Mail consists of both letters and packages. 
Because such mail is sealed from inspection, the Service does not know how many Priority Mail pieces are letters, but it estimates that as many as 70 percent are. Of the three letter mail classes and subclasses, Priority Mail would be most susceptible to immediate and strong competition if the Statutes were changed to allow competitors to set prices freely and deliver such letters. As noted earlier, Priority Mail is a subclass of First-Class mail. The minimum rate is $3.00 for Priority Mail pieces. Under current Service regulations, competitors must charge at least $6.00, or double the applicable Priority Mail rate, to provide expedited, 2- or 3-day delivery of items defined as letters and weighing 12 ounces or more. We asked private carriers the extent to which they would pursue delivery of protected letters, and we also asked mailers the extent to which they might divert such letters to private carriers, by mail class and subclass, if the Statutes were changed. Of the Service’s principal competitors, national express/parcel carriers, four out of five said they were likely to pursue delivery of Priority Mail letters. Specifically, three said they were “very likely” and one said it was “somewhat likely” to seek additional 2-day letter business; only one said it was unlikely to do so. By contrast, alternate delivery carriers, who compete largely for second-class publications (mostly newspapers, magazines, and other periodicals) and third-class advertising and lack the nationwide delivery infrastructure of the national carriers, expressed little interest in delivering Priority Mail. Only 4 of 17 (24 percent) said they were likely to seek a share of the Priority Mail letter business if the Statutes were changed. (See fig. 4.1.) Only 3 of 12 (25 percent) mailer groups told us they were likely to divert Priority Mail letters to private carriers if the Statutes were modified, and 2 more said they were as likely as unlikely to do so. 
Of the remainder, five said they were unlikely to do so, and two were undecided. Both in our 1992 report and in our more recent discussions with private carriers and Postal Service officials, we obtained other information that suggested Priority Mail may be at greatest risk for immediate and substantial volume losses to private carriers. For example: • Nationwide private carriers may be able to meet or surpass the Postal Service in price, range of services, and reliability. • Airborne, FedEx, and UPS each claim to deliver to virtually all domestic U.S. addresses; RPS expects to be able to do so by the end of 1996; and DHL advertises that its overnight service is available “to all major U.S. business centers.” Thus, most of the private express and parcel carriers say that they maintain essentially universal delivery networks. • National carriers already have lobbied Congress to suspend the double-postage rule, discussed in chapter 2, in an effort to facilitate adding letters to their deferred package deliveries. • Declining growth in next-day morning deliveries has caused overnight carriers to consider expanding into the fast-growing but less expensive next-day afternoon, second-day, and third-day delivery markets. This trend is reflected in the Postal Service’s Priority Mail volume, which increased nearly 64 percent between 1990 and 1995, from about 518 million to 869 million pieces, while its Express Mail (next-day) volume declined about 3 percent. • Some expedited mail delivered by private carriers is too large to fit in residential mailboxes, is delivered inside to businesses, or requires a signature for delivery. Consequently, most private carriers we interviewed said they are not dependent on greater access to mailboxes in order to expand deferred deliveries. • Unlike the Postal Service, private carriers are able to offer both volume and negotiated discounts to their customers.
Further, many mailers perceive private carriers as more reliable than the Postal Service and find their tracking and tracing services superior to those of the Postal Service. • Priority Mail generates high revenue per mail piece, making it especially attractive to private carriers. In 1994, on average, Priority Mail generated gross revenue of $3.45 per piece and net revenue of $1.79 per piece, compared to gross revenue of about 35 cents per piece and net revenue of 12 cents per piece for First-Class mail. For fiscal year 1995, Priority Mail represented less than one-half of 1 percent of all mail pieces handled by the Service but generated almost 6 percent of total revenues. • As noted in chapter 3, the five principal nationwide carriers already command an overwhelming share, about 85 percent, of the combined overnight, 2- and 3-day, and parcel delivery markets. The Service measures its mail volume on the basis of the following origin and destination pairs: business-to-business, business-to-household, household-to-business, and household-to-household. In 1994, about 90 percent of all mail was generated by businesses (about 55 percent of all mail went from businesses to households, and 35 percent went from businesses to businesses), while 10 percent emanated from households. We asked private carriers whether they were most interested in providing business or residential service. We asked mailers in which of those market segments they were likely to divert some mail to private carriers if given greater freedom to do so. Among the five national express/parcel carriers, all but one said they were “somewhat” or “very likely” to pursue additional business-to-business deliveries. The remaining carrier said it was as likely as unlikely to do so. However, only two of the five carriers (40 percent) said they were likely to pursue a greater share of the business-to-household delivery market, while two others said they were as likely as unlikely to do so. 
None said they were likely to pursue household-to-business deliveries, although one indicated it might be as likely as unlikely to consider that market segment. The vast bulk of Priority Mail falls into the first two categories. (See fig. 4.2.) We asked mailers to assess the Postal Service’s performance relative to its competitors for largely protected mail classes and subclasses. Specifically, we asked them whether the Service’s performance on (1) timeliness and dependability of mail delivery and (2) postage rates was better than, as good as, or worse than its competitors’. The overall results of our interviews, in which we combined responses for “as good” and “better,” are shown in figure 4.3. Our interviews showed that overall, mailers did not rate Priority Mail service well in comparison to service provided by private carriers. Only 4 of 14 mailers (about 29 percent) said that the Service was as good as or better than its competitors with regard to service timeliness and dependability. Specifically, only one said Priority Mail service was better than that provided by private carriers, three said it was about the same, six said it was somewhat or much worse, and four had no opinion. The Service’s on-time delivery rates for Priority Mail generally have been below 90 percent. For example, during fiscal year 1995, the Service delivered 82 percent of Priority Mail shipments within its 2-day standard and 94 percent within 3 days. Among the same respondents, 8 of 14 (57 percent) said that Priority Mail rates were as good as or better than private carriers’ rates, 2 said the Postal Service charged more than its competitors, and 4 had no opinion. As noted in our 1992 report and confirmed in discussions with nationwide carriers and mailers, Priority Mail users generally tend to be more price-sensitive than overnight mailers.
On the basis of our overall assessment of industry capacity and interest, we believe that if private delivery of Priority Mail letters were to be permitted and the double-postage rule were eliminated, carriers would offer discounted rates to volume mailers for deferred letters as they now do for overnight services and deferred package deliveries. This likely would reduce or negate the Service’s perceived price advantage and encourage even more mailers to divert Priority Mail letters to private carriers. The actual price that competitors might charge for deferred (second-day) delivery of letters cannot be determined easily for at least two reasons. First, pricing data are not readily available to the public because private carriers do not publish their best prices. Such prices are offered through individual contracts with customers and considered by the carriers to be proprietary data. Second, the lowest prices that private carriers might charge on the basis of their actual costs are difficult to predict because the Service’s double-postage rule results in an artificial price “floor” on rates private carriers must charge for urgent letter delivery. We are unable to predict what rates private carriers would charge if allowed to set prices freely based on market conditions. Available data indicate that private firms competing with the Service for overnight, express deliveries currently offer contracted rates to large-volume customers approaching the Service’s lower Priority Mail rates. For example, under a contract with the U.S. General Services Administration (GSA), which was in effect during the entire period covered by our review, FedEx charged federal departments and agencies $3.75 for letters (up to 8 ounces), and $3.99 for packages weighing from 1 to 3 pounds, for next-day delivery anywhere in the United States, including Alaska, Hawaii, and Puerto Rico.
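The price “floor” created by the double-postage rule discussed above can be expressed with a small sketch. The exact form of the rule shown here is an assumption for illustration (the greater of $3.00 or twice the applicable U.S. postage); it is chosen so that, at the $3.00 minimum Priority Mail rate described in the text, the floor works out to the $6.00 figure the report cites.

```python
# Illustrative sketch only; the exact regulatory formula is an assumption,
# not taken verbatim from the report: the minimum lawful private-carrier
# charge for an urgent letter is modeled as the greater of $3.00 or
# twice the applicable U.S. postage.

def private_minimum_charge(applicable_postage):
    """Lowest price a private carrier may charge under the rule as sketched."""
    return max(3.00, 2 * applicable_postage)

# At the $3.00 minimum Priority Mail rate, the floor is $6.00, which is
# why private carriers cannot undercut Priority Mail prices for letters.
floor = private_minimum_charge(3.00)
```

Because this floor, not the carriers’ costs, sets the minimum private price, eliminating the rule would let carriers price deferred letters freely, which is why the report cannot infer their likely rates from published data.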
According to information obtained from GSA covering an 11-month period from February to December 1995, FedEx achieved monthly on-time rates ranging from 91 to 97 percent for overnight shipments delivered by noon the next day, the standard in the contract. Excluding the months when lapses in budgetary authority resulted in partial government shut-downs—November and December 1995—monthly on-time delivery rates ranged from 93 to 97 percent. For the same period, FedEx handled about 9.7 million shipments for government mailers subject to next-day noon or earlier delivery requirements. In summary, the data we gathered and summarized in figures 4.1 through 4.3 indicate that (1) most mailers believed the Service’s Priority Mail rates were as good as or better than its competitors’, but the timeliness and dependability of Priority Mail service was not; (2) most nationwide private carriers would be ready and willing to deliver Priority Mail letters, particularly business-to-business mail, if the Statutes were changed; and (3) about one-fourth of the mailers we interviewed said they likely would divert Priority Mail letters to private carriers if the Statutes were changed to permit them to do so. Private carrier interest and delivery capacity, combined with mailers’ willingness to shift some mail to private carriers, indicated the Service could be at high risk of losing significant portions of its Priority Mail volume if the Statutes were relaxed. Although significant portions of all three letter mail classes and subclasses could be lost to private carriers, First-Class letters appear to be at less risk than Priority Mail or third-class letter mail. As indicated previously, nationwide express and parcel delivery carriers expressed little interest in delivering First-Class letter mail compared to the more lucrative overnight and deferred delivery markets.
Most (65 percent) of the alternate delivery firms we interviewed expressed some interest in delivering First-Class letters. Most also said that they were likely to pursue additional opportunities in the business-to-business and business-to-household segments of the mail market, which together account for nearly 90 percent of all First-Class letters. Despite their apparent interest in delivering First-Class mail, most alternate delivery carriers currently lack the capacity to sort large quantities of addressed mail. If the Statutes were changed, those carriers may have a greater incentive to invest in the necessary capacity, but any significant increases could take several years to develop to the point where First-Class volumes could be affected materially. Among the mailers we interviewed, only 2 of 12 (17 percent) said that they were somewhat or very likely to use private carriers to deliver First-Class mail, while 2 others said that they were as likely as unlikely to do so. Of the remaining eight mailers, six said that they were unlikely to use private carriers if the Statutes were changed, and two did not respond. In addition, 11 of 14 mailers (79 percent), including 2 newspaper associations, rated First-Class mail service and rates as good as or better than competitors’ service and rates. An additional factor of importance to mailers was safeguarding personal or confidential information, which characterizes much of First-Class mail. Consequently, retaining the restrictions on nonpostal access to private mailboxes likely would help shield the Postal Service from private competition for delivery of First-Class letters. When asked generally if they would consider using private carriers to deliver protected mail if they were denied access to mailboxes, only one-third of the mailers we interviewed (4 of 12, or 33 percent) said they were likely to do so.
However, one-half of the same mailers (6 of 12) said they were likely to use private carriers to deliver letter mail if they were allowed to place it in mailboxes. Mailers who do regular billing of residential customers see the restrictions on mailbox access as a reason to use the Postal Service. However, before the mailbox restrictions were imposed (in 1934), some utility companies were using mailboxes for private delivery of monthly billings. Where utility companies still regularly read residential water, gas, and electric meters, employees might deliver monthly bills again if the access restrictions were lifted. In addition, some newspaper officials with whom we spoke indicated that they would consider sending subscription statements via their own private carriers if mailbox restrictions were lifted. The mailbox restriction would be less likely to shield the Postal Service from competition for Priority Mail and heavyweight First-Class mail. Typically, this mail is delivered to businesses and often is too large to fit in residential mailboxes. In addition, national carriers often rely on a signature for delivery. In general, they did not see lack of mailbox access as a barrier to pursuing increases in their shares of these markets. Some heavier weight First-Class mail could be exposed to a level of risk similar to Priority Mail if private firms could freely set prices to compete with the Service. For example, First-Class letters weighing between 8 ounces and 12 ounces have postage rates ranging from $1.81 to $2.62. First-Class mail in this weight range could be attractive for delivery by private carriers because they may be able to deliver some or most of it profitably at prices competitive with the Service’s rates. However, this mail represented less than one-tenth of 1 percent of the Service’s total volume in fiscal year 1995. 
To summarize, the data we gathered indicate that most mailers rate First-Class postage rates and service as good or better than competitors’ rates and service. If the Statutes were relaxed and mailbox access restrictions remained intact, mailers would not be likely to shift First-Class mail to private delivery. Most private carriers with existing, nationwide delivery capabilities expressed little interest in pursuing First-Class letter mail delivery. Carriers who operated local delivery networks indicated they would pursue some First-Class mail deliveries if the Statutes were relaxed or repealed. Although the alternate delivery industry has developed rapidly in recent years, the Service still dominates the advertising mail delivery market. As noted in chapter 3, the Service’s share of the advertising delivery market, most of which is third-class mail, is about 96 percent. The Service’s huge delivery infrastructure, high volume of third-class mail, and relatively low postage rates, which are structured to retain third-class mailers, reduce the likelihood that the Service would lose as great a percentage of third-class mail as Priority Mail over the next few years. Alternate delivery firms saw third-class mail as their primary market niche. Nearly 90 percent of the alternate delivery firms we interviewed expressed an interest in pursuing additional third-class mail business. Specifically, 12 of 17 said they were very likely and 3 said they were somewhat likely to do so, while only 2 said they were unlikely to seek additional third-class business. Similarly, 13 of 17 (76 percent) of the alternate delivery firms said that they were interested in pursuing additional business-to-household deliveries, without regard to mail class or subclass. According to the Postal Service, most third-class mail (about 73 percent) falls into the business-to-household market segment. 
Most alternate delivery firms (about 65 percent) also expressed an interest in adding more business-to-business deliveries if the Statutes were changed. Specifically, 11 of 17 said that they were somewhat or very likely to do so, while 2 more said they were as likely as unlikely to do so. By contrast, only 3 of 17 alternate delivery firms said they were likely to pursue household-to-business deliveries, and 1 said it was as likely as unlikely to do so. (See fig. 4.2.) Among the mailers we interviewed, about 42 percent (5 of 12) said that they would consider using private carriers for third-class mail delivery, 2 more said they were as likely as not to use private carriers, 3 said they were unlikely to mail privately, and 2 had no opinion. (See fig. 4.1.) Only 5 of 14 mailers we interviewed (36 percent) rated the timeliness and dependability of third-class mail delivery and third-class postage rates as good as or better than competitors’. (See fig. 4.3.) Recent actions taken by the principal third-class mailers’ groups suggest that their members want greater freedom to select carriers to deliver advertising mail. As indicated previously, the Advertising Mail Marketing Association (AMMA), formerly known as the Third Class Mail Association, twice has participated in formal requests to suspend the Statutes for third-class mail. Most recently, in 1995, AMMA was joined by the Direct Marketing Association (DMA). Collectively, AMMA and DMA members generate the vast majority of advertising mail volume. While third-class mailers may be willing to divert more mail to alternate delivery carriers, current delivery capacity is limited. Most alternate delivery carriers lack the capacity to process and deliver large quantities of third-class mail.
If the Statutes were changed, those carriers may have a greater incentive to invest in the necessary capacity, but any significant increases could take several years to develop to the point where the Service’s third-class mail volumes could be affected materially. The Service, however, would appear to have a competitive advantage for third-class mail delivery in the foreseeable future, given the substantial volume of third-class mail that it handles and its worksharing arrangements with advertising mailers. In summary, third-class mailers were not as satisfied with the postage rates they must pay or the timeliness and dependability of third-class mail delivery as with competitors’ rates and delivery service. Many of these mailers said that they would likely divert third-class mail to private firms if the Statutes were relaxed. Similarly, most alternate delivery carriers said that they were likely to pursue additional third-class and business-to-household mail deliveries if the Statutes were relaxed. Because the collective capacity of the alternate delivery industry remains limited in comparison to the Postal Service, we believe the Service faces a lower risk of substantial third-class mail losses, compared to possible Priority Mail and First-Class mail losses. Although our analysis shows that Priority Mail likely would be at greatest risk of loss to private delivery and third-class mail would be at low risk, the effects of a loss of such mail volume on postage rates would be less significant than if the Service experienced similar or smaller percentage losses of First-Class letters. This is because First-Class mail not only represents the largest single component of the mail stream in terms of volume and revenue, but also contributes substantially more than Priority Mail and third-class mail to the Service’s institutional costs, which tend to equate to overhead in the private sector.
In fiscal year 1995, First-Class mail accounted for 71 percent of the Service’s institutional costs, compared to 20 percent for third-class mail and 8 percent for Priority Mail. (See fig. 4.4.) Under the 1970 Act, the revenue derived from all of the Service’s mail deliveries and other revenue-producing activities, such as philatelic and money order sales, must cover all of its operating costs, both in total and for each class and subclass of mail—the “break-even” requirement. To set postage rates, the Service assigns some of these costs directly to First-Class mail, Priority Mail, third-class mail, and other classes and subclasses. The amount by which each class’s revenue exceeds its assigned costs is available to cover the Service’s “institutional” (or overhead) costs, which are costs not attributed to any class or subclass. In theory, when the Service loses enough volume of a particular mail class or subclass, some portion or eventually all of the cost assigned to that mail class would be avoided altogether by the Service. In contrast, a decline in mail volume would not be expected to reduce institutional costs similarly. Thus, virtually all of the institutional costs previously covered by the lost mail volume must be redistributed. The extent to which the Service’s rates for different classes and subclasses of services would be affected by a decline in mail volume depends in part on the extent to which the Service would continue to incur the cost associated with that lost volume. Conceptually, a loss of mail in those classes and subclasses making the highest contribution to institutional costs would have the most adverse effect on the Service’s rates. Conversely, a loss of mail with the lowest contribution to institutional costs would have the least adverse effect.
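The break-even mechanics described above can be illustrated with a small numeric sketch. All of the figures below are hypothetical, chosen only to mirror the report’s qualitative point; they are not the Service’s actual ratemaking data.

```python
# Hypothetical figures for illustration only -- NOT the Service's actual
# ratemaking data. The sketch shows why, under the break-even
# requirement, losing volume in a class that contributes heavily to
# institutional (overhead) costs puts more pressure on remaining rates
# than losing the same percentage of a low-contribution class.

classes = {
    # name: (annual volume in pieces, unit contribution to overhead in $)
    "first_class": (90_000_000_000, 0.16),
    "priority":    (870_000_000,    1.79),
    "third_class": (70_000_000_000, 0.06),
}

def overhead_contribution(loss_class=None, loss_pct=0.0):
    """Total contribution to institutional costs after a volume loss."""
    total = 0.0
    for name, (volume, unit) in classes.items():
        if name == loss_class:
            volume *= (1 - loss_pct)
        total += volume * unit
    return total

baseline = overhead_contribution()
for name in classes:
    shortfall = baseline - overhead_contribution(name, 0.25)
    # Under break-even, the shortfall must be recovered from the rest of
    # the mail stream; spreading it over remaining First-Class pieces
    # (a simplification) indicates relative pressure on the letter rate.
    remaining_fc = classes["first_class"][0] * (0.75 if name == "first_class" else 1.0)
    print(f"25% loss of {name}: about {100 * shortfall / remaining_fc:.2f} cents per First-Class piece")
```

With these illustrative numbers, a 25-percent First-Class loss removes far more overhead contribution than a 25-percent Priority Mail or third-class loss, matching the report’s conclusion that First-Class losses would have the greatest rate effect.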
As indicated above, compared to First-Class mail, Priority Mail and third-class mail represent a relatively small part—28 percent for the total of the latter two, compared to 71 percent for the former—of the Service’s contribution to overhead. Our analysis showed that the greatest risk of financial impact on the Service would result from losses of First-Class mail. Losses of third-class mail and Priority Mail would have a much lesser effect. Because the effects of statutory changes on the Postal Service’s mail volumes cannot be projected with precision for a variety of reasons, some of which are discussed beginning on page 71, we estimated the degree to which the Service’s revenue and postage rates might have been affected if its estimated fiscal year 1995 letter mail volumes, by class and subclass, had been reduced by various percentages. For these estimates, we used Postal Service data that it had provided to the Commission in early 1994 to request new postage rates, including the 32-cent basic letter rate that became effective in January 1995. We also arranged with the Postal Service and its management consulting firm, Price Waterhouse, to develop estimates for us of possible changes in postage rates assuming that the Service’s letter mail volumes were to be reduced by various percentages in future years. To provide an indication of the relative effects on postage rates, we show the estimated effects on the current basic letter rate of 32 cents (for a First-Class letter weighing 1 ounce or less) assuming various percentages of First-Class, Priority, and third-class letter mail volume losses. We used the First-Class rate because of the Service’s mandate in the 1970 Act to provide a uniform rate for at least one class of sealed envelopes, which the Service designated as First-Class letters. 
The effects of First-Class letter volume losses on the Service’s basic letter mail rate would be far more significant than if the same percentage losses occurred for Priority Mail and third-class letters. (See fig. 4.5.) The estimated 3-cent increase from 32 cents to 35 cents depicted in figure 4.5 is not as large as some actual increases in the basic letter rate in past years. Since 1970, the cost of a First-Class stamp has increased 7 times, and the increase has ranged from 3 cents up to 4 cents. The most recent rate increase, which took effect in January 1995, was 3 cents. Even though our analysis indicates that the basic letter rate might not significantly increase as a result of volume losses ranging up to 25 percent for Priority Mail and third-class mail pieces, the effects of letter mail volume losses on the estimated revenue per mail piece would differ among the Service’s mail classes, subclasses, and selected categories, as table 4.2 shows. Although we believe that our analysis is useful as an indicator of the possible effects on the Service’s basic letter rate, the effects of any mail volume losses on other postage rates likely would differ significantly. The effects depend on the amount of assumed percentage volume losses among the affected letter mail classes and subclasses. For example, if First-Class mail volumes had been 25 percent lower in fiscal year 1995, the Priority Mail rate would have increased by an average of 31 cents per piece (effective January 1995, the single piece rate for 2 pounds or less was $3). Using the same assumption, the rate for Express Mail (with a single-piece cost of $10.25 for 8 ounces or less as of January 1995) would have increased an average of 36 cents per piece. 
Consistent with the data shown above, estimates provided by Price Waterhouse show that a 25-percent loss of First-Class mail volume could have a greater effect on the Service’s basic letter mail rate than a 25-percent loss of either Priority Mail or third-class mail volumes. Specifically, the price of a First-Class stamp (for a 1-ounce or less letter) would need to be 41 cents in 2005 under the “baseline” estimate, while an assumed 25-percent First-Class mail volume loss would increase this price to 46 cents. In comparison, an assumed 25-percent loss of Priority or third-class mail would increase the price to 42 cents. (See fig. 4.6.) To provide a different estimate of the effects of a loss of Priority Mail even greater than 25 percent, we requested that Price Waterhouse use its model to estimate how the basic letter mail rates might change over a 10-year period if the Service lost 50 percent of its Priority Mail volume in 1 year. As table 4.3 shows, a 50-percent loss of Priority Mail volume would not have any effect on the estimated baseline rates during 1996-2005. The Service’s second-class mail, consisting mainly of newspapers and periodicals, generally is not subject to the Private Express Statutes. However, if alternate delivery firms were allowed to compete with the Service for delivery of First-Class or third-class mail, many of these firms also might compete more aggressively for second-class mail delivery. In fiscal year 1995, the Postal Service reported that it delivered 10.2 billion pieces of second-class mail; although this mail generated revenues of almost $2 billion, it contributed less than 1 percent ($53.1 million) of institutional costs. Thus, a loss of 25 percent of second-class mail would have a minimal impact on the Service’s overall revenue and postage rates, as compared to a similar loss of First-Class mail. 
The Service’s revenue and its delivery costs could be affected differently among routes in the same part of the country if the Statutes were to be changed and the Service lost mail volume. This is because variations in customer mail density can affect revenue per delivered piece and cost per delivery. The Postal Service believes, and the results of our review tend to support the Service’s belief, that private firms would concentrate their investments and marketing strategies in those areas that would be the easiest to serve and most profitable. If this occurred, a change in the Statutes could have a different impact on the Service’s net revenue in areas where revenue per delivery stop was higher and delivery cost per piece was lower than in areas where the revenue per stop was low and the delivery cost per piece was high. For example, if the Service lost mail volume in some high-volume geographic areas, carrier delivery costs could be unaffected after the volume decline if the carrier must travel the same delivery route and make the same number of stops along the route as before the loss. In this case, the Service would lose revenue to the extent of the volume loss but would not reduce its delivery costs. Postal Service officials told us they had performed some analysis to better understand and measure how revenue and delivery costs might be affected in areas with different demographics as a result of mail volume losses. The analysis showed that a change in the Statutes could most affect those types of mail pieces, such as bank statements, bills, and retail advertisements, that were received in greater numbers by households with higher incomes. Households with the greatest incomes received three times as much mail as those with the smallest incomes, according to the Service’s analysis. 
If the Service lost some mail now delivered to households with higher incomes, it is likely that it would still have to deliver some mail, but fewer pieces of mail, to those households each day. This could mean that the Service would collect less revenue from the mail delivered to these households but still incur about the same costs for the lesser amount of mail delivered to these same households. Our analysis showed that the geographic concentration of alternate delivery firms specializing in distribution of third-class advertising matter tended to correlate highly with population density. However, we also identified some alternate delivery firms operating in more sparsely populated areas. We visited one such firm that had been in business since 1971 and, according to an independent third-party auditor, made regular, weekly deliveries to more than 98 percent of all households within its geographic market area. Some empirical data exist to show that although private firms may focus on more profitable geographical areas, delivering mail in all but the most sparsely populated area could still be profitable. Postal Rate Commission staff reported the results (A Cost Comparison of Serving Rural and Urban Areas in the United States, dated April 1993) of analysis done on the Service’s historical delivery costs for selected city and rural routes. The study showed that the Service’s average labor and vehicle costs per mailbox and the average cost per piece did not differ greatly between city routes and rural routes. The study also showed that rural mail routes were “profitable” for the Postal Service except for very sparsely populated areas that contained less than 2.9 mailboxes per mile. 
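The revenue-cost asymmetry described above (same route and same stops, but fewer pieces) can be shown with a minimal sketch. The figures are hypothetical, chosen only to make the mechanism concrete.

```python
# Minimal sketch, with hypothetical figures, of the asymmetry described
# above: if the carrier must still travel the same route and make the
# same stops after a volume loss, the route's delivery cost is fixed
# while revenue falls piece for piece.

def route_net_revenue(pieces, revenue_per_piece, route_cost):
    # Route cost depends on the route itself, not on pieces carried.
    return pieces * revenue_per_piece - route_cost

before = route_net_revenue(500, 0.32, 120.0)  # $160 revenue - $120 cost = $40
after = route_net_revenue(400, 0.32, 120.0)   # $128 revenue - $120 cost = $8
```

Here a 20-percent volume loss cuts the route’s net revenue by 80 percent, which is why losses concentrated in otherwise profitable, high-density areas could hurt the Service disproportionately.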
Although we believe that the above analyses by the Service and the Commission have provided interesting and useful information, sufficient data were not available to estimate how the Service’s revenue, costs, and postage rates have been or might be affected as a result of changes in mail volumes in specific geographic areas, such as with varying customer mail densities. Therefore, to estimate financial effects relating to changes in the Statutes, we used the systemwide, average revenue and cost per mail piece for the various mail classes and subclasses that were used in the most recent ratemaking proceeding (Postal Rate Commission Docket No. R94-1), which resulted in new rates effective January 1995. For ratemaking purposes, the Service does not differentiate its costs among geographic areas of the country but rather uses “systemwide” averages per piece for each class and subclass. (See app. I for further details on the methodologies we used for estimating the effects of mail volume losses on the Service’s revenue, costs, and rates.) Although we have analyzed the risk to future mail volumes and rates assuming a change in the Statutes and a resulting increase in “direct” competition from private firms, a variety of other factors could affect the demand for mail services, operating costs, and postage rates. The effects of these factors on the Service are difficult to predict and measure; they involve unknowns regarding new communications technology—a form of “indirect” competition—and future decisions by the Service, its competitors, and postal customers. In particular, the Service could be affected significantly in the future by losses of mail volume due to indirect competition, i.e., the diversion of correspondence and business transactions to electronic forms of communication. 
On the basis of the Service’s analysis of its competition, these effects could be as great or greater than any impact that a change in the Private Express Statutes might have on the Service and its rates. According to the Postal Service, six of its seven “product lines” involve some form of mail delivery. A seventh product line—retail services—involves nondelivery activities. Of the six delivery products, all but standard parcels are subject to competition from some form of electronic communication as well as private message and package delivery firms. As noted earlier, private carriers already dominate the expedited and parcel delivery markets. Table 4.4 summarizes the Service’s assessment of the competition for the six delivery product lines and the amount of mail volume potentially at risk. At our request, Price Waterhouse used its model to estimate the effects on the basic letter rate over a 10-year period assuming the mail volume losses shown above were to occur from 1996 to 2005. According to the Price Waterhouse model, if the above estimates of possible mail volume losses occur, the current 32-cent First-Class stamp (1 ounce or less) would need to increase from an estimated baseline rate of 41 cents to 46 cents in 2005. (See fig. 4.7) Recognizing the possibility that its future mail volumes could decline, the Service is taking steps to improve its competitiveness. Many of the Service’s recent actions could have the effect of reducing the effects of greater competition from electronic communication and private delivery firms. Examples of several of these actions are described below. • Under Postmaster General Marvin Runyon’s leadership, the Service was restructured in 1992 and a major emphasis placed on improving customer service. The current national leadership includes recently recruited vice presidents from the private sector who head such functions as marketing, technology applications, facilities, finance, quality, and international business. 
• At the Service’s request, PRC recently recommended reclassification of most postage rates, in part to encourage mailers to do more barcoding of letter mail. This action, along with the Service’s deployment of additional automated mail barcode sorting equipment, is expected to reduce the Service’s mail processing and delivery costs, thereby helping the Service to set and maintain more competitive postage rates for most of its mail classes and thereby maintain overall mail volume. • The Service’s business plans focus increased attention and resources on those markets that have strong growth potential and hold the promise of strong financial returns to the Service. For example, as we discussed in our recent report, the international mail markets offer strong growth potential. The Service established a new international mail unit and introduced new international services to become more competitive in this arena. More recently, the Service began testing a new structure and process for improving its Priority Mail delivery performance. • As detailed in our recent report, the Service has a top-down, corporatewide initiative under way to focus all employees and management on improving service quality and customer satisfaction. If successful, this initiative could lead to improvements in the Service’s on-time delivery performance and help it to compete more successfully with private delivery firms. We cannot predict the outcome of these initiatives or changes in the Service’s future mail volumes. Although the Service faces strong and perhaps unprecedented competition, historically it has experienced a steady growth in overall mail volume. This growth has occurred despite (1) the threats posed by new communications technology, which has replaced some traditional mail services; and (2) regulatory changes allowing private companies to compete for domestic overnight deliveries and all international mail services. 
Although the Service has lost mail volume in certain classes, those losses have been offset by increases in other classes. For example, the Service reported losing 35 percent of its business-to-business First-Class mail since 1988 through diversion to electronic communications. Over this same period, however (from 1988 to 1995), overall mail volume grew more than 12 percent. For particular letter mail classes and subclasses, the average annual rate of growth was as follows: for Priority Mail, 14 percent; and for all First-Class mail (except Priority) and third-class mail, about 1.9 percent. As previously stated, the Service has lost most of its share of the expedited mail and package markets despite recent increases in Priority Mail and fourth-class mail volumes. Overall, however, mail volume has grown dramatically over the past 15 years, as shown in figure 4.8. The Postal Service recognizes that despite its efforts to become more competitive, it is constrained by various laws and regulations that limit its ability to compete successfully with private sector firms. In particular, the Service’s ability to control its operating costs and competitively set postage rates is limited under the 1970 Act. Under the 1970 Act, the Service must allocate costs and set rates in accordance with criteria that are tailored largely around its public service mission rather than those factors that tend to drive price and cost decisions in a competitive environment. Similarly, the 1970 Act contains criteria that the Service must follow in providing universal access to postal services through local post offices. The Service’s competitors, on the other hand, have greater flexibility in determining when and where it is most efficient and profitable to offer their services. Further, under the current ratemaking process, the Service believes that it lacks the flexibility to set or adjust postage rates quickly for services that must compete with those of private delivery firms. 
The Service and various study groups have said that the cycle time to implement new or adjusted rates can take up to 10 months and is a barrier to competition that has resulted in lost mail volume and revenue. The Service also may be unable to adjust its work force, in the short run at least, to reduce operating costs commensurate with any future decline in mail volumes. The Service’s approximately 600,000 craft employees who collect, sort, and deliver mail generally have job protection through union contracts. This could complicate and delay significant reductions of the postal work force and labor costs to offset the effects of any quick downturn in mail volume. However, the Service has been able to make some reductions in the career work force. For example, as part of a long-range automation plan initiated by former Postmaster General Anthony Frank, the Service reduced the career work force from about 774,000 to 718,000 career employees, or about 7 percent, between May 1989 and August 1992 through reduced hiring. Postmaster General Runyon reduced the career work force by an additional 7.1 percent, from about 718,000 to 667,000 career employees, from August 1992 to April 1993 by offering employees monetary incentives for early retirement. In total, the Service reduced the career work force by almost 14 percent in about 4 years. Since April 1993, however, the Service’s career work force has grown, by approximately 11 percent, to about 740,000 career employees in November 1995. Overall, because of increased numbers of employees, higher wages and benefits, and growth in mail volumes, the Service has been unable to stop the growth of its labor costs. These costs accounted for the vast majority (more than 80 percent) of the Service’s total operating expenses in 1970 and in 1995. This trend has continued even though the Service has invested or plans to invest more than $5 billion in automation equipment since the early 1980s to reduce labor costs.
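The reported work force changes can be cross-checked with simple percentage arithmetic (career employees in thousands):

```python
# Cross-check of the career work force figures cited above (figures in
# thousands of career employees, as reported).
def pct_change(before, after):
    """Percentage change from before to after, rounded to one decimal."""
    return round((after - before) / before * 100, 1)

print(pct_change(774, 718))  # -7.2  (May 1989 to August 1992)
print(pct_change(718, 667))  # -7.1  (August 1992 to April 1993)
print(pct_change(774, 667))  # -13.8 (almost 14 percent in total)
print(pct_change(667, 740))  # 10.9  (growth since April 1993, about 11 percent)
```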
Many postal administrations around the world have mail monopolies to help meet universal letter delivery and other public service obligations, much like the U.S. Postal Service. However, unlike the Service, many of the postal administrations that we reviewed have made major legislative and policy changes in the past 15 years to give them greater freedom to operate like a private business. Some governments have narrowed their letter mail monopolies and one government, Sweden, has eliminated the letter mail monopoly. However, all postal administrations we reviewed continue to make certain postal services readily accessible to all citizens. As part of our review, we obtained information on the postal monopolies in eight foreign countries: Australia, Canada, France, Germany, the Netherlands, New Zealand, Sweden, and the United Kingdom. Postal administrations in these eight countries were described in a recent Price Waterhouse report as among the most “progressive postal administrations.” Most of the eight have been reformed in the past 15 years to change their structure and operations and give them greater freedom from governmental control. The information in this report concerning the postal laws of other countries does not reflect our independent analysis of those laws; rather, it rests primarily on the views and analysis provided to us by officials of foreign postal administrations and other secondary sources. We previously reported that the postal reform experiences of these countries are relevant to postal reform in the United States. The scope of our work did not include an evaluation of the effectiveness of postal reforms in these countries. As we have reported, detailed comparisons of the Postal Service’s performance and specific practices with other postal administrations can be difficult because of size differences alone. For example, the Service is required to deliver to a larger geographic area than seven of the eight countries. 
Further, the Service has at least seven times the mail volume and at least twice the number of employees as any of the eight postal administrations. (See figs. 5.1 and 5.2.) The governments of many other countries provide some form of universal mail service and generally have restricted private delivery of certain mail to ensure the financial viability of postal administrations in those countries. By way of background, it may be helpful to summarize the historical development of foreign postal monopolies. In the mid-19th century, European governments developed “universal governmental postal service” to deliver mail to many homes and businesses and introduced uniform postal rates. These developments led to increased demand for international as well as domestic postal services. Since these governments had asserted a monopoly over postal service, there was no private mail system for international mail delivery. Instead, international mail was governed under bilateral agreements, which resulted in a complex set of rates calculated under different currencies, weights, and measures. To address this issue, postal administrations from 21 European nations and the United States agreed in 1874 to form a permanent international organization, the Universal Postal Union (UPU), to develop standard rules for exchanging international mail. As discussed in our earlier report, 189 countries (including the United States) participated in UPU in 1995. UPU now functions as a specialized agency of the United Nations and governs international postal services. Member countries have agreed to fulfill statutory universal service obligations on an international level by accepting mail from each other and delivering it to its final destination. In effect, each UPU member country has agreed to provide some form of universal mail delivery service for international mail to be delivered within its country.
In the eight countries we reviewed, the postal administrations provided certain services to their citizens at uniform rates before reform and continued to provide them following reform. However, the definition of universal mail service varies somewhat from country to country. Some of the countries provided the same level of service for urban and rural customers, while others had different service standards for urban and rural areas. For example, although Canada Post is required by law to maintain service that meets the needs of Canadian citizens, the service only needs to be similar for communities of the same size. Canadian citizens in very remote areas in the far north may receive mail delivery less frequently each week than those in some other areas of Canada. Similarly, some citizens in rural areas of Australia and New Zealand receive mail delivery less often each week. In Australia, the frequency of rural delivery is based on a system agreed on between Australia Post and the government that takes into account the cost of delivery and special needs for educational materials and medical supplies. In New Zealand, a written agreement between New Zealand Post and the government specifies the proportion of delivery points that may receive delivery less frequently. The legal basis of universal postal service also varies from country to country. Universal service requirements can be based on the country’s constitution, statutes, written agreements between the postal administration and the government, policies established by the government minister who oversees the postal administration, or a combination of these. The way requirements are specified—and the degree of specificity—also varies from one country to another. In some countries, changes in universal service practices have been controversial. 
For example, in New Zealand, New Zealand Post increased a long-standing rural delivery fee for service, paid by the addressee; this decision proved unpopular, and the fee was eliminated, effective April 1, 1995. In Canada, changes in universal service have provoked debate and led in some cases to further changes in policy. After Canada Post was incorporated in 1981, it started to close and consolidate some rural post offices in order to increase the number of those that were privately operated through franchises. This policy was controversial and, in February 1994, the government minister overseeing Canada Post announced that rural post offices should no longer be closed. Governments in seven of eight countries that we reviewed imposed restrictions on the delivery of certain mail by private firms. One country (Sweden) recently eliminated its restrictions altogether, while several of the other seven countries had reduced the restrictions after reforming their postal administrations. A variety of conditions led to the reform of postal administrations in other countries. However, according to the Price Waterhouse report mentioned above, a key reason for reform was an “increase in competition in the delivery and communications markets.” Further, some other countries found direct enforcement of mail monopolies to be difficult and used other means to achieve compliance with legal restrictions on private mail delivery. For example, like the Postal Service, Canada’s postal administration, Canada Post Corporation, primarily uses education and persuasion rather than legal action to get violators to comply with the Canadian law restricting the delivery of certain letter mail. In many of the other eight countries, the mail monopoly exists for reasons similar to those supporting the U.S. mail monopoly. 
For example, Canada’s mail monopoly has been justified on the grounds that the Canadian government was believed to be the only entity that could and would provide a postal service universal in scope. According to a recent study of Canada’s postal system, the private sector has been judged to be unwilling to make the large, expensive investment in infrastructure and commitments required to serve all areas, including outlying and low-density ones, with a full range of postal services at equitable prices. As a result of the monopoly, a single charge has been levied for basic nonlocal letter mail service in Canada since the nation was founded, regardless of the distance traveled or any complications associated with the route. Although Canada reformed its postal laws in 1981, Canada Post officials said that the monopoly, or “exclusive privilege,” continues to be justified on both economic and social grounds. In contrast with the United States, none of the eight countries have laws that give their postal administrations exclusive access to the mailbox. However, there may be certain limitations to mailbox access in some countries. For example, in Canada, if Canada Post owns the mailbox, it is locked and only Canada Post has access to it. In some rural areas, there are mailboxes that are grouped together at various crossroads and are locked for security reasons. This also applies to some centralized mailboxes in secure apartment buildings. Advertisers cannot have access to apartments that do not allow door-to-door solicitation. By comparison, mailboxes owned by customers in Canada are accessible to anyone, including advertisers. The postal monopoly is defined differently by individual postal administrations. However, a common practice among the eight countries we reviewed was to define the scope of the postal monopoly according to price, weight, urgency, or a combination of these factors. 
This is in contrast to the definition of a letter in this country, as defined by the Postal Service, where these measurable characteristics are not used except with regard to extremely urgent letters, for which the Service has suspended the Statutes. In the other eight countries, the postal monopoly generally is defined in terms of minimum dollar or weight limits for items that may be delivered by private firms. These restrictions are generally contained in legislation. In some countries, the definition of the postal monopoly is clarified further by regulations. (See table 5.1.) In the eight countries, common exclusions cover unaddressed advertising mail, intracompany mail, and outbound international mail. In particular, like the United States, all seven countries with a monopoly over letter mail excluded unaddressed advertising mail from their monopolies. Sweden was the only country of the eight to have eliminated its mail monopoly altogether. Sweden’s postal monopoly ended on January 1, 1993, when full competition was allowed for letter mail. The Swedish government, not the postal administration, has the obligation to provide universal mail service. The Swedish government currently contracts exclusively with Sweden Post to provide universal service. The government can extend this arrangement in the future to other competitors that are able to provide the entire service or parts of universal service. According to Sweden Post, as no competitors currently fulfill this condition, an extension appears unlikely in the immediate future. The elimination of the postal monopoly in Sweden occurred in a different context from that in the United States, where an estimated 82 percent of the U.S. Postal Service’s mail revenues are subject to the Private Express Statutes and implementing Service regulations. When the Swedish postal monopoly was in effect, it was much more narrowly construed. 
Sweden Post has estimated that before the monopoly was abolished, the revenues from business within the monopoly represented about 30 percent of Sweden Post’s total revenues. The monopoly applied to the regular transmission, for a fee, of sealed letters and open items containing personalized information. Postal monopolies in several countries have been narrowed in the years following postal reform. Although many other countries were reviewing the scope of mail monopolies at the time of our review, we identified several countries that reduced the scope of their mail monopolies in the wake of postal reforms. According to Robert Campbell’s study of Canada Post, mentioned earlier, protection under the monopoly was weakened as a result of developments following passage of the 1981 law that established Canada Post and defined the postal monopoly. When a number of utility companies and municipalities in Ontario began delivering bills themselves, claiming they were not letters, Canada Post proposed a changed definition of “letter” in July 1982. After considerable protest, Canada Post, businesses, and the government agreed on a mutually acceptable definition that was approved in May 1983. A letter was redefined to mean “one or more messages or information in any form.” New exemptions covered transmission of electronic mail and allowed utility company employees to deliver bills made up on the spot. The monopoly was relaxed for minor financial documents such as interbank transactions. Germany’s postal monopoly was narrowed in January 1995, when licenses were granted to private companies to deliver bulk advertising and printed matter weighing more than 250 grams (about 8.8 ounces). This lifted the monopoly over advertisements and bulk mail, which opened to competition about one-quarter of the estimated DM 3 billion ($2 billion U.S.) market in bulk printed material. 
In January 1996, the weight limit for granting competitors licenses to deliver direct mail was lowered again, to more than 100 grams (about 3.5 ounces). In New Zealand, the monopoly weight threshold was reduced from 500 grams (about 1.1 pounds) to 200 grams (about 7.1 ounces) in 1990, and the price threshold was reduced in phases from $1.25 NZ to 80 cents NZ by December 1991. In Australia, the monopoly price threshold was reduced in 1994 from 10 times the basic letter rate to 4 times the price, and the weight threshold was reduced from 500 grams to 250 grams. Other changes lifted the monopoly over outbound international mail, third-party carriage of intracompany mail, and carriage of bulk mail between cities. As competitive pressure continues to increase, further postal reform was being contemplated in some other countries, including additional steps to narrow or eliminate the postal monopoly. In 1995, various postal policy issues, including the postal monopoly and universal service, were under review in Australia, Canada, Germany, the Netherlands, and New Zealand. In Australia, a review started in 1995 by Parliament was focused primarily on public service obligations (PSOs) and postal performance but also included an evaluation of the postal monopoly. The narrowing of the mail monopoly in 1994 generated concern in Parliament over Australia Post’s ability to maintain services to rural and remote areas. Even before the changes to the postal monopoly became effective, the Country Mail Services Working Party (a subcommittee of the Australian government’s Primary Industries and Energy Committee) reviewed the provision of Australia Post’s PSOs. In February 1994, it recommended to the Minister for Communications and the Arts, who oversees postal matters, that Australia Post’s services to rural and remote communities should be protected through a review of PSOs once in the life of Parliament (i.e., every 3 years). This recommendation was accepted by the government. 
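The metric weight thresholds cited above convert to the customary units given in parentheses (1 avoirdupois ounce is about 28.35 grams; 1 pound is about 453.6 grams):

```python
# Conversion check for the monopoly weight thresholds discussed above.
GRAMS_PER_OUNCE = 28.3495   # avoirdupois ounce
GRAMS_PER_POUND = 453.592   # avoirdupois pound

def grams_to_ounces(g):
    return round(g / GRAMS_PER_OUNCE, 1)

def grams_to_pounds(g):
    return round(g / GRAMS_PER_POUND, 1)

print(grams_to_ounces(250))  # 8.8  (Germany, 1995 license threshold)
print(grams_to_ounces(100))  # 3.5  (Germany, 1996)
print(grams_to_pounds(500))  # 1.1  (New Zealand, pre-1990)
print(grams_to_ounces(200))  # 7.1  (New Zealand, 1990)
```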
As a result, the Minister requested that the Australian House of Representatives’ Standing Committee on Transport, Communications and Infrastructure review rural and remote letter delivery services, including “the effect of any further reduction in reserved services” on Australia Post’s performance of its public service obligations. In 1994 and 1995, 3.3 billion of the 4.1 billion pieces of mail were covered by the postal monopoly, accounting for 56 percent of Australia Post’s revenues. The Committee also was reviewing the need for, extent, and cost of public service obligations, as well as Australia Post’s ability to maintain or increase performance standards. The Committee had not yet reported back to the Minister when a federal election was called. Australia Post told us the review was overtaken by the federal election, and thus a report had not been issued. The Post said “no legislation is expected” from the 1995 and 1996 review but added that a similar review process may take place in the future. In 1994, when the monopoly was narrowed, the Australian government had announced its intention to further review the monopoly in 1996 and 1997. According to Price Waterhouse, the outcome of future governmental review is uncertain but is likely to continue the “gradual erosion of the postal monopoly.” In Canada, a comprehensive review of Canada Post was to be completed by July 1996 on behalf of the Canadian government. The review was to consider whether Canada Post’s letter mail monopoly should be “adjusted or discontinued.” In 1995, the mail monopoly covered about half of the mail stream in Canada. The review also was to identify the functions that Canada Post should continue to provide in the future and to examine postal rate setting, the financial position of Canada Post, and the social costs of its “public policy functions,” including how these costs should be allocated.
In its submission to the review commission, Canada Post restated its commitment to universal service, defended its postal monopoly as necessary to support universal service, and responded to concerns about the fairness of its postal ratesetting. The New Zealand government announced in November 1994 that it would introduce legislation to completely abolish the postal monopoly. Political pressures have held back legislative action, according to Price Waterhouse. New Zealand Post told us “. . . it remains Government policy to introduce legislation to remove the monopoly.” Although no final decision has been made, New Zealand Post officials said last year that they had shaped their business plans to expect an open, competitive environment. New Zealand Post supported complete elimination of the postal monopoly. “Unfortunately the Government has not had the Parliamentary support to be able to effect this change,” the Post said. New Zealand Post did not believe that monopoly protection was necessary. The Post explained: “Indeed, we believe that it represents a barrier to our achieving a truly market and customer focused positioning for our business. The ‘monopoly’ protection risks breeding a false sense of complacency and certainly impacts on public perceptions of our business. New Zealand Post has actively supported the Government Policy of completely deregulating the letter market.” The German postal administration has been implementing postal reform in stages, with privatization and deregulation planned for the near future. Under “Postreform I,” started in 1989, separate entities were established for postal administration, postal banking, and telecommunications. Under “Postreform II,” the German postal service was transformed on January 1, 1995, into a state-owned stock corporation. Under “Postreform III,” further deregulation and privatization are planned.
Deutsche Post declared in its 1994 annual report that “the company’s privatization will be completed when it goes public in 1998.” Elimination of the postal monopoly in Germany has been under consideration. The German postal minister outlined legislation to begin deregulating the postal market in 1998 and introduce full competition in 2003. In its 1994 annual report, Deutsche Post stated that it “accepts the idea of a gradual and calculable limitation of its reserved areas” provided that (1) liberalization is in line with the postal policy of other European Community states, (2) competition is on a level playing field, and (3) there is “realistic moderation” based on the burden of universal service and payments related to pensions that stem from the previous personnel statute. In the Netherlands, the government’s Department of Transport had a review underway in early 1996 regarding the scope of universal service obligations and the postal monopoly. The review was expected to be completed in 1997, according to the Dutch postal administration, PTT Post. About 57 percent of PTT Post’s sales in 1995 were derived from activities subject to competition. PTT Post strongly supported liberalization of postal monopoly restrictions, provided there was “national and international reciprocity and a level playing field for all suppliers of postal services.” Along with reform policies on universal service and postal monopolies, some countries have sold or are contemplating the transfer of some portion of ownership of postal administrations to the private sector. A majority share of the postal corporation in the Netherlands is owned by private parties. Canada passed legislation in 1993 authorizing the sale of up to 10-percent ownership of Canada Post to its employees, but this had not been implemented as of August 1996. The Dutch postal administration remains the only partially privatized postal administration in Europe. 
In addition to these developments in individual countries, the European Community has been considering the adoption of common limits for the postal monopoly as part of an effort to achieve harmonization in postal policy among member nations. The European Commission began a comprehensive review of public policy towards postal services in 1988, which resulted in the 1992 publication of the “Green Paper.” In this document, the European Commission expressed the view that the universal postal service required throughout the Community (1) should be affordable to all, of good quality, and readily available; and (2) needed to be defined. Additionally, the Commission said that “this universal service objective can justify the establishment of a set of reserved services (subject to the decision of each Member State that this was necessary), which would help to ensure the financial viability of the universal service network.” It also said that “The list of services that could be included in this set of reserved services should be established at Community level.” In other words, the Commission supported a system that would allow member states to retain a limited postal monopoly, where necessary, to give postal administrations sufficient economic resources to guarantee universal service. Under the Commission’s subsequent draft directive, Member States “shall ensure that users enjoy the right to a universal service involving the provision of a good-quality postal service for all users at all points on their territory at affordable prices.
To that end, Member States shall take steps to ensure that the density of the points of contact, and of the points where mail is collected, take account of the needs of users.” The directive defines universal service as including “every working day, and not less than five days a week save in exceptional circumstances or geographical conditions: one collection from the clearance points, one door-to-door delivery for every natural or legal person.” “To the extent necessary to ensure the maintenance of the universal service, the services which may be reserved to the universal service provider(s) in each Member State are the collection, sorting, transport, and delivery of items of domestic [as opposed to international] correspondence whose price is less than five times the public tariff for an item of correspondence in the first weight step, provided that they weigh less than 350 grammes ...” The European Commission has told us “It is important to note that the draft directive does not oblige member states to maintain any monopoly in their postal sector, but allows them to do so within the limits of the reserved services set out in the draft directive, to the extent necessary to ensure the maintenance of the universal service.” The draft directive also provides that “direct mail” (mass advertising and marketing mail) and inbound international mail can also be reserved to the postal monopoly, “wherever their reservation is necessary for the financial equilibrium of the universal service provider(s)”, until December 31, 2000, at which time those services must be opened to competition unless the European Commission decides (by June 30, 1998) that the continuation of the monopoly in those areas is justified beyond that date. In addition, the draft directive provides that outbound international mail is excluded from the postal monopoly. 
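The reserved-area test quoted above amounts to a conjunction of a price cap and a weight cap. A minimal sketch, using a hypothetical basic tariff (the public tariff for the first weight step is not given in this report):

```python
# Sketch of the draft EC directive's reserved-area test: an item of
# domestic correspondence may be reserved to the universal service
# provider only if its price is below five times the public tariff for
# the first weight step AND it weighs less than 350 grams.
def may_be_reserved(price, weight_grams, basic_tariff):
    return price < 5 * basic_tariff and weight_grams < 350

# Hypothetical basic tariff of 0.80 (currency units, first weight step):
print(may_be_reserved(2.00, 100, basic_tariff=0.80))  # True
print(may_be_reserved(5.00, 100, basic_tariff=0.80))  # False: price cap exceeded
print(may_be_reserved(2.00, 400, basic_tariff=0.80))  # False: weight cap exceeded
```

Items failing either cap fall outside the permitted reserved area and would be open to competition, subject to the separate provisions for direct mail and inbound international mail described above.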
The draft directive also sets out minimum standards of quality of service (such as delivery times) to be met by the universal service providers in each member state. It provides for an overall review of the application of the directive to be conducted 3 years after it is adopted and at the latest by the first half of 2000. Discussions on the terms of the directive were taking place this year within the institutions of the European Community. In response to us, the European Commission said “it is hoped that the directive will be adopted in early 1997.” As a result of reform initiatives, some other postal administrations were set up to operate more competitively than the U.S. Postal Service. These postal administrations have been granted and were using greater commercial freedom to meet growing competition from electronic communications alternatives and private delivery firms. Among the actions that some have taken were downsizing the work force; increasing productivity; and taking initiatives to compete in electronic mail, facsimile, electronic bill payment, and other electronic communications services. In addition, many foreign postal services have used their commercial freedom to acquire subsidiaries, participate in joint ventures, and contract out some functions that the U.S. Postal Service handles itself. For example, according to Price Waterhouse, a number of foreign postal services operated a majority of their post offices through private franchises. Foreign postal administrations reported that they have used greater commercial freedom to become more competitive and provide more efficient and responsive postal services to the customer. At the same time, public concerns have surfaced in the wake of postal reform, notably regarding the continued provision of universal service, that have led to independent reviews and reexaminations of some foreign postal administrations. 
Despite these concerns, foreign postal administrations were continuing to make changes to enable them to respond to even stronger competition in the future. Canada Post, for example, stated: “Confronted with the reality of increasingly aggressive competition, higher service expectations from customers and an explosion of new communication technologies, Canada Post moved decisively to refocus its attention from its operations to its customers . . . we must face the fact that change is our only real constant.” Similarly, in Australia, where the postal monopoly was narrowed in December 1994, Australia Post Chairman Maurice Williams wrote: “...it is clear that direct and indirect competition will continue to increase,” and he said the Post will function in “an increasingly competitive environment where rapid technological changes are taking place.” For its part, Sweden Post stated: “Sweden Post is in the midst of an ongoing—and accelerating—process of transformation. This is based on a progressive technological shift and a changed competitive situation. New information technology offers a path to a range of new opportunities for Sweden Post, but it also means that the Company will, by degrees, have to reshape its organization and working practices. Through a combination of new technology and a local presence—in the form of the post office and mail delivery networks—Sweden Post can continue to deliver messages, goods and payments for the foreseeable future.” We requested comments on a draft of volumes I and II of this report from the Postal Service and the Postal Rate Commission. The Postal Service responded in a letter, with enclosure, dated August 29, 1996. Because the enclosure to the letter raised technical matters related to the content of volume II, the letter with the enclosure is reprinted in appendix II of volume II, and our comments on those technical matters are provided below. The letter is also reprinted in appendix I of volume I, and our comments on the letter itself are provided on pages 34-36 of volume I. 
The Commission did not provide written comments. However, Commission officials suggested several changes to volumes I and II of the draft to improve technical accuracy and completeness of the report. We incorporated those changes where appropriate. In the letter portion of its comments, the Postal Service said that our report presents credible information on the purpose and application of the Private Express Statutes and related regulations. However, the Service expressed concern that we had ventured into speculating about the possible financial effects of eliminating or substantially relaxing the statutes. The Service said that it is difficult to forecast the Service’s financial situation 5 or 10 years into the future and that using different assumptions produces different results. Our detailed response to the Service’s concerns in this regard is contained in volume I. In the enclosure to its letter, the Postal Service reiterated its concern regarding estimates of financial conditions. Thus, we believe it is important for us to reiterate here that we did not attempt to make long-range forecasts and predict future financial effects of changing the Statutes. Rather, our purpose was to show the sensitivity of the Service’s revenue, costs, and postage rates to various “what if” assumptions about changes in mail volume by class and subclass. The Service said that we used the price of a First-Class stamp as a proxy for its future financial position. It also said that this measure does not consider certain future revenue and expense requirements such as recovery of prior-year losses and funding of future retirement and workers’ compensation costs. Further, the Service said that we should have considered what happens to the price of classes of mail other than First-Class because volume losses across all mail classes would require it to lay off at least 100,000 employees. 
Our report presents a number of reasons why we chose the First-Class stamp price as the major, but not exclusive, focus for examining the price sensitivity of volume changes (e.g., see pp. 59, 61, and 64). A key consideration was that the First-Class stamp pays for about 70 percent of the Service’s overhead costs and, as such, reductions in its volume would substantially impact the revenue available to pay for these costs. With regard to the completeness of our estimates, the baseline postage rates used by the Commission in our sensitivity analysis are the same rates that the Service put into effect in January 1995 for all classes and subclasses of mail. These rates include all of the revenue and expense items cited by the Service in its comments — including estimates for recovery of prior-year losses, employee retirement, workers’ compensation, and other revenue and expense requirements used by the Service and the Commission. In addition, the Price Waterhouse estimates in our report include all of the same revenue and expense requirements used by the Commission, except for recovery of prior-year losses through higher future revenue and postage rates. According to Price Waterhouse, it had made estimates at different times for the Service that included and excluded the prior-year loss recovery. When a revenue requirement to recover such losses is included in the Price Waterhouse estimates, the baseline postage rates are higher and other postage rates derived from the model also are higher. In this regard, it should be noted that the effects of the Service’s decisions to recover prior-year losses and provide for other such expenses through future rate increases would occur notwithstanding any change in the Statutes. We considered not only the First-Class stamp price but all mail classes and postage rates. 
However, in light of the Service’s concerns about how other classes of mail and the Service’s total revenue might be affected, we included additional information in volume I (table 3) and volume II on the effects on various classes and subclasses of mail if the Service were to lose mail volume in the future. With respect to the Service’s comments that at least 100,000 employees would be laid off, we did not attempt to measure the impact of revenue losses on future postal employment. Such impact could be affected not only by possible mail volume losses but also by the time period over which such losses might be sustained and the Service’s ability to adjust the postal work force commensurate with any reduced mail volumes and workload. We agree, however, that a 25-percent loss of First-Class mail volume could have a significant impact on the Service. The Service also said that our estimates did not include the effect of price changes on volume—commonly known as price elasticity. However, the estimates made by both the Commission and Price Waterhouse for our report include price elasticity for each mail class and subclass. In response to the Service’s comments, we have further explained the elasticity rates used for our report, in appendix I, volume II. The Service also said that if the Statutes are eliminated altogether and the Service and its competitors operate in a nonmonopoly environment, elasticity rates could change significantly. We agree with this basic proposition, but we did not make any assumptions about how or to what extent—if at all—Congress might change the Statutes or what elasticity rates might be in the future, both of which are unknown. For the purposes of our report, the Commission and Price Waterhouse used the same elasticity rates that were used by (1) the Service and the Commission for recent rate making purposes and (2) the Service and Price Waterhouse for financial forecasting purposes. 
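The sensitivity logic described above (a fixed revenue requirement spread over a smaller mail volume, with demand responding to the resulting rate change through price elasticity) can be sketched as a toy calculation. The baseline volume, stamp price, volume-loss share, and elasticity below are hypothetical illustrations, not the Commission's or Price Waterhouse's actual figures or model.

```python
# Illustrative sensitivity sketch (not the GAO, Commission, or Price
# Waterhouse model): how a postulated First-Class volume loss interacts
# with price elasticity when a fixed revenue requirement must be
# recovered through the stamp price. All numbers are hypothetical.

def breakeven_rate(base_volume, base_rate, volume_loss_share, elasticity,
                   iterations=50):
    """Iterate to the stamp rate that recovers the same revenue after a
    postulated volume loss, letting demand respond to each rate change."""
    required_revenue = base_volume * base_rate   # revenue to be recovered
    volume = base_volume * (1 - volume_loss_share)
    rate = base_rate
    for _ in range(iterations):
        new_rate = required_revenue / volume
        # constant-elasticity demand response to the price change
        volume *= (new_rate / rate) ** elasticity
        rate = new_rate
    return rate, volume

# Hypothetical baseline: 100 billion pieces at $0.32, a 25-percent volume
# loss, and an elasticity of -0.2 (inelastic demand).
rate, volume = breakeven_rate(100e9, 0.32, 0.25, -0.2)
```

The iteration converges because demand is modeled as price-inelastic (the absolute value of elasticity is below 1), so each rate increase erodes volume by proportionally less than the increase itself; with an elasticity of zero the answer reduces to simply dividing the revenue requirement by the smaller volume.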
In examining the assumptions underlying the estimates in our report, the Postal Service analyzed three scenarios. Its approach to developing scenarios was similar to ours. However, the Service used different baseline estimates, assumed much larger volume losses, and included higher elasticity rates than those used by the Commission and Price Waterhouse when examining possible changes in postage rates for our report. For example, the Service assumed that it would lose most presorted First-Class mail to private delivery firms by 2005. Because of these differences, the Service’s estimates of the impact on the First-Class, 1-ounce postage rate, which are presented on pages 2 and 3 of the enclosure to its letter, are greater than the estimates in our report. The Postal Service also estimated more severe effects on its revenue than we did—particularly for Priority Mail. In essence, it suggested that, because of the competitive environment, it could lose 85 percent of Priority Mail volume and calculated the effect of an immediate 85 percent loss. While we made no forecasts of future mail volume losses, it appears to us that the Service’s forecast is a worst-case outlook for its Priority Mail business. Even now, a major portion of Priority Mail is not protected by the Statutes and Priority Mail volume is growing. The Service also presents its view about the behavior of presort and alternate delivery firms if the Statutes were removed or relaxed. Our assessment was that the relative risk of third-class mail volume loss is low compared to that of Priority and First-Class Mail. Our assessment was based primarily on existing private sector capability as well as interviews with mailers and carriers. A key factor in our assessment was the Service’s share of the expedited and advertising (third-class) mail markets, i.e., an estimated 15 percent of the former and 96 percent of the latter. 
We have no basis to comment on the Service’s view that a substantial industry that combines mail preparation and delivery capability could quickly emerge if the Statutes are relaxed. Our report recognized that the Service believes this could occur, and we have revised volume I to further emphasize the Service’s view of this possibility.
Pursuant to a congressional request, GAO reviewed the restrictions in federal civil and criminal law on private letter delivery, focusing on: (1) the Postal Service's experience in administering and enforcing the private express statutes since 1970; (2) the growth and development of private message and package delivery companies since 1970; (3) the possible effects of changing private letter delivery restrictions on the Service's mail volume, revenues, costs, and postal rates; and (4) other countries' postal reform efforts, particularly regarding private letter delivery. GAO found that: (1) supporters believe that the private express statutes are necessary to protect the Postal Service's revenue base and to ensure that the Service provides universal service and meets other public service obligations; (2) private carriers have challenged the assumption that a monopoly results in lower postage rates and less service disruption; (3) because of outside pressure, the Service has suspended the statutes for extremely urgent letters and has stopped direct enforcement of the statutes due to the difficulty in enforcing the statutes; (4) in 1971, the Service faced little competition, but by 1994, the Service had only a 16 percent share of the expedited mail and package delivery market; (5) the Service's volume and revenues for protected mail classes have increased since 1970, but volumes and revenues for classes subject to competition have shown little growth; (6) despite the rapid increase in alternative mail delivery systems since 1970, the Service delivers the vast majority of advertising and periodicals; (7) if the statutes are changed or repealed, the Service's loss of volumes and revenues would vary among mail classes, but Priority Mail would be at the greatest risk; (8) postage rates would be affected by the loss of first-class mail, but the effects of statutory changes on the Postal Service's mail volumes are difficult to estimate; (9) the Service has taken actions 
to become more competitive, but various laws and regulations limit its competitiveness; and (10) some other countries have narrowed their letter mail monopolies as part of their overall postal reform efforts and have given their postal administrations greater flexibility in providing universal mail service.
Nuclear research reactors are used for training and research purposes throughout the world. Research reactors are generally smaller than nuclear power reactors, ranging in size from less than 1 to 250 megawatts compared with 3,000 megawatts generated by a typical power reactor. In addition, unlike power reactors, many research reactors use HEU fuel instead of LEU in order to produce the appropriate conditions in the reactor cores for conducting a wide variety of research. DOE has identified 161 operating research reactors that were designed to use HEU fuel and has included 105 of them in the reactor conversion program. The research reactors included in the program are spread out among the United States and 40 other countries, including Canada, France, Germany, and Russia (see fig. 1). In addition to the 105 research reactors covered under the reactor conversion program, DOE has targeted six medical isotope producers that use HEU as an ingredient in their production processes, including four large medical isotope producers located in Belgium, Canada, the Netherlands, and South Africa. For a variety of reasons, DOE has excluded from its reactor conversion program 56 research reactors that use HEU fuel, including 9 in the United States. Some of the reactors are used for military or other purposes, such as space propulsion, that require HEU. Others are located in countries such as China that so far have not cooperated with the United States on converting their reactors to LEU. Finally, the time and costs associated with developing LEU fuel for some of the reactors may exceed their expected lifetime and usefulness. The United States has historically provided nuclear technology to foreign countries in exchange for a commitment not to develop nuclear weapons. Starting in 1953, the Atoms for Peace program supplied research reactors and the fuel needed to operate them to countries around the world. 
The research reactors supplied by the Atoms for Peace program initially used LEU fuel, but many countries gradually switched from LEU to HEU in order to conduct more advanced research. In addition, HEU fuel could remain in the reactor core longer and was less expensive than LEU fuel. By the late 1970s, most research reactors were using HEU fuel and the United States was exporting about 700 kilograms of HEU a year to foreign research reactors. Like the United States, the Soviet Union also exported research reactors and the HEU fuel to operate them to other countries. In order to achieve the program’s objective of reducing the use of HEU in civilian research reactors, Argonne is developing new LEU fuels in cooperation with counterparts in other countries, including Argentina, France, and Russia. Developing LEU fuels involves testing fuel samples in research reactors to determine how the fuels behave under normal operating conditions. Fuel manufacturers and reactor operators around the world participate in the program by manufacturing and testing LEU fuels. Owners of foreign research reactors fund conversion of their reactors from HEU to LEU. In 1993, Argonne expanded the reactor conversion program to include cooperation with Russia on the conversion of Russian-supplied research reactors to LEU fuel. The Soviet Union had independently initiated a program in 1978 to reduce the enrichment of HEU fuel in research reactors but suspended the program in 1989 due to lack of funding. Russian-supplied research reactors use fuels manufactured in Russia that are not interchangeable with fuels used by U.S.-supplied research reactors. Therefore, DOE’s reactor conversion program differentiates between U.S.-supplied and Russian-supplied research reactors. Since the reactor conversion program’s inception in 1978, 39 of the 105 research reactors included in the program have either converted or are in the process of converting to LEU fuel. (See app. 
II for a list of converted research reactors.) Of the remaining 66 research reactors that still use HEU fuel, 35 can convert using currently available LEU fuels but have not done so, and 31 cannot convert to any currently available LEU fuels and still require HEU in order to conduct the research for which they were designed (see fig. 2). A research reactor can begin the conversion process after a suitable LEU fuel is developed and available commercially. The decision to convert from HEU to LEU also depends on research reactor owners having the necessary financial resources, including for the purchase of new fuel. In the United States, NRC regulations require that research reactors under its jurisdiction, including reactors operated by universities, convert to LEU fuel when an LEU fuel that can be used to replace HEU fuel has been developed and when federal funding is made available for the conversion. The conversion process begins with analyses to determine whether the reactor can safely convert and the impact of conversion on the reactor’s performance. After the analyses are completed and regulatory approval for conversion is obtained, the operator can remove the HEU from the reactor and replace it with the new LEU fuel. The HEU fuel can be disposed of once it has been removed from the reactor core and has cooled. According to Argonne’s analysis, conversion to LEU fuel is technically feasible for 35 of the 66 research reactors worldwide that still use HEU fuel. However, only 4 of the reactors—3 foreign reactors that use U.S.-origin HEU and 1 Russian-supplied reactor—currently have plans to convert. Eight U.S. research reactors, including 6 university reactors, could convert to LEU fuel, but according to DOE officials, DOE has not provided the funding to convert them. In addition, DOE has not removed HEU fuel from a university research reactor that has been storing HEU since it converted to LEU in 2000. 
According to Argonne officials, of the 20 foreign research reactors that currently use U.S.-origin HEU fuel, 14 do not have plans to convert to LEU because they generally have a sufficient supply of HEU and either do not want to incur the additional cost of conversion or do not have the necessary funding. Finally, since DOE’s reactor conversion program initiated cooperation with Russia in 1993, no research reactors that use HEU fuel supplied by Russia have converted. According to Argonne officials, only 1 of 7 Russian-supplied research reactors that could use LEU fuel is scheduled to convert. They said that 5 other Russian-supplied reactors are likely to convert to LEU fuels that are currently available or are expected to become available within the next year. In the United States, there are 6 university research reactors and 2 other research reactors that could convert to LEU fuel but still use HEU fuel. Although DOE has funded the conversion of 11 university research reactors to LEU fuel, the last university reactor converted in 2000. DOE officials said DOE has not provided the funding to convert the 6 remaining U.S. university reactors. DOE recently added 2 other domestic reactors to the reactor conversion program, and neither of these reactors currently has plans to convert to LEU, also because DOE has not provided the necessary funding. (See table 1 for a list of the 8 reactors.) In addition, the university research reactor that converted to LEU in 2000 is still storing HEU fuel because DOE has not removed it. Because the reactor now uses LEU fuel and has no need for HEU, the reactor operator told us that he is eager to return the HEU to DOE for long-term storage and disposal. DOE has a separate program that supports university research reactors, including provision of DOE-owned fuel, and funds their conversion to LEU and removal of spent fuel. 
According to the DOE official in charge of the university reactor support program, the program has limited funding, and requests for additional funding to support conversion have not been approved by the Office of Management and Budget. Furthermore, the university reactor support program did not receive additional funding to remove HEU fuel from the research reactor that converted to LEU in 2000 until fiscal year 2004, after a group of domestic research reactor operators successfully lobbied Congress to add $2.5 million to the program’s budget to pay for the removal of spent fuel from the reactors. Officials at NRC, which regulates the 6 university reactors, told us that they consider the conversion of the reactors to LEU, the timely removal of HEU fuel after conversion, and the removal of HEU from the reactor that converted to LEU in 2000 as a security enhancement and one of their priorities. NRC officials said that converting the 6 reactors is technically feasible, that the delay in converting them is purely a matter of funding, and that DOE should expedite the conversions. However, DOE officials said that DOE had not made the conversion of these reactors a priority. Furthermore, while operators at all 6 universities told us they are willing to convert to LEU fuel, they said it is not a high priority because they do not consider their HEU fuel to be a likely target for theft. For example, one reactor operator explained that the reactor is structured in such a way that the HEU is located inside a concrete enclosure that even experienced reactor staff need almost 2 days to access. These 6 reactors use only a small amount of HEU fuel—less than a kilogram per year, which is not enough to make a nuclear weapon. In contrast, there are other research reactors included in DOE’s reactor conversion program that are larger than the 6 university reactors and use tens of kilograms per year. 
Nevertheless, operators of the 6 university research reactors said they would convert to LEU when DOE provides funding. Furthermore, the DOE official in charge of the university reactor support program said that converting domestic university reactors is an issue of U.S. nonproliferation policy. He said that converting domestic reactors to LEU would support U.S. efforts to influence foreign reactors to convert to LEU in accordance with the U.S. nonproliferation policy to reduce the use of HEU in civilian research reactors worldwide. Although they did not consider conversion a priority from a security perspective, two of the university reactor directors we spoke with recognized the importance of converting university reactors to LEU as part of U.S. nonproliferation policy. According to DOE officials, conversion for each reactor is projected to cost between $5 million and $10 million. However, a project engineer at DOE’s Idaho National Engineering and Environmental Laboratory who tracks DOE expenditures on conversions of U.S. university reactors had originally told us that conversion would cost between $2 million and $4 million per reactor, depending on the type of reactor. DOE could not provide documentation to support either of the estimates. DOE officials said that conversion costs for 4 of the university reactors are higher because their fuel is no longer manufactured in the United States and must be purchased in France. Other than funding, there are no significant obstacles to converting the 6 university reactors to LEU. Based on our visits to 3 converted university research reactors and interviews with Argonne officials and the operators of the 6 remaining university reactors, converting to LEU does not reduce the performance of the reactors to the point that they cannot be used to conduct research and train students effectively. Operators at 5 of the 6 university reactors still using HEU fuel told us they expected performance to be adequate after conversion. 
In addition, operators of converted reactors told us that using LEU instead of HEU reduced security concerns and had a minimal impact on the cost of operating the reactors. Argonne officials said that one of their objectives when providing technical assistance to convert reactors to LEU is to complete the process with only minimal effects on performance and operating costs. In fact, two reactor operators (one in Rhode Island and one in Massachusetts) told us that performance at their reactors had improved as a result of conversion. According to Argonne officials, 2 other reactors in the United States (the DOE NRAD and General Electric NTR reactors) could convert to LEU but are not currently planning to do so. The officials said they recently added these 2 reactors to the scope of the reactor conversion program so that the program would be comprehensive in its coverage of civilian research reactors that use HEU. The NRAD research reactor is a DOE reactor, and DOE would have to fund the purchase of new LEU fuel if a decision were made to convert the reactor. According to a DOE official responsible for the reactor, the budget for the NRAD reactor is limited, and purchasing new LEU fuel to convert the reactor would take funding away from other activities at the facility where the reactor is located. The DOE official considers the conversion of this reactor a lower priority because it has a sufficient supply of HEU fuel to last for the life of the reactor and because the facility has other nuclear material that would be more attractive to terrorists than the HEU fuel in the reactor. The General Electric NTR is a privately owned reactor and is also not required to convert until DOE provides funding. Fourteen of the 20 foreign research reactors that currently use U.S.-origin HEU fuel do not have plans to convert to LEU. 
According to Argonne officials, these reactors generally have a supply of HEU sufficient to last many years (in some cases for the life of the reactor) and either do not want to incur the additional cost of conversion or do not have the necessary funding. Three of the reactors are planning to convert to LEU, and 3 others currently plan to shut down (or, in the case of 2 reactors, convert to LEU fuel if they do not shut down). See table 2 for a list of the 20 reactors. Some of the foreign research reactors would like to convert but do not have the necessary funding. For example, the operator of a research reactor in Jamaica told us that converting to LEU would improve the reactor performance but that purchasing LEU fuel for the reactor would cost $1.5 million, which is more than the reactor operator can afford. Therefore, the reactor operator is planning to continue using its current supply of HEU, which will last possibly 20 years. Similarly, according to Argonne officials, the reactor operator in Mexico would be willing to convert to LEU but does not have the necessary funding. While funding may not be an issue for other foreign reactors, many of them are designed to operate on a small amount of fuel meant to last for the life of the reactor. Converting to LEU would require the disposal of the fuel that the reactor operator had already purchased and is still usable. According to Argonne officials, operators of certain reactors in France, Japan, the Netherlands, and the United Kingdom do not have plans to convert because the reactors have lifetime cores that do not need to be replaced. To support the objective of the reactor conversion program to reduce and eventually eliminate the use of HEU in research reactors, the United States has implemented policies designed to influence foreign research reactors to convert to LEU. 
For example, DOE’s Foreign Research Reactor Spent Nuclear Fuel Acceptance program provides foreign reactors that use HEU fuel of U.S. origin the opportunity to return their spent fuel to the United States if they agree to convert their reactors to LEU fuel. In addition, the Energy Policy Act of 1992 authorizes NRC to approve the export of HEU to foreign research reactors only if the recipients agree to convert the reactors once a suitable LEU fuel is developed. Since there are limited suppliers of HEU fuel and few options for disposing of spent fuel, the U.S. policies in support of the reactor conversion program have been effective in influencing some research reactors to convert to LEU. In particular, of the 20 foreign reactors that can convert to LEU but are still using HEU, the 2 that use the greatest amount of HEU per year are planning to convert by 2006. One research reactor in the Netherlands (HFR Petten) formally agreed with the United States to convert to LEU in order to continue receiving U.S.-origin HEU fuel until conversion could take place and to ship spent fuel back to the United States. The U.S. policies in support of conversion were effective in influencing the reactor operator because the reactor uses 38 kilograms of HEU fuel per year and regularly needs to obtain new HEU fuel and dispose of spent fuel. Similarly, the FRJ-2 reactor in Germany has an agreement with DOE to convert to LEU fuel as a condition of returning spent fuel to the United States. However, U.S. policies in support of the reactor conversion program do not influence foreign reactors using so little HEU that they can operate for many years without replacing their fuel or disposing of spent fuel. While Argonne provides technical assistance for conversion, current DOE policy precludes purchasing new LEU fuel for foreign reactors that use U.S.-origin HEU fuel. 
Under this policy, purchasing new LEU fuel—which, according to a DOE project engineer, is the main cost of conversion—is the responsibility of the reactor operator. According to a DOE official, DOE has paid for new LEU fuel only once, in Romania, in exchange for the return of Russian-origin HEU fuel to Russia. DOE spent $4 million to purchase LEU fuel for the Romanian reactor, which is still only partially converted and requires more LEU fuel before conversion is complete. DOE officials said that current DOE policy allows purchasing LEU fuel for research reactors that use Russian-origin HEU fuel in exchange for returning the HEU to Russia. However, DOE does not have a similar policy for research reactors that use U.S.-origin HEU fuel. DOE officials said they are considering revising this policy to allow purchasing LEU fuel for U.S.-supplied research reactors. According to Argonne officials, 7 Russian-supplied research reactors, all located outside Russia, could convert using LEU fuels that are currently available or are expected to become available within the next year. However, only 1 of the 7 reactors, located in Ukraine, is scheduled to convert. (See table 3 for a list of the 7 reactors.) The Ukrainian reactor operators told us that they expect to begin conversion to LEU at the end of 2004 at the earliest and that they are currently analyzing the safety of converting to LEU with the assistance of DOE’s reactor conversion program. Unlike many of the U.S.-supplied research reactors that are not planning to convert because they have an adequate supply of HEU, the Ukrainian reactor is running out of HEU fuel and will have to place an order for new fuel by the end of 2004. 
The reactor operators told us they support conversion to LEU fuel because the negative impact on the reactor’s performance will be tolerable, the operating costs will be about the same after conversion to LEU, and converting to LEU would eliminate the threat that HEU could be stolen from the facility. The reactor operators are scheduled to complete the safety analysis in November 2004 and then submit an application to obtain approval for conversion from the Ukrainian nuclear regulatory authority. However, Argonne officials said the schedule for converting the Ukrainian reactor is ambitious and conversion of the reactor could be delayed. According to Argonne officials, if the Ukrainian reactor does not get regulatory approval for conversion to LEU before it runs out of fuel, it may decide to place an order with the Russian supplier for more HEU fuel instead.

According to DOE officials, 5 other Russian-supplied reactors that can use LEU fuel are likely to convert. Conversion of the reactors in Bulgaria and Libya depends on the commercialization of the Russian-origin LEU fuel, which DOE expects to take place in 2004. DOE has also engaged in discussions on conversion with the operators of the research reactor in Vietnam. According to Argonne officials, conversion of the research reactor in Hungary requires at least several more years of analysis. In particular, the reactor must test an LEU fuel sample before the Hungarian government approves conversion, and this process will take several years. Argonne officials said the research reactor in Germany has a sufficient supply of HEU fuel and therefore is not planning to convert to LEU.

Technical setbacks in developing new LEU fuels have postponed until 2010 at the earliest the conversion of the 31 research reactors worldwide that cannot use currently available LEU fuels. Argonne is pursuing the development of LEU dispersion fuel and LEU monolithic fuel to convert these reactors.
Argonne officials said the failures during testing of dispersion fuel are the worst they have ever experienced during fuel development. As a result, Argonne has delayed completion of dispersion fuel until 2010 and may recommend that DOE cancel further development altogether if solutions cannot be found. This would leave the reactor conversion program with only one alternative LEU fuel—monolithic fuel. According to Argonne officials, monolithic fuel has performed well in the one test conducted so far. However, many more tests are required. Because of lessons learned from dispersion fuel failures, Argonne recently delayed the projected completion date of monolithic fuel from 2008 to 2010 in anticipation of the need for additional tests. Argonne officials said they have compressed the development schedule of both dispersion and monolithic fuel as much as possible and any further technical problems will result in additional delays. Moreover, Argonne is focusing all LEU fuel development efforts on dispersion and monolithic fuel, and if both fuels fail, no LEU fuel will be available to convert the remaining reactors in the reactor conversion program.

The 31 research reactors worldwide that cannot convert to currently available LEU fuels include some of the largest reactors in terms of amount of HEU used per year. Argonne officials estimate the reactors use a total of about 728 kilograms of HEU per year. Many of the 31 reactors are used to conduct advanced scientific research that could not be done if they were to convert to currently available LEU fuels. Representatives of 8 of the research reactors told us they need HEU fuel to operate and conduct research until LEU fuel with the right performance characteristics is developed. (See table 4 for a list of the 31 reactors.) DOE’s reactor conversion program has run into problems in developing new LEU fuels intended to replace HEU in these research reactors.
The most serious problems have occurred in tests of dispersion fuel, the development of which began in 1996. According to Argonne officials, dispersion fuel would be usable in the Russian-supplied research reactors and 1 U.S. reactor. Most recently, tests of the dispersion fuel have revealed weaknesses that would make the fuel unsuitable for use in research reactors. In particular, when samples of dispersion fuel were tested in research reactors, the fuel failed unexpectedly under reactor operating conditions the fuel was designed to withstand. A number of factors illustrate the seriousness of the problems with the dispersion fuel. First, according to Argonne officials, the same problems have been encountered in separate tests and under different operating conditions in reactors in the United States, Belgium, France, and Russia. Second, the problems were unexpected and worse than encountered in previous LEU fuel development efforts. Finally, if the failures were serious enough, the fuel could leak radioactive material into the reactor coolant and cause facility contamination. If this occurred, the dispersion fuel would not be approved for use in research reactors. Argonne officials said that, as a result of these test failures, they have delayed projected completion of dispersion fuel from 2006 until 2010 to allow time for additional development and testing. Argonne officials plan to pursue options to modify dispersion fuel to make it resistant to failures. However, they said they would also consider recommending that DOE cancel further development of dispersion fuel if it is determined the fuel cannot be sufficiently improved. In addition, because of the problems encountered in the development of dispersion fuel, Argonne has shifted its primary focus to the development of monolithic fuel. Initial testing of monolithic fuel has produced positive results under the same operating conditions under which dispersion fuel failed. 
According to Argonne officials, if they are successful in developing monolithic fuel, it will offer better reactor performance than dispersion fuel and could be used to convert the remaining research reactors in the reactor conversion program to LEU. Nevertheless, the successful development of this fuel is still uncertain, and Argonne has not yet demonstrated that all remaining research reactors still using HEU could convert to it. Argonne officials said they began developing monolithic fuel relatively recently, in 2000, and to date have conducted only one test. Additional testing could reveal problems that have not yet surfaced. Furthermore, this fuel requires development of a new manufacturing method because the methods used to manufacture other research reactor fuels are not suitable for monolithic fuel. Argonne is conducting research on different manufacturing options but has not yet demonstrated that monolithic fuel can be manufactured on a large scale. Three reactor operators hoping to convert to this fuel told us it is impossible to predict whether the new LEU fuel will be successfully developed and that creating a reliable LEU fuel could take many years more than expected. Development of monolithic fuel may be delayed if Argonne encounters any problems in the fuel development process. Argonne officials said they have already delayed projected completion from 2008 to 2010 to allow time for additional testing. The schedule for developing monolithic fuel does not factor in any technical problems that may occur during testing but rather assumes that every phase of development will be successful. Argonne officials said they have already compressed the schedule as much as possible and that it would be difficult to significantly accelerate fuel development any further because each set of tests requires a fixed amount of time. 
The officials also stated that fuel development would have been delayed even further had Congress not increased funding for the reactor conversion program from $6.1 million in fiscal year 2003 to $8.5 million in fiscal year 2004, which enabled Argonne to pursue a more aggressive fuel development schedule. Assuming no further delays in fuel development, Argonne officials said the first research reactors could begin ordering new LEU fuel for conversion within 6 months of completing the development of either dispersion fuel or monolithic fuel in 2010.

In our visits to foreign and domestic research reactors that cannot convert to currently available LEU fuels, we found that reactor operators’ response to the prospect of conversion to LEU fuels varies widely. For example, the operator of the BR-2 reactor in Belgium said it had agreed to convert to LEU when feasible as a condition for continuing to receive U.S.-origin HEU fuel. In contrast, a new German reactor at the Technical University Munich designed to use HEU (the FRM-II reactor) may still not be able to convert to LEU even if Argonne is successful in developing monolithic fuel. The reactor operator has agreed to convert to a lower enrichment of HEU that is less usable in nuclear weapons. However, during our visit to the reactor, the operator said it had no plans to convert the reactor to LEU fuel because conversion would require expensive reconstruction.

Argonne has contracted with Russia to work jointly on development of new LEU fuels, but DOE has not negotiated a formal agreement with the Russian government to convert research reactors in Russia to LEU. DOE’s reactor conversion program includes 14 research reactors operating in Russia that, combined, use 225 kilograms of HEU fuel per year.
In 2002, the Secretary of Energy and Russia’s Minister of Atomic Energy issued a joint statement identifying acceleration of LEU fuel development for both Russian-supplied and U.S.-supplied research reactors as an area where joint cooperation could lead to reduction in the use of HEU. However, the Russian officials responsible for developing LEU fuels told us they are focusing on converting Russian-supplied reactors in other countries first. The officials also do not consider the conversion of research reactors in Russia to LEU a priority because security has been improved at the reactors and the reactors need HEU fuel to conduct advanced research. Furthermore, Russian officials told us that under Russian law, operators of HEU reactors in Russia are not required to convert to LEU. In fact, since 1986, Russia has been building a new research reactor that is designed to use HEU fuel rather than LEU.

Three U.S. research reactors (at the Massachusetts Institute of Technology, the University of Missouri, and the National Institute of Standards and Technology) where conversion is not currently feasible fall under NRC regulations that would require conversion to LEU if the reactor conversion program is successful in developing new LEU fuels. Furthermore, the Secretary of Energy committed to the conversion of all U.S. research reactors by 2013 in a speech on May 26, 2004. However, without federal funding to support the conversion, the reactors may continue to use HEU. For example, the operator of the Massachusetts Institute of Technology reactor said that conversion to LEU could be delayed even after a new LEU fuel is developed if DOE does not provide funding in a timely manner.

The reactor conversion program has demonstrated the potential for using LEU to produce medical isotopes on a small scale, but large-scale producers are concerned that the cost of conversion could be prohibitive.
With assistance from the reactor conversion program, one reactor in Argentina used for the production of medical isotopes converted from HEU to LEU in 2003. However, Argonne officials said the conversion was feasible only because the reactor produces medical isotopes on a small scale, using a relatively small amount of material in the production process. (Prior to converting to LEU, the Argentine reactor used less than a kilogram of HEU per year. In contrast, four large medical isotope producers targeted by the reactor conversion program, located in Belgium, Canada, the Netherlands, and South Africa, each use as much as 25 kilograms of HEU per year.) Argonne is still working to overcome problems with using LEU that limit the ability of the Argentine reactor to increase its production capacity. Argonne officials said they are 2 to 3 years away from completing work that would allow the large medical isotope producers to convert from HEU to LEU. Argonne officials said they have developed LEU materials that can be used by all medical isotope producers and only the adaptation of the production processes from using HEU to LEU remains. They said that adapting the medical isotope producers’ processes, each of which is unique in some aspect, is technically feasible and is just a matter of time. One reason why the production processes must be modified is that almost five times more LEU than HEU is required to produce the same amount of medical isotopes. The increased amount of nuclear material creates obstacles to conversion. For example, using LEU would produce more waste, which in turn could increase the burden of treating and storing the waste. In addition, the facilities, chemical processes, and waste management systems for producing medical isotopes are customized to use HEU and would require modifications to accommodate LEU. 
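The fivefold figure follows from the difference in fissile content between the two materials. A back-of-the-envelope sketch, assuming typical enrichment levels of roughly 93% U-235 for HEU and just under the 20% definitional threshold for LEU (these specific percentages are assumptions for illustration, not figures from the report):

```python
# Rough check of the "almost five times more LEU than HEU" figure.
# Assumed enrichment levels (not stated in the report): HEU targets at
# ~93% U-235; LEU just under the 20% threshold, at ~19.75%.
heu_enrichment = 0.93
leu_enrichment = 0.1975

# Isotope yield is driven by the fissile U-235 content, so the same output
# requires proportionally more total uranium at the lower enrichment.
ratio = heu_enrichment / leu_enrichment
print(f"LEU required per unit of HEU replaced: roughly {ratio:.1f}x")  # ~4.7x
```

Under these assumptions the ratio comes out near 4.7, consistent with "almost five times"; it is this extra bulk of uranium that drives the larger waste volumes and the process modifications described above.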
In discussions with the two large medical isotope producers in Belgium and Canada, both producers cited a number of factors that would make conversion to LEU costly and difficult, including the fivefold increase in the amount of LEU that would be required to achieve the same level of output as when using HEU. As part of its technical analysis, the Canadian producer is currently conducting an assessment of converting to LEU to determine whether conversion would be economically feasible. The Canadian producer currently uses U.S.-origin HEU and, under U.S. law, must agree to convert to LEU when a suitable LEU alternative is developed. (The other three large medical isotope producers currently receive their HEU from countries other than the United States and are therefore not subject to U.S. requirements to convert to LEU.) U.S. law also allows for an exception to the requirement to convert to LEU if conversion would result in a large percentage increase in operating costs. Officials at DOE and NRC, which implements the law governing U.S. HEU exports, acknowledge that medical isotope producers operate on small profit margins, and as a result, the cost of converting to LEU may be prohibitive. However, Argonne officials said that conversion to LEU could result in a more economic process. DOE officials said they would not accept a statement by the Canadian producer that conversion of medical isotope production to LEU is not economically feasible without documentation to support that conclusion.

Research reactor operators at most reactors we visited said that security had been improved because of DOE or NRC efforts. However, DOE and NRC have recognized the need to further improve security at research reactors throughout the world, including in the United States, and are engaged in separate efforts to assess research reactor security and its effectiveness.
At the foreign research reactors we visited, we observed security improvements to storage areas for HEU fuel, systems for controlling personnel access to the reactors, and alarm systems, including motion detectors and camera monitoring. DOE provided assistance to some of the foreign reactors to make the security improvements; other reactor operators had made the improvements with their own funding based on DOE recommendations. At U.S. research reactors, we saw physical security improvements around the reactor buildings, such as new fences and concrete barriers. Several operators of university research reactors told us they were using funding from DOE’s university reactor support program to purchase new security equipment. We also observed areas where further improvement could be made. For example, we visited one foreign research reactor’s facility for storing spent HEU fuel where DOE had provided only minimal assistance to improve security. According to DOE officials, DOE has generally not provided assistance to improve the security of spent HEU fuel because it is radioactive and too dangerous for potential terrorists to handle. DOE has placed a higher priority on protecting fresh fuel—fuel that has not been irradiated in a reactor—because it is easier to handle. However, operators of the fuel storage facility said that the spent fuel had been in storage for a long time and had lost enough radioactivity to be handled and potentially stolen. During a visit to another foreign research reactor, we observed a new alarm system monitoring the entrance to the reactor building, a fresh fuel vault, and motion detectors that had been installed with DOE assistance. DOE is in the process of adding further enhancements to the security of the facility. However, we also observed that the fence surrounding the facility was in poor condition, security guards at the front gate were unarmed, and there were no guards at the reactor building, which we entered without escort. 
At another research reactor, DOE identified security weaknesses and offered assistance to make security improvements. However, according to the U.S. embassy in the country where the reactor is located, the improvements had not been made as of March 2004 because the reactor operator did not act on DOE’s offer of assistance. We discussed examples that raised questions about the security of foreign research reactors with DOE officials during meetings on March 12 and 22, 2004, and they agreed that DOE needs to do more to address potential security concerns.

Recognizing that the security at some research reactors may need to be improved, DOE established a task force in 2004 to identify the highest-risk reactors and to develop options for improving security at reactors believed to be of greatest concern. The task force is currently gathering information on all research reactors worldwide, including reactors that are shut down, and prioritizing them based on a number of factors, including how much HEU is stored on site, the vulnerability of the reactors to theft of HEU or sabotage, plans for conversion to LEU and removal of HEU fuel, and the potential terrorist threat to countries where the reactors are located. The scope of the initiative comprises 802 research reactors and associated facilities, including 128 facilities possessing 20 kilograms or more of HEU on site. DOE officials said the task force addresses the need to combine and coordinate information from different sources within DOE, which did not have a comprehensive database prior to the task force to document visits and security observations made by various DOE program officials to foreign research reactors. According to DOE officials, the task force has submitted a report to the Secretary of Energy with recommendations for possible implementation by DOE, such as expediting conversion to LEU and providing additional assistance to foreign research reactors to improve security.
According to task force members, security assistance to foreign reactors could be provided by DOE, the International Atomic Energy Agency, or countries other than the United States.

NRC is also engaged in efforts to assess and improve the security at the U.S. research reactors it regulates. NRC took actions after the attacks of September 11, 2001, to improve security at U.S. research reactors—for example, by requiring some reactor operators to consider installing additional physical barriers and strengthening screening requirements for entrance to facilities. In addition, NRC is conducting assessments of the security at the research reactors it regulates and may increase security requirements based on the results of the assessments. According to NRC officials, the agency’s security evaluations of U.S. research reactors will be completed in December 2004. Based on the results of the evaluations, NRC will decide to strengthen current regulations, leave regulations as they are, or address security concerns at each reactor on a case-by-case basis.

While several research reactors are scheduled to convert to LEU fuel in the next few years, progress in converting many remaining reactors has stalled. In part, converting these reactors is a matter of completing development of new LEU fuels, which has been delayed by unforeseen technical problems. However, if DOE’s reactor conversion program is to achieve its objective to reduce and eventually eliminate the use of HEU in civilian research reactors, DOE may need to re-evaluate its policies with regard to the program. Many of the research reactors that could use currently available LEU fuels have not converted because they lack incentives, funding, or both. Until recently, the policy of DOE’s reactor conversion program has been to provide technical assistance to support conversion of research reactors to LEU but not to pay for conversion or, in particular, purchase new LEU fuel. In the case of six U.S.
university reactors, DOE has not made it a high priority to purchase LEU fuel for conversion (or, at another reactor, to complete the conversion process by removing HEU fuel and shipping it to a DOE facility for disposal). While many of the U.S. reactors that could convert to LEU use only a small amount of HEU per year, converting them would demonstrate DOE’s commitment to the nonproliferation objective of the reactor conversion program. DOE has generally expected the operators of foreign research reactors that use U.S.-origin HEU fuel to purchase new LEU fuel with their own funds. The policies DOE has relied on to influence operators to convert to LEU—requiring that reactor operators agree to convert as a condition of receiving U.S. HEU exports or returning spent fuel to the United States—do not work for reactors using so little HEU that they can operate for many years without replacing their fuel. Without funding for conversion, it is possible these reactors could continue using HEU for years. DOE may need to consider offering additional incentives to foreign reactors, including purchasing new LEU fuel, to influence them to convert to LEU.

Regardless of progress in converting domestic and foreign research reactors to LEU in the near term, delays in completing the development of new LEU fuels mean that other research reactors will continue to use HEU until at least 2010. If the reactor conversion program experiences additional problems in one or both of the two LEU fuels currently under development, some research reactors could be left without a viable option for conversion to LEU. Given the continuing use of HEU at these research reactors, DOE and NRC efforts to evaluate and improve reactor security are essential components of the overall effort to reduce the risk of proliferation of HEU at civilian research reactors.
In order to further reduce the use of HEU in research reactors in the United States and abroad, we recommend that the Secretary of Energy and the Administrator of the National Nuclear Security Administration take the following three actions:

- consider placing a higher priority on converting the six remaining university research reactors in the United States that can use currently available LEU fuel;
- once a reactor has been converted, place a high priority on removing the HEU fuel and transporting it to the appropriate DOE facility; and
- evaluate the costs and benefits of providing additional incentives to foreign research reactors that use U.S.-origin HEU fuel to convert to LEU, particularly to reactor operators that are willing to convert but do not have sufficient funding to do so.

We provided draft copies of this report to the Departments of Energy and State and to NRC for their review and comment. Comments from the Departments of Energy and State are presented as appendixes III and IV, respectively. NRC’s written comments were not for publication. DOE, State, and NRC generally agreed with the recommendations in our report and provided detailed comments, which we incorporated into the report as appropriate. In its comments, DOE noted that the United States has 11 more research reactors to convert to the use of LEU fuels, with conversion currently feasible for 6 of the reactors. However, DOE’s February 2004 project execution plan for its reactor conversion program identifies 14 U.S. research reactors still using HEU fuel that are included in DOE’s reactor conversion program, with conversion currently feasible for 8 of the reactors. We used the number of reactors from DOE’s project execution plan in our report. In its comments, State questioned DOE’s cost estimate for converting U.S. research reactors where conversion to LEU fuel is currently feasible.
State noted that DOE’s cost estimate of $5 million to $10 million per reactor where conversion to LEU fuel is currently feasible seems much too high, especially in comparison with DOE’s expenditures of about $0.4 million to $1.6 million per reactor to convert 11 U.S. university reactors to LEU fuel between 1984 and 2000. State wrote that the DOE office that administers the program for supporting U.S. university research reactors has been reluctant to fund the conversion of more research reactors and has a tendency to overstate the potential costs to deflect pressure to spend money on conversions. We asked DOE officials what support they had for the cost estimate. In response, a DOE official said that DOE does not have documentation to support its cost estimate.

In another comment, State suggested we include recognition of the growing number of new and planned research reactors around the world that have been designed to use LEU fuel. State wrote that modern world-class reactors do not need HEU fuel to conduct high-quality research. DOE officials also provided information on the use of LEU fuel in new research reactors constructed since the inception of its reactor conversion program in 1978. Although our report does not focus on new research reactors designed to use LEU fuel, we agree that this is a positive development in keeping with the objective of DOE’s reactor conversion program and we added a footnote recognizing these new reactors.

To review the progress of the reactor conversion program, we analyzed program documentation, including DOE’s February 2004 RERTR Program Project Execution Plan. We also interviewed key DOE, Argonne, NRC, and State Department officials; conducted site visits to foreign and U.S. research reactors and interviewed reactor operators by telephone; and attended an annual international conference organized by DOE’s reactor conversion program.
For site visits and telephone interviews, we selected foreign and domestic research reactors from three categories: reactors that had converted to LEU, reactors that could convert using currently available LEU fuels but were still using HEU, and reactors that could not convert using currently available LEU fuels. Within each of the three categories of reactors, we selected a nonprobability sample of reactors based on a number of criteria such as reactor types, including U.S.-supplied reactors, Russian-supplied reactors, and reactors that use HEU in the production of medical isotopes. We visited 5 research reactors in the United States, including 3 that had converted to LEU and 2 that cannot convert to currently available LEU fuels and are still using HEU. We conducted phone interviews with reactor operators from 1 other U.S. reactor that cannot use currently available LEU fuels and all 6 of the U.S. university research reactors that can convert to LEU but are still using HEU. We also visited 10 foreign research reactors in Belgium, Germany, the Netherlands, Poland, Portugal, Romania, Russia, and Ukraine. These included 2 converted reactors, 4 reactors that can use LEU fuel but have not yet converted, and 4 reactors that still require HEU. (See table 5.) In our site visits and telephone interviews, we asked a standard set of questions (depending on the conversion status of the reactor) on technical aspects of converting to LEU, cost of conversion, impact of conversion on reactor performance, and assistance provided by DOE’s reactor conversion program. To review the progress in developing new LEU fuels for use in research reactors, we conducted in-depth interviews with Argonne officials responsible for managing LEU fuel development; operators of reactors that plan to convert to new LEU fuels when they are developed; and fuel development experts at the Bochvar Institute in Russia, which is collaborating with Argonne. 
At the annual international conference organized by DOE’s reactor conversion program, we participated in sessions on LEU fuel development, and we reviewed technical papers on the progress of fuel development. For technical expertise, we relied on GAO’s Chief Technologist, who participated in meetings with Argonne officials and reviewed the information that Argonne provided. We also used the interviews and annual conference to review progress in the development of LEU for use in the production of medical isotopes. In addition, we interviewed two of the four large medical isotope producers (in Belgium and Canada) that are currently using HEU to produce medical isotopes and that would be candidates for conversion to LEU once Argonne completes development.

To gather information on DOE and NRC efforts to improve research reactor security, we interviewed officials at those agencies and discussed security improvements with the reactor operators we interviewed. We also observed security improvements at research reactors we visited. However, we did not evaluate the effectiveness of the security at research reactors or DOE and NRC efforts to improve security.

We obtained data from DOE and Argonne on the conversion status of the 105 research reactors included in the reactor conversion program, the amount of HEU used per year by the 105 reactors (including the amount used prior to conversion for the 39 research reactors now using LEU), and DOE expenditures for the reactor conversion program since its inception in 1978. All amounts are in constant 2003 dollars, unless otherwise noted. We assessed the reliability of data we obtained through discussions with Argonne officials. We also obtained responses from Argonne officials to a series of data reliability questions covering issues such as quality control procedures and the accuracy and completeness of the data.
Based on our assessment, we determined that the data we obtained from DOE and Argonne were sufficiently reliable for our purposes. We conducted our work from July 2003 to July 2004 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Secretary of Energy; the Administrator, National Nuclear Security Administration; the Secretary of State; the Chairman, NRC; the Secretary of Homeland Security; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report include Joseph Cook, Jonathan McMurray, Kirstin B.L. Nelson, Peter Ruedel, F. James Shafer Jr., and Keith Rhodes, GAO’s Chief Technologist.

DOE estimates that the reactor conversion program will cost approximately $213 million through the program’s projected end in 2012. Expenditures since the program’s inception in 1978 through fiscal year 2003 totaled approximately $139 million in constant 2003 dollars. (See fig. 3.) Costs for the reactor conversion program are broken into four categories:

Fuel development includes all of the activities associated with testing and analyzing new LEU fuels, such as the LEU dispersion and monolithic fuels that are currently under development. This activity also includes developing the methods for manufacturing new LEU fuels. Most of the reactor conversion program costs over the life of the program are in this category.

Reactor analysis includes studying the conversion of individual research reactors, both domestic and foreign, once a suitable LEU fuel has been developed. For example, Argonne provides technical assistance to research reactors to determine the impact of conversion on the reactors’ performance and safety.
This category does not include the cost of purchasing LEU fuel for research reactors. For example, the responsibility for purchasing LEU fuel for U.S. university reactors belongs to another program in DOE that is separate from the reactor conversion program. Development of LEU for medical isotope production includes activities associated with testing and analyzing LEU materials to replace HEU in the production of medical isotopes. This activity also includes development of manufacturing and waste management processes for using LEU instead of HEU and technical assistance to medical isotope producers. Assistance to Russia includes funding to support research and development on new LEU fuels for Russian-supplied reactors. It also includes analysis of the impact of conversion to LEU on Russian- supplied reactors. The assistance to Russia was previously funded through a one-time grant of approximately $1.7 million, about two-thirds of which has been spent, from the State Department’s Nonproliferation and Disarmament Fund (NDF). In addition to the $139 million spent by the reactor conversion program, DOE’s university reactor support program spent approximately $10 million between 1984 and 2000 to convert 11 university research reactors in the United States, according to an official at the Idaho National Engineering and Environmental Laboratory (INEEL). The cost of converting each reactor varied from around $400,000 to $1.6 million and was primarily for the cost of fabricating the fuel. The costs varied depending on the type of fuel and where it was manufactured. DOE’s projected costs for completing the reactor conversion program total about $74.7 million. (See table 6.) This amount includes $26.3 million for reactor analysis, $25.8 million for fuel development, $4.8 million for the development of LEU for medical isotope production, and $17.8 million for assistance to Russia. 
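As a quick arithmetic check (illustrative only; the figures are the category amounts reported above, in millions of constant 2003 dollars), the four categories sum to DOE’s $74.7 million projection:

```python
# Projected remaining reactor conversion program costs, in millions of
# constant 2003 dollars, as reported in the text (table 6).
projected_costs = {
    "reactor analysis": 26.3,
    "fuel development": 25.8,
    "LEU for medical isotope production": 4.8,
    "assistance to Russia": 17.8,
}

# Sum the categories and round to one decimal place to match the report.
total = round(sum(projected_costs.values()), 1)
print(f"Total projected cost: ${total} million")  # $74.7 million
```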
DOE’s cost estimates are based on the assumption that at least one of the two LEU fuels that Argonne is developing will be successful and will be used for the reactor conversion program. DOE also assumes that Russia and other countries will continue to assist Argonne in conducting fuel tests as necessary for fuel development. DOE’s estimates do not include the cost of purchasing new LEU fuel to convert research reactors. These costs are expected to be funded by other DOE programs or by the operators of foreign research reactors.
Nuclear research reactors worldwide use highly enriched uranium (HEU) as fuel and for the production of medical isotopes. Because HEU can also be used in nuclear weapons, the Department of Energy's (DOE) Reduced Enrichment for Research and Test Reactors program is developing low enriched uranium (LEU), which would be very difficult to use in weapons, to replace HEU. To date, 39 of the 105 research reactors in the United States and abroad targeted by DOE have converted to LEU fuel. GAO was asked to examine (1) the status of the remaining research reactors in converting to LEU fuel, (2) DOE's progress in developing new LEU fuels for reactors where conversion is not yet technically feasible, (3) DOE's progress in developing LEU for the production of medical isotopes, and (4) the status of DOE and Nuclear Regulatory Commission (NRC) efforts to improve security at research reactors. Currently, conversion to LEU fuel is technically feasible for 35 of the 66 research reactors in DOE's program that still use HEU fuel, but most do not have plans to convert. In the United States, 8 research reactors, including 6 university research reactors, have not converted because DOE has not provided the necessary funding. Of the 20 foreign research reactors that use U.S.-origin HEU fuel, 14 do not have plans to convert because they have a sufficient supply of HEU fuel and either do not want to incur the additional cost of conversion or do not have the necessary funding. Finally, only 1 of 7 Russian-supplied research reactors that could use LEU fuel is scheduled to convert. Conversion to LEU fuel is not technically feasible for 31 research reactors worldwide that still use HEU fuel. DOE has experienced technical setbacks in fuel development that have postponed the conversion of the 31 reactors until 2010 at the earliest. One fuel failed unexpectedly in testing, and DOE may cancel further development, depending on the results of additional tests. 
Initial testing of another LEU fuel produced positive results, but additional testing is required and the fuel will not be developed until 2010 at the earliest. Separately from the development of LEU fuel, DOE is developing LEU to replace HEU in the production of medical isotopes. DOE has not yet completed the work that would enable conversion of large-scale medical isotope production to LEU. One reactor has converted to LEU for small-scale production. However, large-scale producers are concerned that the cost of converting to LEU could be prohibitive. DOE and NRC have taken steps to improve security at foreign and U.S. research reactors. While operators at most research reactors we visited said that security had been upgraded through DOE or NRC efforts, we observed areas where further improvements could be made. Recognizing the possible need for further improvements, DOE and NRC are engaged in separate efforts to assess and improve security.
Children with mental health conditions can be treated with psychotropic medications, psychosocial therapies, or a combination of both. Psychotropic medications can be very effective in treating children with mental health conditions, but they may also produce side effects, some of which can be serious. For example, according to the American Academy of Child & Adolescent Psychiatry (AACAP), medications used to treat ADHD can reduce symptoms such as hyperactivity in children, as well as improve their attention and increase their ability to get along with others. These medications have been widely tested in children and are generally considered safe; however, ADHD medications have also been associated with side effects ranging from mild to serious, such as sleeplessness, loss of appetite, tics, agitation, hallucinations, liver problems, and suicidal thoughts. Research has shown that foster children take psychotropic medications at much higher rates than other children covered by Medicaid. For example, we previously reported that while about 5 to 10 percent of nonfoster children in five states’ Medicaid programs took a psychotropic medication in 2008, rates among children in foster care were about 20 to 39 percent. The use of five or more medications concomitantly (that is, at the same time), while rare, was also higher among children in foster care. We reported that several factors may have contributed to the higher utilization rates among foster children, such as the increased prevalence and greater severity of mental health conditions among these children. Studies have shown that rates of psychotropic medication use among all children have increased. For example, one study found that the percentage of children’s doctor visits involving psychotropic medications increased by 75 percent from 1996 to 2007—from 6 percent to about 11 percent. Among these visits, those involving two or more medications rose from about 14 percent to about 20 percent. 
Experts have identified a number of potential factors that could account for the increased use of psychotropic medications. For example, AACAP’s guidelines on the use of psychotropic medications for children identified explanations such as the expanding evidence base demonstrating the efficacy of these medications for children and the efforts of pharmaceutical companies to market drugs to prescribers and consumers. Pharmacy benefit managers are firms that administer prescription drug benefits on behalf of health insurance plans. Furthermore, some studies found that the prescribing of antipsychotics was higher among publicly insured children than among privately insured children, and even higher among foster children. One study found that children enrolled in Medicaid were prescribed antipsychotic medications at over four times the rate of children with private insurance in 2004. In addition, AHRQ funded a study of antipsychotic prescribing based on Medicaid claims from 13 states, which found that utilization of antipsychotics in 2007 was much higher among foster children than among nonfoster children in Medicaid—12.4 percent on average versus 1.4 percent, respectively. Prescribing of antipsychotics is a concern for CMS and state Medicaid programs not only because of safety issues, but also because these medications are costly. They represented the single largest drug expenditure category for Medicaid in 2007—over $2.8 billion. One reason for particular concern about growing use of antipsychotics and other psychotropic medications in children is that manufacturers do not always test medications for use in children. Manufacturers are responsible for conducting clinical trials and demonstrating their products’ safety and efficacy to FDA, which is responsible for making decisions about whether and how medications can be marketed for children and for ensuring that manufacturers incorporate information from pediatric clinical trials into medication labels when required. 
However, because children are a small part of the overall population and physicians can prescribe medications off-label to children even if the medications have been tested only in adults, manufacturers may lack economic incentives to conduct trials with children. According to a recent analysis by FDA scientists, fewer than half of all medications were adequately labeled for pediatric use in 2009. The Best Pharmaceuticals for Children Act (BPCA) and the Pediatric Research Equity Act (PREA) address testing of medications in children by authorizing FDA to provide incentives for or require manufacturers to conduct pediatric studies in certain circumstances. (For more information on BPCA and PREA and label changes for psychotropic medications resulting from these laws, see app. III.) Psychosocial therapies are mental health treatments that generally involve sessions with a mental health professional that are designed to reduce patients’ emotional or behavioral symptoms. Such therapies may be used instead of, or in combination with, psychotropic medications to treat children with mental health conditions. Psychosocial therapies that have been shown to be effective in treating mental health conditions may be referred to as evidence-based therapies (EBT). While there is no standard definition of what constitutes “evidence-based,” some federal agencies and provider organizations evaluate and compile information on available therapies. For example, SAMHSA maintains the National Registry of Evidence-based Programs and Practices, a list of treatments that have been assessed by independent evaluators and rated on the strength of the evidence showing their effectiveness. Provider organizations may also make recommendations to providers on which treatments to use. For example, AACAP publishes practice parameters that contain recommendations for treating specific disorders, with each recommendation labeled to indicate the strength of the evidence underlying it.
Although psychosocial therapies may be effective for many children, the Institute of Medicine and others have reported that a shortage of mental health providers in general is a major factor affecting access to services, especially for children. Furthermore, finding a mental health professional who has been trained to provide a specific EBT can be a challenge because training in EBTs is not uniformly required in medical and professional schools. Children enter foster care when they have been removed from their parents or guardians and placed under the responsibility of a state child welfare agency, often because of maltreatment at home. Removal from the home can occur for multiple reasons, including parental violence, substance abuse, severe depression, or incarceration. According to ACF, 46 percent of children investigated by child welfare services came to a state’s attention primarily because of a report of neglect, and 27 percent had experienced physical abuse as the most serious form of recorded maltreatment. Other children are referred when their own behaviors or conditions are beyond the control of their families or they pose a threat to themselves or the community. Children in foster care can experience traumatic stress due to maltreatment experienced at home as well as the trauma of being removed from their homes. Trauma significantly increases the risk of mental health problems, difficulties with social relationships and behavior, physical illness, and poor school performance. Furthermore, child mental health experts have stated that the traumatic stress symptoms foster children may experience are often the same as symptoms that can indicate other mental health conditions, which may lead to misdiagnosis and inappropriate treatments. When children are taken into foster care, the state’s child welfare agency becomes responsible for determining where the child should live and providing the child with needed supports. 
The agency may place the foster child in the home of a relative, with unrelated foster parents, or in a group home or residential treatment center, depending on the child’s needs. The agency is responsible for arranging needed services, including mental health services. Federal officials, providers, and child and mental health advocacy groups have identified several factors that create challenges for foster children in receiving appropriate mental health services. Foster children can experience frequent changes in their living placements, which can lead to a lack of continuity in mental health care, and new providers may not have the medical history of the patient. This lack of stability can lead to treatment disruptions and can increase the number of medications prescribed. Coordinating mental health care for foster children may be difficult for both the medical provider and the case worker because multiple people are making decisions on the child’s behalf. In addition, caseworkers in child welfare agencies may have large caseloads, making it difficult for them to ensure that foster children’s mental health needs are being met. As a condition of receiving federal child welfare grant funding, state child welfare agencies must annually submit plans to ACF that, among other things, address the mental and other health needs of foster children. In 2011, the Child and Family Services Improvement and Innovation Act (Child Improvement Act) required states, as part of these plans, to identify protocols for monitoring foster children’s use of psychotropic medications and to address how emotional trauma associated with children’s maltreatment and removal from their home will be monitored and treated. As part of their June 2012 Annual Progress and Service Reports to ACF, states provided information on their plans for monitoring psychotropic medication prescribing and treatment of emotional trauma for foster children.
Prior to the Child Improvement Act, many states had already implemented policies and practices regarding prescribing psychotropic medications for foster children, such as issuing written guidelines for prescribers, collecting data to monitor prescribing, requiring informed consent from relevant parties before filling a prescription, and requiring consultation with a mental health specialist for prescriptions exceeding certain dosage thresholds. On average, 6.2 percent of noninstitutionalized children in Medicaid nationwide took psychotropic medications during a calendar year from 2007 through 2009, and 21 percent of those children took an antipsychotic medication. The estimates for privately insured children were lower. About 14 percent of children in Medicaid had a potential need for mental health services, and over two-thirds of them did not receive any services. CMS and states have made efforts to ensure that children receive appropriate mental health services, but CMS’s ability to monitor their receipt of services for which they were referred is limited because CMS does not collect information from states on whether children in Medicaid have received services for which they were referred. Our analysis of MEPS data from 2007 through 2009 found that, on average, 6.2 percent of children in Medicaid nationwide—those who were not in institutions or in foster care and were ages 0 through 20—took at least one psychotropic medication during a calendar year. The comparable rate for privately insured children was lower—4.8 percent. The utilization rate was over twice as high for boys as for girls in Medicaid—8.4 percent versus 3.9 percent. There is a higher prevalence among boys of certain mental health disorders for which psychotropic medications are prescribed, which is one possible explanation for this difference. 
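Comparisons like the 6.2 percent versus 4.8 percent utilization rates above are typically assessed with a two-proportion z-test. The sketch below is illustrative only: the rates come from the text, but the sample sizes are hypothetical (MEPS sample counts are not given in this excerpt), so it demonstrates the method rather than reproducing GAO’s analysis:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions."""
    # Pooled proportion under the null hypothesis of equal rates.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # erfc(|z| / sqrt(2)) is the two-sided tail probability of a
    # standard normal variable.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Rates from the text; sample sizes of 4,000 per group are hypothetical.
z, p = two_proportion_z(0.062, 4000, 0.048, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these assumed sample sizes, the z statistic exceeds the conventional 1.96 threshold, consistent with a statistically significant difference at the 5 percent level.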
Among children in Medicaid who took psychotropic medications, utilization was highest among youth ages 18 through 20—12.7 percent—and lowest among children under age 5—less than 1 percent. Utilization rates for privately insured children were highest among children ages 12 through 17—7.5 percent. (See app. III for more-detailed information on children who took psychotropic medications.) Nationwide, almost half of children in Medicaid who took psychotropic medications took multiple psychotropic medications in a 1-year period. Specifically, our analysis found that 28 percent of the children in Medicaid who took psychotropic medications took two medications within a year, and 16 percent took three or more. Among privately insured children who took psychotropic medications, 22 percent took two medications within a year, and 11 percent took three or more. This finding may reflect concomitant use of multiple medications—that is, medications taken in combination—or providers prescribing different drugs over time to find one that works best for the child. The most common types of psychotropic medications taken by children were ADHD medications, antidepressants, and antipsychotics. We found that, of children who took psychotropic medications, about three-fourths took ADHD medications. According to a recent analysis by FDA officials, utilization of ADHD medications increased 46 percent from 2002 to 2010, and methylphenidate, a stimulant used to treat ADHD, was the most commonly prescribed medication for adolescents ages 12 through 17. Antidepressants were the second most common type of medication taken—about one-fourth of the children in Medicaid and one-third of privately insured children who were taking psychotropic medication took an antidepressant. (See table 1.) G. Chai et al., “Trends of Outpatient Prescription Drug Utilization in US Children, 2002-2010,” Pediatrics, vol. 130, no. 1 (2012).
The difference between children in Medicaid and privately insured children was statistically significant. Our analysis also found that children in Medicaid were over twice as likely as privately insured children to take an antipsychotic medication. Overall, about 1.3 percent of children in Medicaid and 0.5 percent of privately insured children took antipsychotics. Whether children were covered by Medicaid or private insurance, the majority of children who took an antipsychotic were males ages 6 through 17; differences between children in Medicaid and those with private insurance were not statistically significant. In addition, some children who took psychotropic medications did not have any mental health office visits in the same year and thus did not receive a medication-management follow-up visit, which pediatric provider organizations recommend. For example, guidelines from the American Academy of Pediatrics state that physicians consider follow-up visits every 3 to 6 months for children taking ADHD medications to monitor the child’s behavior and medication side effects. Our analysis of national survey data from 2007 through 2009 indicated that 14 percent of noninstitutionalized children in Medicaid and 9 percent of noninstitutionalized privately insured children had a potential mental health need. As described earlier, these estimates of potential mental health need are based on a broad measure of a child’s emotional or behavioral impairment, as reported by parents. One possible explanation for the higher level of potential mental health need among Medicaid children could be the lower average family incomes of children enrolled in Medicaid compared to those of children with private insurance. Some studies have found that low income is associated with an increased prevalence of mental health conditions. Our analysis also indicated that most children with a potential mental health need did not receive mental health services, regardless of their insurance type.
For example, over 80 percent of children with a potential need, whether covered by Medicaid or private insurance, did not receive any psychosocial therapy, and over 70 percent did not have any mental health office visits. (See table 2.) While it is not possible to assess which services a child may need on the basis of survey data, our analysis indicates that most children whose parents indicated a significant level of behavioral impairment did not even receive a mental health evaluation. Even when a child received at least one mental health service, it is not possible to know whether his or her mental health needs were fully met. About one-fourth of all children with a potential mental health need had a mental health office visit. Among Medicaid children who had mental health office visits, about 40 percent saw a psychiatrist and 30 percent saw a pediatrician at least once in a calendar year. (See table 3.) Just over half of children with any mental health office visits received psychosocial therapy, which suggests that nearly half of all mental health office visits involved another type of service, such as diagnostic assessment or medication management. Among children in Medicaid with at least one mental health office visit, the average number of visits in a year was about seven. CMS has initiated activities to help ensure that children in Medicaid receive appropriate mental health services. 
For example, CMS has begun working with states to improve their monitoring of the prescribing of psychotropic medications, and in August 2012 the agency issued an Informational Bulletin to states titled, “Collaborative Efforts and Technical Assistance Resources to Strengthen the Management of Psychotropic Medications for Vulnerable Populations.” The bulletin provided a link to additional information on the CMS website that discusses practices some states have employed to enhance the ability of their Drug Utilization Review programs to monitor psychotropic medication prescribing, such as using Drug Utilization Reviews to identify and contact providers whose prescribing patterns vary significantly from recommended standards of care for children. In addition, through its voluntary Pediatric Quality Measures Program, CMS is working with states to collect data on three quality measures related to mental health, including receipt of follow-up care for children prescribed ADHD medication. CMS has also begun working with states to improve access to mental health services through the EPSDT benefit. Most children in Medicaid are entitled to services under EPSDT, which covers regular checkups and screenings as well as treatment services. Children found to have mental health needs at a screening are generally entitled to treatment services to address those needs, whether or not the services are typically covered under the Medicaid program in the child’s state. According to CMS, its National EPSDT Improvement Workgroup has a Behavioral Health Subgroup that is developing an action plan to determine steps CMS can take to work with states to promote access to effective mental health care for children through EPSDT. The agency also told us it plans to issue an Informational Bulletin on children’s behavioral health screening, referral, and treatment, which it will disseminate to state Medicaid agencies and others in early 2013.
As required under federal law, states annually submit data to CMS on the provision of services under EPSDT, such as the number of children referred for additional services as a result of checkups and screenings. These data may include referrals for mental health services for children found to have mental health needs during screenings. We previously reported that CMS does not impose requirements beyond this federal law and, accordingly, does not collect information from states on whether children received the services for which they were referred, which limits CMS’s ability to monitor whether children in Medicaid are receiving the services they need. We recommended that CMS work with states to identify options for collecting such information. CMS officials told us that the agency had contracted for a study examining options for collecting data from state Medicaid programs on EPSDT referrals; the study concluded that states were unlikely to be able to produce accurate information on referred services using existing data sources. The officials also said that, as of September 2012, CMS was in the process of identifying alternative approaches for collecting data on children’s receipt of treatment services, such as analyzing national survey data. However, the alternatives currently under consideration would not produce the data from all states that would enable CMS to monitor at the state level children’s receipt of treatment services for which they were referred through Medicaid. States have also made efforts to increase children’s access to mental health services. Because many children receive mental health services from nonspecialists, 28 states have programs to facilitate consultation between primary care providers and child psychiatrists. 
For example, under Washington state’s Second Opinions program, psychiatrists review Medicaid psychotropic prescriptions that exceed certain safety thresholds—such as dose or use of combinations of medications—and provide a second opinion to the prescribing physician. An evaluation of Washington’s Second Opinions program and its Partnership Access Line—the state’s voluntary program for physicians seeking child psychiatry consultations—found that such consultations are cost-effective and that they have resulted in an increase in referrals for psychosocial therapy and a decrease in prescribing of antipsychotic medications. Similarly, Massachusetts’s program, the Massachusetts Child Psychiatry Access Program, has been found to increase the ability of primary care providers to treat or refer their patients with mental health needs. In a survey of primary care providers participating in the Massachusetts program, 63 percent reported that they were usually able to meet the needs of their psychiatric patients, an increase from only 8 percent of providers before they enrolled in the program. Some states have also begun to integrate mental health services into primary care settings in an effort to increase access to mental health services. For example, through the “State Option to Provide Health Homes for Enrollees with Chronic Conditions,” authorized under the Patient Protection and Affordable Care Act, states have the option to implement person-centered health care delivery systems, known as health homes. Health homes, which render comprehensive services for individuals with chronic conditions, must address mental health and substance abuse treatment needs. According to CMS, a number of states are developing proposals to establish health homes as part of their Medicaid programs, and CMS is providing technical assistance to these states. ACF reported that 18 percent of foster children ages 1 through 19 took psychotropic medications.
In addition, ACF found that 30 percent of foster children with a potential mental health need did not receive mental health services in a 12-month period. HHS is taking several steps to promote appropriate mental health treatment for foster children. ACF reported that 18 percent of foster children were taking one or more psychotropic medications at the time they were surveyed, although utilization varied widely by living arrangement. ACF reported that foster children who lived in group homes or residential treatment centers had much higher rates of psychotropic medication use than foster children living in nonrelative foster homes or formal kin care—48 percent versus 14 percent and 12 percent, respectively. The higher utilization rate among children living in group homes or residential treatment centers may be related to these children having higher rates of potential mental health need—about 69 percent had a potential mental health need compared to about 44 percent of children living in nonrelative foster homes. Another study found that child welfare workers were more likely to place children with behavior problems in a group living arrangement than with a foster family. In addition to reporting on overall use of psychotropic medications, ACF reported on concomitant use of psychotropic medications and on the use of antipsychotics by foster children. Among foster children who took psychotropic medication, ACF reported that 13 percent took three or more psychotropic medications concomitantly. ACF also reported that 6.4 percent of foster children took an antipsychotic medication and that the majority were ages 6 through 11. Estimates of concomitant use are based on NSCAW II phase 2 data (collected during October 2009 through January 2011). This estimate does not include children in formal kin care. Less than 1 percent of children in formal kin care were taking three or more medications at the time of the survey. Stambaugh et al., OPRE Report #2012-33, 4.
ACF reported that 30 percent of foster children with a potential mental health need had not received any mental health services within the previous 12 months or since the start of the child’s living arrangement, if less than 12 months. (See fig. 1.) Although 70 percent of foster children with a potential mental health need received at least one mental health service, it is not possible to know the extent to which the child’s mental health needs were met. In response to concerns related to the prescribing of psychotropic medications for foster children, HHS convened an interagency workgroup on the use of psychotropic medications for foster children in summer 2011. Led by ACF, the workgroup also has representatives from CMS, SAMHSA, NIH, AHRQ, and FDA. The workgroup developed a plan to expand the use of evidence-based screening, diagnosis, and interventions; strengthen the oversight and monitoring of psychotropic medications; and expand the overall knowledge and evidence base regarding medications and psychosocial treatments for foster children with mental health or trauma-related needs. The primary activity of the workgroup is to meet regularly to share information about individual and joint agency activities related to foster children’s mental health. For example, the workgroup convened a meeting in September 2011 with researchers to increase federal staff’s knowledge about psychotropic medication use among foster children, their mental health needs, the extent to which best practices on state oversight and monitoring of psychotropic medications exist, and future data needs. On the basis of the efforts of the workgroup, HHS has identified several areas where more research is needed, including clinical trials related to foster children’s use of psychotropic medications, studies on differences between trauma and mental health diagnoses, and studies on EBTs for foster children. 
Three member agencies of the workgroup—ACF, CMS, and SAMHSA— have collaborated on certain activities regarding use of psychotropic medications among foster children. In November 2011, the three agencies sent a joint letter to state Medicaid directors, mental health authorities, and child welfare directors. This letter contained information on the three agencies’ activities and resources that aim to improve the mental health and well-being of foster children, such as ACF’s Child Welfare Information Gateway website and SAMHSA’s National Traumatic Stress Initiative grants. These agencies also jointly held several webinars to disseminate this information. In addition, they hosted a summit in August 2012 for state child welfare, Medicaid, and mental health agencies to help states develop psychotropic drug oversight practices and to facilitate interagency coordination for foster children. ACF is also working with CMS to develop quality measures specific to foster children as part of CMS’s Pediatric Quality Measures Program. In addition to participating in the workgroup, ACF is working with states to build the capacity of child welfare agencies to effectively respond to the complex needs of foster children. For example, in addition to overseeing the NSCAW surveys, ACF took the following actions: ACF issued a Program Instruction to help states implement the new requirements in the Child Improvement Act. In the Program Instruction, ACF described five components that state child welfare agencies should include in their plans to monitor psychotropic medications. In addition, the instructions provided guidelines to states on treating and monitoring children experiencing emotional trauma, another requirement of the law. As of August 2012, ACF officials told us that all states had submitted their plans as required by the Child Improvement Act and the agency was reviewing them. 
ACF published two Information Memoranda for state child welfare agencies—one on promoting social and emotional well-being for children in child welfare and one on best practices in state psychotropic drug oversight. ACF has provided information about psychotropic medication prescribing and monitoring on its Child Welfare Information Gateway—a website that provides resources for child welfare agencies. SAMHSA has provided funding, guidance, and information to states and others to increase appropriate mental health services among foster children. For example: For its fiscal year 2012 National Child Traumatic Stress Initiative grants, SAMHSA placed a specific emphasis on child welfare. The initiative supports services for children experiencing trauma-related behavioral health problems and places a strong focus on using evidence-based psychosocial therapies. SAMHSA worked with AACAP to develop guidance for service providers and agency leaders on the use of psychotropic medications among children, including foster children. Through its Systems of Care grants, SAMHSA has also provided funding for foster children’s mental health services; the agency reported in 2011 that 22 of its grantees had a child welfare focus or a partnership with a child welfare organization. The 2012 SAMHSA-sponsored National Children’s Mental Health Awareness Day event had a special focus on foster children, and SAMHSA is planning a meeting in January 2013 titled “Improving Outcomes for African American Males in Foster Care.” NIH spent about $1.2 billion on research projects related to children’s mental health; NIMH was responsible for most of this funding, with the rest being spent by NICHD. Other HHS agencies—FDA, AHRQ, and CDC—spent an estimated $16 million on external research projects related to children’s mental health. 
During fiscal years 2008 through 2011, NIH spent about $1.2 billion to support over 1,200 children’s mental health research projects, most of which were supported by NIMH. NIMH accounted for 81 percent—about $956 million—of NIH’s funding for children’s mental health research. (See fig. 2.) NIMH’s $956 million (which represented about 18 percent of NIMH’s total research budget in fiscal years 2008 through 2011) supported slightly over 900 research projects related to mental health in children ages 18 or younger. The most-commonly studied mental health conditions were ADHD, major depression, and bipolar disorder. Nearly all of the projects (97 percent) were conducted externally under grants and contracts, with expenditures of about $861 million. The remaining research projects, for which NIMH spent about $95 million, were conducted internally by NIMH scientists. For example, NIMH scientists used brain imaging techniques to examine how brain development in children with ADHD differed from typical brain development in children. About half of NIMH-sponsored research projects (482) examined mental health treatments for children, with more projects studying psychosocial therapies than psychotropic medications. (See table 4.) Of the 482 treatment research projects, 325 involved testing psychosocial therapies. For example, one project tested the effect of therapies such as parent training and skill building on preventing negative outcomes such as substance use and school truancy in middle-school-aged girls in foster care. Psychotropic medication studies accounted for 137 of the 482 research projects. For example, one project tested the effect of an antidepressant medication on brain activity in children with anxiety disorders, with the goal of better understanding how the medication works. NIMH also funded 66 research projects that tested the effects of combinations of treatments, such as two medications or a medication and a psychosocial therapy.
For example, one project tested the effect of parent training—a psychosocial treatment—with and without the addition of a stimulant (or of a stimulant and an antipsychotic medication) on children with a disruptive behavior disorder in combination with ADHD and severe aggression. NIMH officials told us that the agency does not have quotas for the kinds of treatment studies it funds and encourages grant applications from researchers for both psychosocial and psychotropic treatment studies. However, officials noted that NIMH expects to fund more projects examining psychosocial therapies than projects that test medications because researchers may perceive psychosocial therapies to be less risky for children and not all psychotropic medications are approved for pediatric use. NICHD spent about $218 million to sponsor 324 research projects related to mental health in children in fiscal years 2008 through 2011. Almost all projects (98 percent) were conducted externally under grants and contracts, with expenditures of about $195 million. NICHD’s own scientists conducted the remaining research projects, for which NICHD spent about $23 million. Of the 324 research projects, 72 focused on mental health conditions, such as depression and ADHD. For example, NICHD sponsored one study that examined the effectiveness of a mood stabilizing psychotropic medication—lithium—in children ages 7 to 17 with bipolar disorder. NIMH and NICHD collectively spent about $309 million during this period to support almost 350 research projects that examined children’s mental health in minority and health disparity populations. NIMH sponsored 297 such projects, with expenditures of about $283 million, and NICHD spent about $26 million to sponsor 50 projects. For example, NIMH sponsored a study of Latino adolescents with depression that tested the effectiveness of a psychosocial therapy—cognitive behavioral therapy—with and without an additional educational component for parents.
NIMH and NICHD also funded translational research, which examines treatments in real-world settings and identifies effective strategies for implementing them in communities. For example, NICHD sponsored a research project that evaluated the effectiveness of a psychosocial therapy for girls with post-traumatic stress disorder and concurrent substance abuse disorders who were involved with the juvenile justice system. The study examined ways to adapt the treatment for this population, and the researchers planned to use the study as a model to adapt other EBTs for children involved in the juvenile justice and child welfare systems. The data NIMH and NICHD provided did not identify which projects were translational, so it was not possible to determine the total number of projects or how much was spent on this research. Together, FDA, AHRQ, and CDC spent about $16 million on research projects on children’s mental health in fiscal years 2008 through 2011. FDA spent about $4.5 million on four external research projects on the safety and effectiveness of psychotropic medications for children. For example, FDA cosponsored a study with AHRQ that examined serious cardiovascular events such as heart attack and stroke among children taking ADHD medications. In addition, at the end of fiscal year 2011, FDA awarded a contract for about $6 million for a study on antipsychotic medications and the risk of type 2 diabetes in children; this study is currently under way. AHRQ spent about $8.6 million to fund five external research projects related to children’s mental health in fiscal years 2008 through 2011. Consistent with the agency’s focus on improving outcomes by encouraging the use of evidence to make health care decisions, two of the five research projects AHRQ supported compared outcomes of using different mental health treatments.
For example, one AHRQ-sponsored project compared outcomes of using various antipsychotic medications to treat children with ADHD who were enrolled in Medicaid, including children in foster care. Child mental health experts we spoke with highlighted the need for more research that compares different psychotropic medications. For example, one federal official noted that all antidepressant medications bear the same “black box” warning describing the potential for increases in suicidal thinking in children, and there is limited information about whether any antidepressants are safer or more effective for children than other antidepressants. CDC spent about $2.7 million on two external research projects during fiscal years 2008 through 2011, one on ADHD and the other on Tourette Syndrome and other tic disorders. The ADHD project examined topics including the prevalence of ADHD, the prevalence of ADHD in combination with other mental health conditions, and children’s utilization of psychosocial and medication treatments. The project on tic disorders also looked at children’s utilization of different kinds of treatments, as well as their school performance, social relationships, and factors associated with poorer functioning. Both projects were designed to include a diverse sample of children with the aim of detecting differences due to sex or race/ethnicity. For example, the ADHD study oversampled girls and included large samples of Black, Latino, and American Indian children. Mental health conditions such as ADHD, depression, and bipolar disorder can have debilitating effects on a child’s life, and early detection and treatment of childhood mental health conditions can improve children’s outcomes into the future. While some children receive highly effective mental health treatments, our findings suggest that many children, including some in Medicaid and in foster care, may not be receiving appropriate treatment.
Our analysis of national survey data indicates that concerns raised by providers, children’s advocates, and others about potentially inappropriate prescribing of psychotropic drugs for some children and a lack of needed mental health services for some children may be warranted. Antipsychotic medications are of particular concern in light of the very serious side effects they can have for children, the limited understanding of their long-term effects, and their high cost. Children in Medicaid and in foster care are prescribed these medications at higher rates than other children. Furthermore, most children in Medicaid and many children in foster care with potential mental health needs do not receive mental health services that could help them. Differing rates of medication use among the groups of children do not necessarily represent a problem, because potential factors beyond the scope of this review could help explain the differences. Nonetheless, these findings do suggest that the recent federal and state initiatives to improve monitoring and oversight are appropriate, and that continued assessment of the prescribing of psychotropic medications to vulnerable populations and of the receipt of mental health services is important. Initiatives such as CMS’s National EPSDT Improvement Workgroup and plans to help states improve their monitoring efforts, as well as the ACF-led interagency workgroup on the use of psychotropic medications for foster children, have the potential to help ensure that vulnerable children in Medicaid and foster care have access to effective mental health treatments that are appropriate for their mental health conditions. We recommended in 2011 that CMS identify options to collect information from states on whether children in Medicaid receive the services for which they are referred through their EPSDT-mandated screenings and check-ups, which would include referrals for specialty care such as mental health services.
CMS has begun to explore options to address this recommendation, but it has not yet identified ways to systematically collect such data from state Medicaid programs. We continue to believe this recommendation is valid, and our findings in this report underscore the importance of HHS’s monitoring the receipt of needed referral and treatment services, including those for mental health, by children in Medicaid. HHS did not comment on our findings but provided technical comments, which we incorporated as appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-7114 or iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. To provide information on the use of psychotropic medications and other mental health services by children covered by Medicaid, the Children’s Health Insurance Program (CHIP), and private insurance, we analyzed nationwide data from the Medical Expenditure Panel Survey (MEPS) from 2007 through 2009. MEPS is a nationally representative survey of families, medical providers, and employers across the United States administered by the Agency for Healthcare Research and Quality (AHRQ). MEPS collects self-reported information on individuals’ demographics, health and insurance status, and use of medical services by setting and provider type, among other things. 
Specifically, we examined the use of psychotropic medications and mental health-related office visits, therapy visits, and behavioral impairment. We pooled 3 years of data to obtain a sample size large enough to conduct our analyses, and reported average annual estimates. To conduct our analysis, we constructed two cohorts of children who were insured for at least 10 months in 2007, 2008, or 2009—those who were enrolled in Medicaid or CHIP and those covered by private insurance. We included children ages 0 through 20 and excluded children in foster care from our analysis. In total, the sample Medicaid/CHIP cohort had 11,224 children, and the sample private insurance cohort had 11,639 children. The Food and Drug Administration (FDA) is responsible for reviewing the results of pediatric studies conducted under two related laws—the Best Pharmaceuticals for Children Act (BPCA) and the Pediatric Research Equity Act (PREA). BPCA is a program under which manufacturers may receive an additional 6 months of market exclusivity in exchange for conducting pediatric studies. Under PREA, subject to certain exceptions, drug manufacturers must conduct pediatric studies before a drug can be marketed. FDA is responsible for reviewing the results of pediatric studies submitted under BPCA and PREA and ensuring manufacturers incorporate new information on pediatric use into medication labels when required. During fiscal years 2008 through 2011, 168 label changes resulted from studies conducted under these laws, 16 of which were for psychotropic medications. For example, the label for Invega, an atypical antipsychotic medication, was changed to indicate that it is safe and effective for the treatment of schizophrenia in children as young as age 12. (See table 6 for information on all 16 label changes.)
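The MEPS cohort-construction rules described above (pooled survey years, children ages 0 through 20, insured for at least 10 months in the year, foster children excluded) could be sketched, hypothetically, as follows; the field names and sample records are illustrative, not actual MEPS variables:

```python
# Hypothetical sketch of the cohort-construction rules described above.
# Field names and sample records are illustrative, not actual MEPS variables.

def build_cohorts(records, min_insured_months=10, max_age=20):
    """Split pooled person-year records into Medicaid/CHIP and private cohorts."""
    medicaid_chip, private = [], []
    for r in records:
        if r["age"] > max_age:            # children ages 0 through 20 only
            continue
        if r["in_foster_care"]:           # foster children excluded
            continue
        if r["months_insured"] < min_insured_months:
            continue                      # insured for at least 10 months in the year
        if r["coverage"] in ("medicaid", "chip"):
            medicaid_chip.append(r)
        elif r["coverage"] == "private":
            private.append(r)
    return medicaid_chip, private

# Toy records pooled across three survey years
records = [
    {"age": 8,  "in_foster_care": False, "months_insured": 12, "coverage": "medicaid"},
    {"age": 15, "in_foster_care": False, "months_insured": 11, "coverage": "private"},
    {"age": 6,  "in_foster_care": True,  "months_insured": 12, "coverage": "medicaid"},
    {"age": 22, "in_foster_care": False, "months_insured": 12, "coverage": "private"},
    {"age": 3,  "in_foster_care": False, "months_insured": 6,  "coverage": "chip"},
]

medicaid_cohort, private_cohort = build_cohorts(records)
print(len(medicaid_cohort), len(private_cohort))  # 1 1
```

In this toy pool, only the first two records satisfy every rule, so each cohort contains one child; the actual analysis applied the same filters to the full MEPS person-year files.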
In addition to reviewing label changes related to BPCA and PREA, in 2009 FDA began working with manufacturers to change the labels of all atypical antipsychotics to more clearly present data on their metabolic risks for children, which an FDA official said will help prescribers in selecting medications for their patients. Tables 7 and 8 provide estimates of children’s utilization of psychotropic medications and other mental health services, based on our analysis of MEPS data.

In addition to the contact named above, Helene F. Toiv, Assistant Director; Laura Brogan; Britt Carlson; Sandra George; Giselle Hicks; Hannah Locke; Roseanne Price; and Hemi Tewarson made key contributions to this report.

Foster Children: HHS Guidance Could Help States Improve Oversight of Psychotropic Prescriptions. GAO-12-201. Washington, D.C.: December 14, 2011.

Foster Children: HHS Guidance Could Help States Improve Oversight of Psychotropic Prescriptions. GAO-12-270T. Washington, D.C.: December 1, 2011.

Medicaid and CHIP: Most Physicians Serve Covered Children but Have Difficulty Referring Them for Specialty Care. GAO-11-624. Washington, D.C.: June 30, 2011.

Pediatric Research: Products Studied under Two Related Laws, but Improved Tracking Needed by FDA. GAO-11-457. Washington, D.C.: May 31, 2011.

Medicaid and CHIP: Reports for Monitoring Children’s Health Care Services Need Improvement. GAO-11-293R. Washington, D.C.: April 5, 2011.

Foster Care: State Practices for Assessing Health Needs, Facilitating Service Delivery, and Monitoring Children’s Care. GAO-09-26. Washington, D.C.: February 6, 2009.

Prescription Drugs: FDA’s Oversight of the Promotion of Drugs for Off-Label Uses. GAO-08-835. Washington, D.C.: July 28, 2008.
Experts have concerns that children with mental health conditions do not always receive appropriate treatment, including concerns about appropriate use of psychotropic medications (which affect mood, thought, or behavior) and about access to psychosocial therapies (sessions with a mental health provider). These concerns may be compounded for low-income children in Medicaid and children in foster care (most of whom are covered by Medicaid)--populations who may be at higher risk of mental health conditions. Within HHS, CMS oversees Medicaid, and ACF supports state child welfare agencies that coordinate health care for foster children. GAO was asked to provide information on children's mental health. This report examines (1) the use of psychotropic medications and other mental health services for children in Medicaid nationwide, and related CMS initiatives; (2) HHS information on the use of psychotropic medications and other mental health services for children in foster care nationwide, and related HHS initiatives; and (3) the amount HHS has invested in research on children's mental health. GAO analyzed data from HHS's MEPS --a national household survey on use of medical services--from 2007 through 2009 for children covered by Medicaid and private insurance. GAO reviewed two recent ACF foster care reports with data from a national survey conducted during 2008 through 2011. GAO analyzed data from HHS agencies that conduct or fund research and interviewed HHS officials and children's mental health providers, researchers, and advocates. An annual average of 6.2 percent of noninstitutionalized children in Medicaid nationwide and 4.8 percent of privately insured children took one or more psychotropic medications, according to GAO's analysis of 2007-2009 data from the Department of Health and Human Services' (HHS) Medical Expenditure Panel Survey (MEPS). 
MEPS data also showed that children in Medicaid took antipsychotic medications (a type of psychotropic medication that can help some children but has a risk of serious side effects) at a relatively low rate--1.3 percent of children--but that the rate for children in Medicaid was over twice the rate for privately insured children, which was 0.5 percent. In addition, MEPS data showed that most children whose emotions or behavior, as reported by their parent or guardian, indicated a potential need for a mental health service did not receive any services within the same year. The Centers for Medicare & Medicaid Services (CMS) and many states have initiatives under way to help ensure that children receive appropriate mental health treatments. However, CMS's ability to monitor children's receipt of mental health services is limited because CMS does not collect information from states on whether children in Medicaid have received services for which they were referred. GAO recommended in 2011 that CMS identify options for collecting such data from state Medicaid programs. Findings in this report underscore the continued importance of CMS's monitoring of children's receipt of mental health services. HHS's Administration for Children and Families (ACF) reported that 18 percent of foster children were taking psychotropic medications at the time they were surveyed, although utilization varied widely by the child's living arrangement. ACF also reported that 30 percent of foster children who may have needed mental health services did not receive them in the previous 12 months. HHS agencies are taking steps to promote appropriate mental health treatments for foster children, such as by sending information to states on psychotropic medication oversight practices. HHS's National Institutes of Health spent an estimated $1.2 billion on over 1,200 children's mental health research projects during fiscal years 2008 through 2011. 
Most of the funding--$956 million--was awarded by the National Institute of Mental Health, with more research projects studying psychosocial therapies than psychotropic medications. Other HHS agencies spent about $16 million combined on children's mental health research during this period. HHS reviewed a draft of this report and provided technical comments, which GAO incorporated as appropriate.
To accomplish its mission of protecting federal facilities, FPS has become increasingly reliant on its guard force. As of June 2009, FPS’s guard program has cost $613 million and represents the single largest item in its fiscal year 2009 budget. While the contractor has the primary responsibility for training and ensuring that the guards have met certification requirements, FPS is responsible for oversight of the guards and relies on about 930 law enforcement personnel located in its 11 regions to inspect guard posts and verify that training, certifications, and timecards are accurate. Figure 1 shows the location of FPS’s 11 regions and the number of guards and federal facilities with guards in each of these regions. Some of the key responsibilities of FPS’s guards include controlling access; enforcing property rules and regulations; detecting and reporting criminal acts; and responding to emergency situations involving the safety and security of the facility. Guards may only detain, not arrest, an individual, and their authority typically does not extend beyond the facility. Before being assigned to a post or an area of responsibility at a federal facility, FPS requires that all guards undergo background suitability checks and complete approximately 128 hours of training provided by the contractor or FPS, including 8 hours of x-ray and magnetometer training. Guards must also pass an FPS-administered written examination and possess the necessary certificates, licenses, and permits as required by the contract. Table 1 shows the training and certifications that FPS requires its guards to (1) obtain before standing post and (2) maintain during the course of their employment. FPS also requires its guards to complete 40 hours of refresher training every 2 to 3 years depending on the terms of the contract. In addition to FPS’s requirements, some states require that guards obtain additional training and certifications. 
FPS currently has contracts with 67 private companies for guard services. These contractors are responsible for providing and maintaining all guard services as described in the contract statement of work, including management, supervision, training, equipment, supplies and licensing. FPS is also required to actively monitor and verify the contractors’ performance and ensure that the terms of the contract are met. FPS does not fully ensure that its guards have the training and certifications required to be deployed to a federal facility. While FPS requires that all prospective guards complete approximately 128 hours of training, including 8 hours of x-ray and magnetometer training, it was not providing some of its guards with all of the required training in the six regions we visited. For example, in one region, FPS has not provided the required 8 hours of x-ray or magnetometer training to its 1,500 guards since 2004. X-ray and magnetometer training is important because the majority of the guards are primarily responsible for using this equipment to monitor and control access points at federal facilities. Controlling access and egress to a facility helps ensure that only authorized personnel, vehicles, and materials are allowed to enter, move within, and leave the facility. According to FPS officials, the 1,500 guards were not provided the required x-ray or magnetometer training because the region does not have employees who are qualified or who have the time to conduct the training. Nonetheless, these guards continue to control access points at federal facilities in this region. In the absence of the x-ray and magnetometer training, one contractor in the region said that it is relying on veteran guards who have experience operating these machines to provide some “on-the-job” training to new guards.
Moreover, in the other five regions we visited where FPS is providing the x-ray and magnetometer training, some guards told us that they believe the training, which is computer based, is insufficient because it is not conducted on the actual equipment located at the federal facility. Lapses and weaknesses in FPS’s x-ray and magnetometer training have contributed to several incidents at federal facilities in which the guards were negligent in carrying out their responsibilities. For example, at a level IV federal facility in a major metropolitan area, an infant in a carrier was sent through the x-ray machine. Specifically, according to an FPS official in that region, a woman with her infant in a carrier attempted to enter the facility, which has child care services. While retrieving her identification, the woman placed the carrier on the x-ray machine. Because the guard was not paying attention and the machine’s safety features had been disabled, the infant in the carrier was sent through the x-ray machine. FPS investigated the incident and dismissed the guard. However, the guard subsequently sued FPS for not providing the required x-ray training. The guard won the suit because FPS could not produce any documentation to show that the guard had received the training, according to an FPS official. In addition, FPS officials from that region could not tell us whether the x-ray machine’s safety features had been repaired. We also found that some guards were not provided building-specific training, such as what actions to take during a building evacuation or a building emergency. This lack of training may have contributed to several incidents where guards neglected their assigned responsibilities.
For example, at a level IV facility, the guards did not follow evacuation procedures and left two access points unattended, thereby leaving the facility vulnerable; at a different level IV facility, the guard allowed employees to enter the building while an incident involving suspicious packages was being investigated; and, at a level III facility, the guard allowed employees to access the area affected by a suspicious package, which was required to be evacuated. In addition to insufficient building-specific training, some guards said they did not receive scenario-based training and thus were not sure what they should do in certain situations. During our site visits to 6 FPS regions, we interviewed over 50 guards and presented them with an incident that occurred at a federal facility in 2008. Specifically, we asked the guards whether they would assist an FPS inspector chasing an individual escaping a federal facility in handcuffs. The guards’ responses varied. Some guards stated that they would assist the FPS inspector and apprehend the individual, while others stated that they would likely do nothing and stay at their post because they feared being fired for leaving their post. Some guards also told us that they would not intervene because of the threat of a liability suit for use of force and did not want to risk losing their job. The guards’ different responses suggest that more scenario-based training may be needed. FPS’s primary system—CERTS—for monitoring and verifying whether guards have the training and certifications required to stand post at federal facilities is not fully reliable. We reviewed training and certification data for 663 randomly selected guards in 6 of FPS’s 11 regions maintained either in CERTS, which is the agency’s primary system for tracking guard training and certifications, databases maintained by some of FPS’s regions, or contractor information.
We found that 62 percent, or 411, of the 663 guards who were deployed to a federal facility had at least one expired certification, including, for example, firearms qualification, background investigation, domestic violence declaration, or CPR/First Aid training certification. More specifically, according to the most recent information from a contractor, we found that over 75 percent of the 354 guards at one level IV facility had expired certifications, or the contractor had no record of the training. Based on the contractor information for another contract, we also found that almost 40 percent of the 191 guards at another level IV facility had expired domestic violence declarations. Without domestic violence declaration certificates, guards are not permitted to carry a firearm. FPS requires its guards to carry weapons in most cases. Moreover, five of the six regions we visited did not have current information on guard training and certifications. According to FPS officials in these five regions, updating CERTS is time-consuming, and they do not have the resources needed to keep up with the thousands of paper files. Consequently, these five regions were not generally relying on CERTS and instead were relying on the contractor to self-report training and certification information about its guards. In addition, not having a fully reliable system to better track whether training has occurred may have contributed to a situation in which a contractor allegedly falsified training records. In 2007, FPS was not aware that a contractor who was responsible for providing guard service at several level IV facilities in a major metropolitan area had allegedly falsified training records until it was notified by an employee of the company.
According to FPS’s affidavit, the contractor allegedly repeatedly self-certified to FPS that its guards had satisfied CPR and First Aid training, as well as the contractually required biannual recertification training, although the contractor knew that the guards had not completed the required training and were not qualified to stand post at federal facilities. Also according to the affidavit, in exchange for a $100 bribe, contractor officials provided a security guard with certificates of completion for CPR and First Aid. The case is currently being litigated in U.S. District Court.

FPS has limited assurance that its 13,000 guards are complying with post orders. FPS does not have specific national guidance on when and how guard inspections should be performed, and its inspections of guard posts at federal facilities are inconsistent, with the quality and rigor of inspections varying across regions. At each guard post, FPS maintains a book, referred to as post orders, that describes the duties guards are to perform while on duty. However, we found that in one region some of the post orders were not current and dated back to 2002, when FPS was part of GSA. In addition, the frequency with which FPS inspects these posts varied. For example, one region we visited required its inspectors to complete 5 guard inspections each month, while another region we visited did not have any inspection requirements. According to regional staff, there is no requirement that every guard post be inspected each month; rather, inspectors are required to complete 5 inspections per month, which leads to some guard posts being inspected multiple times per month and some not being inspected at all. For example, while we were observing guard inspections in this region, one guard told us she had been inspected twice that week. In contrast, according to FPS officials, guards assigned to posts at federal facilities in remote locations or during the night shift are rarely inspected.
During our site visits, we also found that the quality of FPS’s guard inspections varied. According to FPS’s procedures for conducting guard inspections, FPS should inspect the guard’s uniform and equipment, knowledge of post orders, and ID and certification cards. An inspector in one region performed a more thorough inspection than other inspectors: the inspection covered guard certifications, knowledge of post orders, a uniform and equipment check, an inspection of the post station, and timecards. The inspector also asked the guard a number of scenario-based questions, asked whether the guard had any questions or concerns, and documented the results immediately following the inspection. Conversely, in a different FPS region we visited, the FPS inspector asked the guard if all his certifications and training were current but never physically inspected the guard’s certifications or asked any scenario-based questions. During another inspection we observed, an inspector in a third region performed a uniform and equipment check but did not ask for any certifications. We also found in the 6 regions we visited that guard inspections are typically completed by FPS during regular business hours and in cities where FPS has a field office. In most FPS regions, FPS is on duty only during regular business hours, and according to FPS, inspectors are not authorized overtime to perform guard inspections during night shifts or on weekends. However, on the few occasions when inspectors completed guard inspections at night or on their own time, FPS found instances of guards not complying with post orders. For example, as shown in figure 2, at a level IV facility, an armed guard was found asleep at his post during the night shift after taking the prescription painkiller Percocet.
FPS’s guard manual states that guards are not permitted to sleep on post or to use any drugs, prescription or non-prescription, that may impair their ability to perform their duties. FPS’s post orders also describe a number of things guards are prohibited from doing while on post. For example, guards are prohibited from sleeping, using government property such as computers, and test firing a weapon anywhere other than a range course. However, FPS has found incidents at level IV facilities where guards were not in compliance with post orders. Some examples follow.

A guard was caught using government computers, while he was supposed to be standing post, to further his private for-profit adult website.

A guard attached a motion sensor to a pole at the entrance to a federal facility garage to alert him whenever a person was approaching his post. Another law enforcement agency discovered the device and reported it to FPS.

A guard, during regular business hours, accidentally fired his firearm in a restroom while practicing drawing his weapon.

A guard failed to recognize or did not properly x-ray a box containing semi-automatic handguns at the loading dock at one federal facility we visited. FPS became aware of the situation only because the handguns were delivered to FPS.

While the guards were fired or disciplined in each of these incidents, the incidents illustrate that FPS is able to identify some instances where guards are not complying with post orders, and why it should improve its oversight of the guard program. We identified substantial security vulnerabilities related to FPS’s guard program. In April and May 2009, each time they tried, our investigators successfully passed undetected through security checkpoints monitored by FPS’s guards, with the components for an improvised explosive device (IED) concealed on their persons, at 10 level IV facilities in four cities in major metropolitan areas.
The specific components of this device, the items used to conceal them, and the methods of concealment we used during our covert testing are classified and thus are not discussed in this testimony. Of the 10 level IV facilities we penetrated, 8 were government owned and 2 were leased. The facilities included field offices of a U.S. Senator and a U.S. Representative, as well as agencies of the Departments of Homeland Security, Transportation, Health and Human Services, Justice, and State, among others. The two leased facilities did not have any guards at the access control point at the time of our testing. Using publicly available information, our investigators identified a type of device that a terrorist could use to damage a federal facility and threaten the safety of federal workers and the general public. The device was an IED made up of two parts—a liquid explosive and a low-yield detonator—and included a variety of materials not typically brought into a federal facility by employees or the public. Although the detonator alone could function as an IED, our investigators determined that it could also be used to set off a liquid explosive and cause significantly more damage. To ensure safety during this testing, we took precautions so that the IED would not explode; for example, we lowered the concentration level of the material. To gain entry into each of the 10 level IV facilities, our investigators showed photo identification (a state driver’s license) and walked through the magnetometer machines without incident. The investigators also placed their briefcases containing the IED material on the conveyor belt of the x-ray machine, but the guards detected nothing. Furthermore, our investigators did not receive any secondary searches from the guards, which might have revealed the IED material we brought into the facilities.
At security checkpoints at 3 of the 10 facilities, our investigators noticed that the guard was not looking at the x-ray screen as some of the IED components passed through the machine. At one of the 10 facilities, a guard questioned an item in the briefcase, but the materials were nonetheless allowed through the x-ray machine. At each facility, once past the guard screening checkpoint, our investigators proceeded to a restroom and assembled the IED. At some of the facilities the restrooms were locked; our investigators gained access by asking employees to let them in. With the IED fully assembled in a briefcase, our investigators walked freely around several floors of the facilities and into various executive and legislative branch offices, as described above. Because of the sensitivity of our review, we have already briefed FPS and GSA on the results of our covert testing at the 10 level IV facilities and on other preliminary findings regarding the guard program. FPS subsequently identified and began taking several actions in response to our findings. According to FPS officials, the agency recently authorized the use of overtime to monitor guards during non-routine business hours and is requiring penetration tests to identify weaknesses at access control guard posts. FPS has conducted limited intrusion testing in the past and has experienced difficulty in executing such tests. For example, in 2008, one FPS region conducted an intrusion test at a level IV facility and successfully brought a “fake bomb” into the building through a loading area. During the test, FPS agents misplaced the box containing the “fake bomb”; it was picked up by a guard, who took it to the mail room for processing and panicked upon opening it. After this incident, the intrusion testing program in that region was cancelled, according to FPS officials in that region.
FPS has also accelerated the implementation of a new directive designed to clarify organizational responsibilities for conducting and reporting the results of inspections and evaluations. For example, under the March 2009 directive, FPS is planning to inspect 2 guard posts a week at a level IV facility. Prior to the new directive, FPS did not have a national requirement for when to conduct inspections at federal facilities, and requirements in the regions we visited ranged from no inspection requirement at all to 5 inspections per inspector per month. Meeting the new requirements may be challenging, according to FPS management and regional staff we contacted. FPS management in several regions we visited told us that the new directive appears to be based primarily on what works well from a headquarters or National Capital Region perspective, not a regional perspective that reflects local conditions and limitations in staffing resources. An FPS official in one region also said the region is not adequately staffed to complete all the mission-essential tasks currently required, and another FPS official in that region does not believe the region will be able to conduct the additional inspections required by the new policy. Finally, according to the Director of FPS, while having more resources would help address the weaknesses in the guard program, additional personnel would have to be trained and thus could not be deployed immediately. On June 5, 2009, we provided FPS a detailed briefing on our preliminary findings. We also provided FPS with a draft of this testimony; FPS provided no comments. We plan to provide this Committee with our complete evaluation and a final report on FPS’s oversight of its guard program in September 2009. This concludes our testimony. We are pleased to answer any questions you might have.
For further information on this testimony, please contact Mark Goldstein at 202-512-2834 or by email at goldsteinm@gao.gov. Individuals making key contributions to this testimony include Jonathan Carver, Tammy Conquest, John Cooney, Colin Fallon, Daniel Hoy, George Ogilvie, Susan Michal-Smith, and Ramon Rodriguez. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To accomplish its mission of protecting about 9,000 federal facilities, the Federal Protective Service (FPS) currently has a budget of about $1 billion, about 1,200 full-time employees, and about 13,000 contract security guards. This testimony discusses GAO's preliminary findings on (1) the extent to which FPS ensures that its guards have the required training and certifications before being deployed to a federal facility, (2) the extent to which FPS ensures that its guards comply with their assigned responsibilities (post orders) once they are deployed at federal facilities, and (3) security vulnerabilities GAO recently identified related to FPS's guard program. To address these objectives, GAO conducted site visits at 6 of FPS's 11 regions; interviewed numerous FPS officials, guards, and contractors; and analyzed FPS's policies and data. GAO also conducted covert testing at 10 judgmentally selected level IV facilities in four cities. A level IV facility has over 450 employees and a high volume of public contact. FPS does not fully ensure that its contract security guards have the training and certifications required to be deployed to a federal facility. FPS requires all prospective guards to complete about 128 hours of training, including 8 hours of x-ray and magnetometer training. However, in one region, FPS has not provided x-ray or magnetometer training to its 1,500 guards since 2004. Nonetheless, these guards are assigned to posts at federal facilities. X-ray training is critical because guards control access points at facilities. Insufficient x-ray and magnetometer training may have contributed to several incidents in which guards were negligent in carrying out their responsibilities. For example, at a level IV facility, an infant in a carrier was sent through an x-ray machine due to a guard's negligence. Moreover, GAO found that FPS does not have a fully reliable system for monitoring and verifying guard training and certification requirements.
GAO reviewed 663 randomly selected guard records and found that 62 percent of the guards had at least one expired certification, including expired declarations that guards have not been convicted of domestic violence, without which guards are ineligible to carry firearms. FPS has limited assurance that its guards are complying with post orders. FPS does not have specific national guidance on when and how guard inspections should be performed. FPS's inspections of guard posts at federal facilities are inconsistent, and their quality varied in the six regions GAO visited. GAO also found that guard inspections are typically completed by FPS during regular business hours and in locations where FPS has a field office, and seldom at night or on weekends. However, on an occasion when FPS did conduct a post inspection at night, it found a guard asleep at his post after taking the prescription painkiller Percocet. FPS has also found other incidents at level IV facilities where guards neglected or inadequately performed their assigned responsibilities. For example, a guard failed to recognize or did not properly x-ray a box containing handguns at the loading dock of a facility; FPS became aware of the situation only because the handguns were delivered to FPS. GAO identified substantial security vulnerabilities related to FPS's guard program. GAO investigators carrying the components for an improvised explosive device successfully passed undetected through security checkpoints monitored by FPS's guards at each of the 10 level IV federal facilities where GAO conducted covert testing. Of the 10 facilities GAO penetrated, 8 were government owned and 2 were leased; they included offices of a U.S. Senator and a U.S. Representative, as well as agencies such as the Departments of Homeland Security, State, and Justice.
Once GAO investigators passed the access control points, they assembled the explosive device and walked freely around several floors of these level IV facilities with the device in a briefcase. In response to GAO's briefing on these findings, FPS has recently taken some actions, including increasing the frequency of intrusion testing and guard inspections. However, implementing these changes may be challenging, according to FPS.
In our May 2004 report on federal data mining efforts, we defined data mining as the application of database technology and techniques—such as statistical analysis and modeling—to uncover hidden patterns and subtle relationships in data and to infer rules that allow for the prediction of future results. We based this definition on the most commonly used terms found in a survey of the technical literature. For the purposes of this report, we are using the same definition. Data mining has been used successfully for a number of years in the private and public sectors in a broad range of applications. In the private sector, these applications include customer relationship management, market research, retail and supply chain analysis, medical analysis and diagnostics, financial analysis, and fraud detection. In the government, data mining was initially used to detect financial fraud and abuse. For example, we used data mining techniques in our prior reviews of federal government purchase and credit card programs. Following the terrorist attacks of September 11, 2001, data mining has been used increasingly as a tool to help detect terrorist threats through the collection and analysis of public and private sector data. Its use has also expanded to other purposes. In our May 2004 report, we identified several uses of federal data mining efforts. The most common were improving service or performance; detecting fraud, waste, and abuse; analyzing scientific and research information; detecting criminal activities or patterns; and analyzing intelligence and detecting terrorist activities. While the characteristics of each data mining effort can vary greatly, data mining generally incorporates three processes: data input, data analysis, and results output. In data input, data are collected in a central data warehouse, validated, and formatted for use in data mining. In the data analysis phase, data are typically searched through a query. 
The two most common types of queries are pattern-based queries and subject-based queries. Pattern-based queries search for data elements that match or depart from a predetermined pattern (e.g., unusual claim patterns in an insurance program). Subject-based queries search for any available information on a predetermined subject using a specific identifier. This could be personal information such as an individual identifier (e.g., a Social Security number or the name of a person) or the identifier of a specific thing. For example, the Navy uses subject-based data mining to identify trends in the failure rate of parts used in its ships. The data analysis phase can be iterative, with the results of one query being used to define criteria for a subsequent query. The output phase can produce results in printed or electronic format. These reports can be accessed by agency personnel, and can also be shared with other personnel from other agencies. Figure 1 depicts a generic data mining process. The impact of computer systems on the ability of organizations to protect personal information was recognized as early as 1973, when a federal advisory committee on automated personal data systems observed that “The computer enables organizations to enlarge their data processing capacity substantially, while greatly facilitating access to recorded data, both within organizations and across boundaries that separate them.” In addition, the committee concluded that “The net effect of computerization is that it is becoming much easier for record-keeping systems to affect people than for people to affect record-keeping systems.” More recently, the federal government’s increased use of data mining has raised public and congressional concerns. A December 2003 report by a task force on information sharing and analysis in homeland security noted that agencies at all levels of government are now interested in collecting and mining large amounts of data from commercial sources. 
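As a purely illustrative sketch, the pattern-based and subject-based queries described above might be expressed as follows. The data, field names, and thresholds are invented for the example and are not drawn from any agency system.

```python
# Toy dataset of insurance claim records (entirely fictitious).
claims = [
    {"ssn": "000-00-0001", "name": "Alice", "claims_this_year": 1, "claim_total": 2_000},
    {"ssn": "000-00-0002", "name": "Bob",   "claims_this_year": 7, "claim_total": 95_000},
    {"ssn": "000-00-0003", "name": "Carol", "claims_this_year": 2, "claim_total": 4_500},
]

def pattern_query(records, max_claims=5, max_total=50_000):
    """Pattern-based: find records that depart from a predetermined pattern
    (here, an unusually high number or dollar value of claims)."""
    return [r for r in records
            if r["claims_this_year"] > max_claims or r["claim_total"] > max_total]

def subject_query(records, ssn):
    """Subject-based: pull all available information on one predetermined
    subject, identified by a specific identifier."""
    return [r for r in records if r["ssn"] == ssn]

anomalous = pattern_query(claims)            # records matching the anomaly pattern
one_subject = subject_query(claims, "000-00-0002")  # everything on one identifier
```

As the report notes, the analysis phase can be iterative: the output of a pattern query could define the identifiers fed into subsequent subject queries.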
The report noted that agencies may use such data not only for investigations of specific individuals, but also to perform large-scale data analysis and pattern discovery in order to discern potential terrorist activity by unknown individuals. As we noted in our May 2004 report, mining government and private databases containing personal information creates a range of privacy concerns. Through data mining, agencies can quickly and efficiently obtain information on individuals or groups by exploiting large databases containing personal information aggregated from public and private records. Information can be developed about a specific individual or a group of individuals whose behavior or characteristics fit a specific pattern. The ease with which organizations can use automated systems to gather and analyze large amounts of previously isolated information raises concerns about the impact on personal privacy. Before data aggregation and data mining came into use, personal information contained in paper records stored at widely dispersed locations, such as courthouses or other government offices, was relatively difficult to gather and analyze. The 1973 federal advisory committee recommended that the federal government adopt a set of fair information practices to address what it termed a poor level of protection afforded to privacy under contemporary law. These practices formed the basis of the main federal privacy law, the Privacy Act of 1974. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The act describes “records” as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his name or another personal identifier. It also describes systems of records as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. 
The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public by a notice in the Federal Register identifying the type of data collected, the types of individuals that information is collected about, the intended routine uses of the data, and procedures that individuals can use to review personal information. The Federal Information Security Management Act of 2002 (FISMA) also addresses the protection of personal information. FISMA defines federal requirements for securing information and information systems that support federal agency operations and assets; it requires agencies to develop agencywide information security programs that extend to contractors and other providers of federal data and systems. Under FISMA, information security includes protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction, including controls for confidentiality—that is, those controls necessary to preserve authorized restrictions on access and disclosure to protect personal privacy.

In addition, the E-Government Act of 2002 requires agencies to conduct privacy impact assessments. Guidance from OMB describes such an assessment as “an analysis of how information is handled: (i) to ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (ii) to determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (iii) to examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks.” Agencies must conduct a privacy impact assessment (1) before developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form or (2) before initiating any new electronic data collection containing personal information on 10 or more individuals.
Among other actions that should require a privacy assessment, according to guidance from OMB, is significant merging of information in databases, for example, in a linking that “may aggregate data in ways that create privacy concerns not previously at issue” or “when agencies systematically incorporate into existing information systems databases of information in identifiable form purchased or obtained from commercial or public sources.” These laws, along with OMB guidance that outlines how agencies are to comply with the laws, lay out a series of steps that agencies should take to protect the privacy of personal information. Each of the steps includes detailed procedures agencies are to follow to fully implement the requirements. Table 1 lists the key steps, with examples of the procedures agencies are to use to address the step, and the primary statutory source for the protections. While the federal laws and guidance previously outlined provide a wide range of privacy protections, agencies are allowed to claim exemptions from some of these provisions if the records are used for certain purposes. For example, records compiled for criminal law enforcement purposes can be exempt from a number of provisions of the Privacy Act, including the requirement to notify individuals of the purposes and uses of the information at the time of collection and the requirement to ensure the accuracy, relevance, timeliness, and completeness of records. A broader category of investigative records compiled for criminal or civil law enforcement purposes can also be exempted from a somewhat smaller number of Privacy Act provisions, including the requirement to provide individuals with access to their records and to inform the public of the categories of sources of records. 
In general, the exemptions for law enforcement purposes are intended to prevent the disclosure of information collected as part of an ongoing investigation that could impair the investigation or allow those under investigation to change their behavior or take other actions to escape prosecution. The Privacy Act allows, but does not require, agencies to claim an exemption for certain designated purposes. If the agency decides to claim an exemption, the act requires the agencies to do so through a rule that provides the reason behind its decision. Table 2 shows provisions of the Privacy Act from which systems of records used for law enforcement may be exempt. Similarly, the requirement to conduct a privacy impact assessment does not apply to all systems. For example, no assessment is required when the information collected relates to internal government operations, the information has been previously assessed under an evaluation similar to a privacy impact assessment, or when privacy issues are unchanged. Nonetheless, OMB encourages agencies to conduct privacy impact assessments on systems that contain personal information in identifiable form about government personnel, when appropriate. In addition, individual agencies have adopted policies that require assessments for all systems, including those used for government operations. In June 2003, we reported on our assessment of agencies’ compliance with the Privacy Act and related OMB guidance. At that time, we determined that the agencies’ compliance was high in many areas, but uneven across the federal government. Agency officials attributed the areas of noncompliance in part to a need for more leadership and guidance from OMB. 
In our report, we recommended that the Director, OMB, take a number of steps aimed at improving agencies’ compliance with the Privacy Act, including overseeing and monitoring agencies’ actions, assessing the need for additional guidance to agencies, and raising agency awareness of the importance of the act. In response, OMB established an Interagency Privacy Committee to discuss privacy issues and issued updated guidance. However, it has not addressed our other recommendations: to work with agencies to ensure that they address the areas of noncompliance we identified; institute a governmentwide effort to determine the level of resources needed to fully implement the Privacy Act; and develop a plan to address identified gaps in resources devoted to protecting privacy. The data mining efforts that we reviewed have a variety of purposes, uses, and outputs. For example, the efforts are used for program management, law enforcement, and analyzing intelligence. The efforts fulfill these purposes through a mix of subject-based and pattern-based queries, as previously defined, and result in reports that are used by program officials or shared with others. A detailed summary of each of the efforts we reviewed is included in appendixes II through VI. A short summary of the purpose and characteristics of each of the efforts is included here. The purpose of RMA’s data mining effort is to detect fraud, waste, and abuse in the federal crop insurance program. It is used to identify potential abusers, improve program policies and guidance, and improve program performance and data quality. RMA uses information collected from insurance applicants as well as from insurance agents and claims adjusters. It produces several types of outputs, including lists of names of individuals whose behavior matches patterns of anomalous behavior, which are provided to program investigators and sometimes insurance agencies. 
It also produces programmatic information, such as how a procedural change in the federal crop insurance program’s policy manual would impact the overall effectiveness of the program, and information on data quality and program performance, both of which are used by program managers. The purpose of the Citibank Custom Reporting System used by State is to detect fraud, waste, and abuse by its employees who use the government purchase card program. The purchase card program is a governmentwide program run by the General Services Administration (GSA). Agencies like State use GSA’s master contract to provide their employees with charge cards from an approved vendor. Citibank, the vendor chosen by State, provides its customers with a custom reporting system, which includes several tools that can be used for managing card accounts. State uses the system to analyze government charge card spending patterns by its employees. System outputs include summaries of card account holder information and purchases and can include personal information. Summaries are used by program managers and are on occasion provided to interested parties such as State’s inspector general, GAO, and OMB for oversight. The purpose of IRS’s Reveal system is to detect criminal activities or patterns, analyze intelligence, and detect terrorist activities. IRS uses the system to identify financial crime, including individual and corporate tax fraud, and terrorist activity. Its outputs include reports containing names, Social Security numbers, addresses, and other personal information of individuals suspected of financial crime, including individual and corporate tax fraud and terrorist activity. Reports are shared with IRS field office personnel, who conduct investigations based on the report’s results. The purpose of the data mining effort used by the FBI’s Foreign Terrorist Tracking Task Force is to detect criminal or terrorist activities or patterns and to analyze intelligence.
The effort uses two information systems—one classified and one unclassified—to support ongoing investigations by law enforcement agencies and the intelligence community, including locating foreign terrorists and their supporters who are in or have visited the United States. Its outputs include reports based on a request received from field investigators. Reports range from lists of individuals who might meet a certain profile to detailed information on a certain suspect and typically contain personal information. Reports are shared with field investigators, field offices, and other federal investigators. The purpose of SBA’s Lender/Loan Monitoring System is to improve service or performance. The system was developed by Dun & Bradstreet under contract to SBA. SBA uses the system to identify, measure, and manage risk in two of its business loan programs. Its outputs include reports that identify the total amount of loans outstanding for a particular lender and estimate the likelihood of loans becoming delinquent in the future based on predefined patterns. These systems use information that the agency collects directly, as well as information provided by other agencies, such as the Social Security Administration, and private sector sources, such as credit card companies. Table 3 details the inputs of each effort we reviewed and summarizes each effort by the types of information sources used. While the agencies responsible for the five data mining efforts took many of the key steps needed to protect the privacy and security of personal information used in the efforts, none followed all the key procedures. Most of the agencies provided a general public notice about the collection and use of the personal information used in their data mining efforts. 
However, fewer followed other required steps, such as notifying individuals about the intended uses of their personal information when it was collected or ensuring the security and accuracy of the information used in their data mining efforts. In addition, three of the five agencies completed a privacy impact assessment of their data mining efforts, but none of the assessments fully complied with OMB guidance. Complete assessments are a tool agencies can use to identify areas of noncompliance with federal privacy laws, evaluate risks arising from electronic collection and maintenance of information about individuals, and evaluate protections or alternative processes needed to mitigate the risks identified. Agencies that do not take all the steps required to protect the privacy of personal information limit the ability of individuals to participate in decisions that affect them, as required by law, and risk the improper exposure or alteration of their personal information. The Privacy Act requires agencies to notify the public, through notices published in the Federal Register, when they create or modify a system of records. The act’s provisions include requirements for agencies to provide general notice about the operation and uses of a system of records. According to OMB’s guidance on implementing the act, this public notice provision is central to one of the act’s basic objectives: fostering agency accountability through a system of public scrutiny. This echoes the 1973 federal advisory committee’s statement that public involvement is essential for an effective consideration of the pros and cons of establishing a personal data system. Of the five efforts we reviewed, the personal information used in four (IRS, RMA, FBI, and SBA) was the subject of published system of records notices in the Federal Register. The public was not notified in the case of the fifth system—State.
Table 4 details the steps agencies took to notify the public about the five efforts we reviewed. The published system of records notices related to the data mining efforts at IRS, FBI, and RMA generally included the information required by the Privacy Act. However, the notice published by SBA was only partially compliant with the act because it did not clearly describe the process individuals could use to review their information. For example, SBA’s notice listed several dozen contacts and indicated that individuals should identify the appropriate contact from the list when making requests related to their information. However, the notice did not describe how to identify which contact would be appropriate. No notice was published for the Citibank purchase card management tool used by State. As the agency responsible for the governmentwide purchase card program, GSA is responsible for ensuring that the program follows statutory requirements, including those in the Privacy Act. However, it has not published a system of records notice that would cover the activities of State or other agencies participating in the program. According to GSA officials, the agency did not consider purchase card records to be a system of records because it believed the names and addresses it collects pertain to government employees and thus are exempt from the Privacy Act. The GSA officials added that a programwide system of records notice has been partially drafted, but it has not been finalized because GSA is awaiting guidance from OMB on a recent change to the program that could require the collection of additional personal information. Without adequate notice of this information collection effort, the ability of State employees and the public to participate in decisions about the collection and use of personal information, as envisioned under the Privacy Act, is limited.
IRS, RMA, and FBI did not include in their notices a description of how individuals can review their personal information because they claimed the exemption available for records used in law enforcement. The Privacy Act requires agencies to, among other things, allow individuals to (1) review their records (meaning any information pertaining to them that is contained in the system of records), (2) request a copy of their record or information from the system of records, and (3) request corrections in their information. Such provisions can provide a strong incentive for agencies to correct any identified errors. State and SBA provided mechanisms by which individuals could review the information the agencies collected and used in their data mining efforts; the three other agencies claimed allowable exemptions from this requirement. Table 5 details the steps the agencies took to provide individuals with access to their personal information used in the data mining efforts. Citibank provides State cardholders with monthly statements detailing their purchase card activity and account information—the personal information used in the data mining effort—that cardholders are required to review. State also has a process with Citibank to dispute and resolve any inaccuracies in this information. SBA’s system of records notice described a general procedure that individuals could use to review personal information SBA collects (which is one of the information sources used in the data mining effort). In addition, the agency has procedures that detail how individuals are permitted to review records relating to them and request amendment. FBI, IRS, and RMA claimed an allowable exemption for their efforts because their records are used in law or tax enforcement.
FBI and IRS have adopted procedures under which they could waive the exemption and allow individuals to access their information in cases where disclosure would not endanger ongoing investigations or reveal investigative methods. The Privacy Act requires that, when collecting personal information from individuals, agencies should provide those individuals with notice that includes the purpose for which the information was collected and the potential effect of not providing the information. Among other requirements, the act requires that the notification be located on the form the agency uses to collect information from the individual or on an accompanying form that the individual can keep, and that the notice cite the legal authority for the information request. According to OMB, this requirement is based on the assumption that individuals should be provided with sufficient information about the request to make a decision about whether to respond. The 1973 federal advisory committee report noted that the requirement was intended to discourage organizations from probing unnecessarily for details of people’s lives under circumstances in which people may be reluctant to refuse to provide the requested data. The agencies responsible for two of the five efforts we reviewed generally fulfilled the Privacy Act requirements regarding providing notice at the time of collection, one partially fulfilled these requirements, and two agencies claimed exemptions from these requirements. Table 6 details the steps agencies took to notify individuals when collecting personal information. State and SBA generally provided the required notice when they collected personal information. Since May 2005, SBA has included a notice on applications for its loan programs that addressed the Privacy Act requirements. 
State provided notification using both a written notice on the purchase card application and a mandatory training program that all potential purchase cardholders must take before applying to the program. However, neither of the methods State used to notify employees identified the legal basis for the information request, as required by the Privacy Act. State officials told us that they were unaware that such a notice was required, but that they intend to notify employees of the legal basis in the future. RMA also provided a notice on application forms, but these notices were not provided to everyone who supplied personal information. In the crop insurance program, participants apply for coverage from an insurance company that collects information from applicants and provides it to RMA. Because the information is collected on its behalf, RMA is responsible for ensuring that individuals receive the required notifications. However, RMA could not demonstrate that all individuals who provided it with data were properly notified. RMA provided documents showing that 16 of the 17 insurance providers included the disclosures required by the Privacy Act on the application forms they provided to applicants. However, none of the insurance providers demonstrated that they provided adequate notice to insurance agents or adjusters, who also provided personal information used by RMA. According to RMA officials, they were unaware that this Privacy Act requirement applies to all the individuals about whom they collected information. When agencies do not fully notify individuals about the purpose and uses of the information they collect, the individuals have limited ability to make a reasonable decision about whether or not to supply the requested information. FBI and IRS claimed allowable exemptions to the requirement to provide direct notice to individuals when they collect information under the Privacy Act because they use the collected information for law enforcement purposes.
The Privacy Act requires agencies to establish appropriate administrative, technical, and physical safeguards to ensure the security of records and to protect against any anticipated threats or hazards to their security that could result in substantial harm, embarrassment, inconvenience, or unfairness to any individual about whom information is maintained. While the act does not specify the types of procedures that agencies should take to ensure information security, FISMA and related OMB guidance define specific procedures for ensuring the security (which encompasses protections for availability, confidentiality, and integrity) of information. These procedures include performing risk assessments and developing security plans. Guidance from OMB and the National Institute of Standards and Technology (NIST) provides further detail on how agencies are to address security. The Privacy Act also requires agencies to maintain all records used to make determinations about an individual with sufficient accuracy, relevance, timeliness, and completeness as is reasonably necessary to assure fairness. For the purposes of this report, we refer to these requirements as data quality requirements. According to OMB, this provision is intended to minimize the risk that an agency will make an adverse determination about an individual based on inaccurate, incomplete, or out-of-date records. In the five efforts we reviewed, agency compliance with the security and data quality requirements was inconsistent. Table 7 summarizes the steps agencies took to ensure the security and accuracy of the information in the data mining efforts. Appendix VII provides additional detail on the specific actions that make up the key requirements and agencies’ compliance with them. Security. While the agencies responsible for the data mining efforts we reviewed followed a number of key security procedures, none had fully implemented all the procedures we evaluated.
Although SBA, FBI, and RMA applied many of the key procedures required for the information systems used in their data mining efforts, their documentation did not include all the information called for in federal guidance. Specifically, SBA and RMA did not fully document their incident response capabilities, and neither FBI nor RMA demonstrated that their systems had tested contingency plans—a key requirement for adequate security planning. IRS produced several of the required security-related documents, but its documentation did not demonstrate that all of the underlying requirements had been met. IRS’s system became operational in February 2005 and is currently undergoing testing. Neither of the two agencies responsible for State’s data mining effort took the steps required to ensure that the information systems used in the effort had adequate security. As the contracting agency for the governmentwide purchase card program, GSA is responsible for ensuring that information and information systems used in the program—including those provided by contractors—follow FISMA guidance. However, according to agency officials, GSA has not evaluated vendors’ systems for compliance with the specific provisions of FISMA; instead, GSA currently relies on the banks to provide security and on the Office of the Comptroller of the Currency for oversight of the banks. Because State uses an information system operated by Citibank, through its task order under the purchase card program contract, FISMA requires that State ensure that Citibank’s system complies with FISMA provisions. While State performed a general review of Citibank’s security processes before starting to use its systems, State did not specifically evaluate Citibank’s compliance with federal security requirements. Agencies that do not take adequate steps to ensure information security risk having information improperly exposed, altered, or destroyed.
For example, another bank participating in a related program lost backup tapes containing personal information on government employees. GSA program officials noted that they were satisfied that the situation was an accident and not a reflection of a significant security failing on the bank’s part. Data quality. State took steps to ensure that the information used in its data mining efforts is accurate, relevant, timely, and complete. State used a monthly review process whereby cardholders review the account statements provided by Citibank for accuracy. The same information is also reviewed by the cardholders’ supervisors. In addition, area program coordinators must review the purchase card programs in their area annually. RMA took steps that partially ensure the quality of the data in its data mining effort; for example, it has an editing and data validation process in place. However, while this process addresses the accuracy of the system’s data, it does not address the relevance, timeliness, or completeness of the personal information in the data mining system because program officials were unaware of the requirement to do so. Those agencies that do not take adequate steps to ensure the quality of the information they use and collect risk making unwarranted decisions based on inaccurate information. The provision regarding data quality did not apply to three efforts. SBA does not use the information in its data mining effort to make determinations about individuals; rather, it uses it to manage groups of loans. FBI and IRS claimed an allowable exemption because their records are used for criminal law enforcement. According to the rule justifying FBI’s exemption, it is impossible to make such determinations in part because information that may initially appear to be untimely or irrelevant can acquire new significance as an investigation proceeds.
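The Privacy Act’s data quality provision (accuracy, relevance, timeliness, and completeness) lends itself to simple automated checks before a record is used to make a determination about an individual. The sketch below illustrates the idea; the field names and the one-year staleness window are illustrative assumptions, not any agency’s actual rules.

```python
from datetime import date, timedelta

# Hypothetical schema: the required fields for a complete record.
REQUIRED_FIELDS = ("name", "address", "policy_id")

def check_record_quality(record, max_age_days=365):
    """Return a list of data-quality problems found in a record.

    `record` is a dict; the fields checked and the staleness rule
    are illustrative, not an agency's actual standard.
    """
    problems = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing field: {field}")
    # Timeliness: flag records not updated within the allowed window.
    updated = record.get("last_updated")
    if updated is None or date.today() - updated > timedelta(days=max_age_days):
        problems.append("record is stale or has no update date")
    return problems
```

A check like this cannot establish relevance, which requires judgment about how the record will be used, but it can catch the incomplete or out-of-date records the act is concerned with.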
The E-Government Act of 2002 requires that federal government agencies conduct privacy impact assessments before developing or procuring information technology or initiating any new electronic data collections containing personal information on 10 or more individuals. According to OMB, such assessments help agencies to determine whether the agency’s information handling practices conform to the established legal, regulatory, and policy requirements regarding privacy; evaluate risks arising from electronic collection and maintenance of information about individuals; and evaluate protections or alternative processes needed to mitigate the risks identified. Thus, a timely and comprehensive privacy impact assessment can be used by agencies as a tool to ensure not only strict compliance with the various laws related to privacy, but also as a means to consider broader privacy principles, such as the fair information practices that formed the basis for those laws. The E-Government Act lays out a series of requirements for assessments, such as (1) they must describe and analyze how the information is secured, (2) they must describe and analyze the intended uses of information, (3) the agency’s chief information officer (or designee) must review the assessment, and (4) the assessment must be publicly available unless making it so would raise security concerns or reveal sensitive or classified information. OMB guidance does not require privacy impact assessments for systems used for internal government operations or for national security systems; however, individual agencies may have more stringent privacy impact assessment requirements. While four of the five agencies were required to conduct assessments by statute or by agency rule, three (RMA, SBA, and IRS) did so. However, none of these assessments adequately addressed all the statutory requirements. Table 8 summarizes agency actions to assess the privacy impacts of their data mining efforts. 
Three agencies conducted assessments that partially addressed the requirements. For example, while RMA’s plan addressed the information to be collected and how it was to be used, it did not receive the required review by the agency chief information officer or designee. In addition, RMA’s assessment was not made publicly available, even though the document did not include any sensitive information. IRS’s assessment stated that it would use the information for queries, but did not analyze the purpose for collecting the information or its intended uses, as required. For instance, IRS’s privacy impact assessment states that the system “is used to identify potential criminal investigations of individuals or groups” in “support of the overall IRS mission.” While this describes the purpose for collecting the information and its intended uses, it does not analyze how the agency reached these decisions. RMA and IRS did not fully address these steps because they used a prior version of guidance that did not address all the current requirements when conducting their assessments. SBA conducted an assessment of a previous loan monitoring effort that addressed several aspects of its current data mining effort. This assessment included general descriptions of what information was to be collected, why the information was to be collected, the intended use of the information, and how the information was to be secured. However, the assessment did not analyze these decisions, as required by OMB’s guidance. According to SBA officials, the privacy assessment was not more specific because at the time it was completed, the possible uses of the system and the format it would take were not certain. SBA officials added that a more specific privacy assessment of the data mining effort has been drafted and is expected to be published later in the current fiscal year. FBI has not conducted a privacy impact assessment for its data mining effort.
FBI is not required by statute to conduct assessments on these systems because they are classified as national security systems. However, under FBI regulations, assessments are required for these systems. According to agency officials, FBI is in the process of preparing privacy assessments for the two systems that make up its data mining effort, but these assessments were delayed due to competing priorities for its operational support team. The officials said that the agency does not have a target date for completing the assessments. The lack of comprehensive assessments is a missed opportunity for agencies to ensure that the data mining efforts we reviewed are subject to the most appropriate privacy protections. Because the assessments did not address all the required subjects, including those related to several Privacy Act provisions, agencies were sometimes unaware that they were not following all the requirements of the act. Further, without analyses regarding their approaches to privacy protection, agencies have little assurance that their approaches reflect the appropriate balance between individual privacy rights and the operational needs of the government. GSA, the contracting agency for the governmentwide purchase card program, did not conduct a privacy assessment because OMB guidance does not require them for internal government programs. However, OMB guidance encourages agencies to conduct privacy impact assessments on systems that collect information in identifiable form about government personnel. Further, according to agency officials, GSA is developing guidance requiring assessments for all new agency systems, which will apply to the purchase card program. The five data mining efforts illustrate ways in which federal agencies collect and use personal information for purposes such as program oversight and law enforcement.
The agencies responsible for these data mining efforts took many of the key steps required to protect the privacy and security of the personal information they used. However, none of the agencies followed all the key privacy and security provisions we reviewed. Those that did not apply key privacy protections limited the ability of the public—including those individuals whose information was used—to participate in the management of that personal information. Those agencies that did not apply the appropriate security protections increased the risk that personal information could be improperly exposed or altered. Until agencies fully comply with the Privacy Act, they lack assurance that individual privacy rights are appropriately protected. Further, none of the agencies we reviewed conducted a complete privacy impact assessment. Had their assessments fully addressed the required Privacy Act provisions, the agencies would have had an opportunity to identify and remedy areas of noncompliance. In addition, none of the privacy impact assessments adequately addressed the choices that agencies made regarding privacy in their data mining efforts. As a result, the basis for their choices regarding tradeoffs between privacy protections and operational needs is unclear. Better analyses of such choices could help agencies strike the appropriate balance between operational needs and individuals’ rights to privacy. To ensure that the data mining efforts reviewed include adequate privacy protections, we are making 19 recommendations to the agencies responsible for them. 
Specifically, we recommend that the Secretary of Agriculture direct the Administrator of the Risk Management Agency (RMA) to

- provide the required Privacy Act notices to individuals, including producers, insurance agents, and adjusters, when personal information is collected from them;
- apply the appropriate information security measures defined in OMB and NIST guidance to the systems used in the RMA data mining effort, specifically, the development of a complete system security plan, a tested contingency plan, and regular testing and evaluation of the systems used in the effort;
- develop and implement procedures that ensure the accuracy, relevance, timeliness, and completeness of personal information used in the RMA data mining effort to make determinations about individuals;
- revise the privacy impact assessment for the RMA data mining effort to comply with OMB guidance, including analyses of the intended use of the information it collects, with whom the information will be shared, how the information is to be secured, opportunities for impacted individuals to comment, and the choices made by the agency as a result of the assessment;
- have the completed privacy impact assessment approved by the chief information officer or equivalent official; and
- make the completed privacy impact assessment available to the public, as appropriate.
We recommend that the Secretary of the Treasury direct the Commissioner of the Internal Revenue Service to

- apply the appropriate information security measures defined in OMB and NIST guidance to the systems used in the Reveal data mining effort, specifically, the performance of regular system testing and evaluation against NIST guidance;
- revise the privacy impact assessment for the Internal Revenue Service’s Reveal system to comply with OMB guidance, including analyses of the information to be collected, the purposes of the collection, the intended use of the information, how the information is to be secured, and opportunities for impacted individuals to comment; and
- make the completed privacy impact assessment available to the public, as appropriate.

We recommend that the Attorney General direct the Director of the Federal Bureau of Investigation to

- apply the appropriate information security measures defined in OMB and NIST guidance to the systems used in the Foreign Terrorist Tracking Task Force data mining effort, including the development of tested contingency plans;
- establish a date for the completion of a privacy impact assessment for its data mining effort that complies with OMB guidance, including analyses of the information to be collected, the purposes of the collection, the intended use of the information, with whom information will be shared, how the information is to be secured, opportunities for impacted individuals to comment, and the choices made by the agency as a result of the assessment;
- have the completed privacy impact assessment approved by the chief information officer or equivalent official; and
- make the completed privacy impact assessment available to the public, as appropriate.

We recommend that the Secretary of State direct the Under Secretary for Management to notify purchase card participants of the legal basis under which the department collects their personal information, as required.
We recommend that the Administrator of the Small Business Administration

- amend the system of records notice regarding its data mining effort to clearly identify the individual responsible for the effort, the process by which individuals can request notification that the system includes records about them, and the procedures individuals should use to review records pertaining to them;
- complete a privacy impact assessment for the data mining effort that complies with OMB guidance, including analyses of the information to be collected, the purposes of the collection, the intended use of the information, how the information is to be secured, opportunities for impacted individuals to comment, and the choices made by the agency as a result of the assessment; and
- make the completed privacy impact assessment available to the public, as appropriate.

We recommend that the Administrator of the General Services Administration

- publish a system of records notice for the purchase card program that specifies the name of the system; the categories of individuals and records in the system; the categories of information sources used by the system; the routine uses of the system; how the agency stores and maintains the system; the individual responsible for the effort; the process by which individuals can request notification that the system includes records about them; and the procedures individuals should use to review records pertaining to them; and
- ensure that the appropriate information security measures defined in OMB and NIST guidance are applied to the systems used in the Citibank Custom Reporting System data mining effort, including the development of a risk assessment, a system security plan, a tested contingency plan, the performance of regular testing and evaluation, and the completion of certification and accreditation by agency management.

We provided Agriculture, Treasury, Justice, State, SBA, and GSA with a draft of this report for their review and comment.
We received written comments on the report and its recommendations from SBA, Agriculture, State, and Treasury, and comments via e-mail from GSA’s Assistant Commissioner for Acquisition. These agencies generally agreed with the majority of our recommendations, but disagreed with others. Justice’s Senior Audit Liaison stated that the department had no comments. Agriculture, IRS, State, and SBA also provided technical comments, which we addressed as appropriate. The Administrator, RMA, stated that RMA agreed with the majority of our recommendations and that the agency had taken steps to implement many of them. In response to our recommendation that RMA strengthen security measures, the Administrator stated that RMA has a security plan for its data mining system and performs regular testing and evaluation. While our draft indicated that RMA had implemented some of the necessary security measures, we noted that it did not follow all related guidance. Specifically, the system security plan did not describe its incident response capability, and RMA did not document that it had conducted annual testing or that its tests included penetration or vulnerability testing. We clarified this recommendation to focus on the incomplete and undocumented security measures we identified. In response to our recommendation that RMA develop and implement procedures that ensure the quality of personal information used in its data mining system, USDA commented that it already has an editing and validation process in place. We clarified the discussion of this point in our report. However, while this process addresses the accuracy of the system’s data, it does not address the relevance, timeliness, or completeness of the personal information in the data mining system. USDA’s comments are contained in appendix VIII.
Treasury’s Chief Information Officer generally agreed with our recommendations regarding a privacy impact assessment, and said that IRS will conduct a new privacy impact assessment that complies with current OMB guidance after Reveal becomes operational. While conducting a new privacy impact assessment is an appropriate step, we note that the E-Government Act and OMB guidance require that assessments be conducted before systems become operational. In responding to our recommendation to ensure that appropriate security measures are applied to IRS’s Reveal data mining effort, Treasury stated that Reveal is in compliance with OMB, NIST, and Treasury security guidance and is operating under an interim authorization to operate while it undergoes certification and accreditation. Our report acknowledges that IRS had applied several security measures, but also notes that required regular testing and evaluation was not yet in place. We clarified this recommendation to focus on these requirements. Treasury’s comments are contained in appendix IX. State’s Assistant Secretary and Chief Financial Officer generally agreed with our recommendation that it notify purchase card participants of the legal basis under which the Department collects their personal information; State responded that it will take the necessary steps to address this recommendation. In addition, regarding a recommendation we made to GSA concerning the Citibank Custom Reporting System, State raised the issue of whether a privacy impact assessment is required for systems that collect information on federal employees, as is the case with this system. As discussed below in our response to GSA, we agree that OMB guidance exempts internal government systems from the requirement to conduct privacy impact assessments and have clarified our report to reflect this. State’s comments are contained in appendix X. 
SBA’s Associate Deputy Administrator for the Office of Capital Access generally agreed with our recommendations and provided information on its planned actions. SBA’s comments are contained in appendix XI. GSA’s Assistant Commissioner for Acquisition generally disagreed with our recommendations. He stated that GSA has not published a system of records notice for the purchase card program because this program does not capture personal information. However, as described in the report, the system retrieves information about individuals by personal identifiers, and thus meets the Privacy Act’s definition of a system of records. In commenting on our recommendation that GSA ensure that appropriate security measures defined in OMB and NIST guidance are applied to the data mining effort, GSA explained that it has reviewed the security standards of the five financial institutions on the GSA SmartPay master contract and concluded that the commercial standards and procedures provided by these institutions offer the Citibank Custom Reporting System sufficient security protection. However, GSA is required to ensure that information and information systems used in the program—including those provided by contractors—meet the requirements of FISMA, including the implementing guidance from OMB and NIST. Further, recent OMB guidance requires agencies to ensure implementation of security measures identical to those required under FISMA. GSA also provided a risk assessment of the security provisions in the SmartPay master contract. However, the assessment does not address any of the elements of the NIST guidance for implementing risk assessments, such as identifying the system’s vulnerabilities and threats.
Finally, in response to our three recommendations regarding the requirement to conduct a privacy impact assessment, the Assistant Commissioner stated that GSA is not required to conduct a privacy impact assessment because it is contracting for a financial system, not an IT system. Because it is an internal government system, we agree that GSA is not required by OMB guidance to conduct a privacy impact assessment on the Citibank system and have clarified our report to reflect this. As agreed with your office, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will send copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have jurisdiction and oversight responsibility for SBA, Agriculture, State, Treasury, GSA, and Justice. Copies will be made available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-6240 or by e-mail at koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XII. To address our objectives, we used a case study methodology. We selected the data mining efforts to be included in our evaluations from the 122 federal data mining systems reported to us in 2004. In that report, we identified the six most common purposes for the data mining activities reported to us. 
For the purposes of this review, we excluded systems used for two purposes: we did not select any systems used for analyzing scientific and research information because few of those systems used personal information, and we excluded systems used for managing human resources because such records fall under different privacy rules and regulations. The remaining four most common purposes were improving service or performance; detecting fraud, waste, and abuse; detecting criminal activities or patterns; and analyzing intelligence and detecting terrorist activities. From the systems that were used for these purposes, we selected all those that met each of our selection criteria, one of which was the use of data obtained from another agency or from the private sector. These criteria were chosen to ensure that the efforts we selected illustrated agency practices regarding personal information. In addition, we selected no more than one system from each department or agency. We analyzed the information provided in 2004 and determined that 11 data mining efforts met all of our initial selection criteria. We contacted the agencies responsible for the systems to confirm the accuracy of the information previously provided. As a result of the updated information, we eliminated from consideration several systems that no longer met all of the selection criteria, resulting in the final selection of five data mining systems for our case study review. To describe the characteristics of the selected federal data mining efforts, we analyzed system documentation, public notices, and other relevant documents and interviewed officials at the responsible department or agency, and, when applicable, the supporting contractor. Agency officials were provided with several opportunities to review our descriptions of the selected systems and the graphical depictions included in appendixes II through VI.
To determine whether agencies provided adequate privacy protection for the personal information used in the selected data mining efforts, we analyzed federal privacy and security laws, regulations, and other guidance to identify key steps and procedures for protecting the privacy of individual information. We then developed a data collection instrument consisting of a series of questions about agency actions that followed the key steps and procedures, as well as questions on the detailed characteristics of the data mining systems, and provided the instrument to the responsible agencies. We reviewed the agencies’ responses and any supporting documentation they provided, and assigned an answer of yes (compliant with all of the guidance related to that question), no (not compliant with any of the guidance related to that question), or partial (compliant with some, but not all of the guidance) to each question. We also reviewed rules claiming exemptions. We discussed the results with agency officials and made adjustments as appropriate. Because we studied only five data mining efforts and because of the method of selection, we cannot conclude that our results represent any larger group of data mining efforts. Although they were not representative of all federal data mining efforts, we believe that the five efforts we reviewed illustrate some of the ways in which agencies satisfy federal privacy provisions and the circumstances under which agencies can claim exemptions to these provisions. We conducted our work from May 2004 to June 2005 at the Washington, D.C., area offices of the Departments of State and Agriculture, Internal Revenue Service, Federal Bureau of Investigation, Small Business Administration, and General Services Administration, at an agency facility in Philadelphia, Pennsylvania, and at the Stephenville, Texas, location of an agency contractor. Our work was conducted in accordance with generally accepted government auditing standards. 
The Risk Management Agency (RMA) uses a data mining system designed by Tarleton State University’s Center for Agribusiness Excellence (CAE) to assist it in detecting fraud, waste, and abuse in the federal crop insurance program. The data mining system is used to identify producers, insurance agents, and loss adjusters who may be abusing the program. Its inputs include insurance records on policy holders, agents, and loss adjusters, as well as data on soil, weather, and land. It produces several types of outputs, including lists of names of individuals whose behavior is anomalous. The purpose of the RMA data mining system is to detect fraud, waste, and abuse in the federal crop insurance program by investigating potential leads and confirming suspicious activity in high-profile cases. RMA also uses the system to improve program policies, guidance, and data quality. According to RMA officials, the system has significantly augmented agency program integrity initiatives and has accounted for over $340 million in cost-avoidance savings since its inception. According to RMA officials, CAE analysts identify potential abusers of the federal crop insurance program primarily by developing scenarios of abuse of the program by producers, insurance agents, and loss adjusters. Analysts query the data warehouse by using data mining and pattern recognition techniques to identify information, patterns, anomalies, or relationships indicative of fraud, waste, and abuse. CAE analysts then generate reports for RMA regional compliance offices, which use the reports to determine which producers should be inspected for potential abuse. RMA uses reports produced by the data mining system for policy development in the Crop Insurance Handbook and improvement of the federal crop insurance program.
RMA’s officials often request data mining reports (1) to help evaluate pilot programs before making policy changes, (2) to determine the best way to change program procedures once the policies are implemented, and (3) to determine ways to enhance the data through quality control reviews. RMA’s data mining effort uses a data warehouse containing crop insurance data and information from weather, soil, and land survey sources to develop and conduct pattern-based searches for identifying information, patterns, anomalies, or relationships indicative of fraud, waste, and abuse. Pattern-based searches are based on scenarios of fraudulent schemes for obtaining crop insurance indemnities (the dollar amount paid in the event of an insured loss) that are developed by analysts and agricultural experts. The data mining system helps analysts uncover these patterns through an iterative process. Each scenario is tested and refined by querying data in the warehouse. The results are then provided to a CAE product review team that approves or rejects the scenario. Once a scenario is approved, analysts can use it to search the data warehouse for individuals who match the scenario patterns. Analysts use multiple scenarios to query the data warehouse in order to identify program participants who are potentially involved in fraudulent activities, resulting in a “spot check list.” Table 9 lists (1) the names and attributes of the scenarios developed by RMA and CAE and (2) the agency-reported summary of potentially fraudulent claims reported by producers whose behavior was identified as anomalous on the 2002 spot check list. According to RMA officials, the eight scenarios listed in table 9 have been the most successful in generating program savings. RMA’s six regional compliance offices use the data mining query results, including the spot check list, to determine which producers should be inspected for potential abuse. 
Once the regional compliance offices review the list, they forward it to employees of USDA’s Farm Service Agency, who send notification letters to the producers on the list, alerting them to pending inspections. According to RMA officials, the notice of a pending inspection is often enough to discourage the producers from filing fraudulent claims. Figure 2 depicts this process. The RMA data mining effort uses government data covered by systems of records notices, including crop insurance data. Data in the RMA system not from systems of records include public land, weather, and soils data. In addition to government data, RMA uses other publicly available information on an as-needed basis. Crop Insurance Information. Insurance companies participating in the program provide crop insurance information to RMA on program participants, including producers, insurance agents, and loss adjusters. The crop insurance data contain personal identifiers that can be linked to program participants, including names, addresses, phone numbers, and Social Security numbers. Land Survey Data. The system uses digital maps from the Public Land Survey System—regulated by the Bureau of Land Management—that depict public survey information, such as township locations referred to in legal land descriptions. Analysts use this information to determine whether there is a discrepancy between a producer’s claim and land records. Weather Data. RMA uses information from public weather records from the National Oceanic and Atmospheric Administration to assist in validating specific causes of loss for further investigation. Soils Data. RMA plans to use soils data from USDA’s Natural Resources Conservation Service when determining whether soil on a producer’s land is acceptable for growing an insured crop. The agency also uses other publicly available information, including information found on public Web sites.
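The scenario-based spot-check process described above can be sketched in outline. In this hypothetical example, the scenario rules, field names, and thresholds are illustrative assumptions only, not RMA's or CAE's actual (and far more sophisticated) scenarios:

```python
# Hypothetical sketch of scenario-based spot-check generation. Each
# function stands in for one approved abuse scenario; the spot check
# list is the set of producers matched by any scenario.

def scenario_high_indemnity(claims, ratio=3.0):
    """Flag producers whose total indemnity is far above the county median."""
    by_county = {}
    for c in claims:
        by_county.setdefault(c["county"], []).append(c)
    flagged = set()
    for county, rows in by_county.items():
        totals = {}
        for c in rows:
            totals[c["producer_id"]] = totals.get(c["producer_id"], 0) + c["indemnity"]
        amounts = sorted(totals.values())
        median = amounts[len(amounts) // 2]
        if median == 0:
            continue
        flagged.update(pid for pid, amt in totals.items() if amt > ratio * median)
    return flagged

def scenario_repeat_loss(claims, min_years=3):
    """Flag producers filing loss claims in several different crop years."""
    years = {}
    for c in claims:
        years.setdefault(c["producer_id"], set()).add(c["crop_year"])
    return {pid for pid, ys in years.items() if len(ys) >= min_years}

def spot_check_list(claims, scenarios):
    """Union of producers matched by any approved scenario."""
    hits = set()
    for scenario in scenarios:
        hits |= scenario(claims)
    return sorted(hits)
```

As in the process the report describes, each scenario is developed and refined separately, and the combined list of matched producers is what the regional compliance offices would then review.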
RMA’s data mining system produces reports for program investigators on producers whose behavior patterns are anomalous. The system also produces reports for program managers that include programmatic information—such as how a procedural change in the federal crop insurance program’s policy manual would affect the overall effectiveness of the program—and other information on data quality and program performance. The U.S. Department of State (State) contracts with Citibank through the General Services Administration’s GSA SmartPay contract to provide State employees with purchase cards. Under the contract, Citibank provides State and other contracting agencies access to the Citibank Custom Reporting System (CCRS)—a proprietary tool designed by Citibank. State uses this system to analyze transaction data and help prevent fraud, waste, and abuse in its purchase card program. The system’s inputs include account information from State employees and commercial data from transactions made by State employees. System outputs include summaries of card account holder information and purchases. The purpose of State’s data mining effort is to prevent fraud, waste, and abuse in the purchase card program by using CCRS to ensure that credit and purchase limits are in place and to conduct spot checks of individual purchase card expenditures. Officials also use the system to improve program performance through the results of simple subject- and pattern-based queries. According to State officials, the department uses reports containing information on agency purchase card accounts and suspended or cancelled accounts. State officials also regularly review a CCRS report that summarizes single transaction and monthly spending limits for all cardholders to ensure that they are accurate.
According to State officials, one of the most important tasks accomplished through system reports is ensuring that the ratio of cardholders to approving officials—a cardholder’s immediate supervisor—is low enough for expenditures to be effectively reviewed. According to State officials, the department also uses reports to assist with overall purchase card program management functions. These reports provide the ability to track overall purchase card expenditures by a number of data elements, including spending by region or embassy, or by vendors used by State employees. State also uses CCRS to collect and compile statistical information about the program for quarterly reports submitted to the Office of Management and Budget. These reports include information on the number of current accounts, dollars spent, rebate amounts earned, and single purchase and monthly expenditure limits for cardholders. The CCRS electronic reporting tool is a Citibank proprietary system. The system interfaces with Citibank’s Global Data Repository, which stores account and transaction data for an 18-month period. A portion of the data resulting from the transaction process is replicated in the primary system database for use in analysis and report preparation. Figure 3 illustrates the transaction process. Reports can be printed or downloaded from the system; the presentation of the data can be edited within the system, or the data can be downloaded to be analyzed in an outside program. When using the system, State users can access reports developed in the system, including reports of purchase card accounts, suspended or cancelled accounts, and summary reports on the vendors State employees purchase from. Reports not already established in the system can be created by Citibank at the request of agency officials. Figure 3 illustrates this process. CCRS includes transaction and account data. 
Account data are collected from agency employees, with an account number issued by Citibank; transaction data consist of records of purchase card transactions conducted by State employees. Account Data. State collects personal information, including name, last four digits of the Social Security number, and the cardholder’s office phone number and mailing and e-mail addresses as part of the purchase card application process. According to agency officials, State retrieves records by cardholder name. State supplies that information to Citibank. State also supplies required account parameters—such as single transaction and monthly spending limits—and assigns a unique identifying number. Other account information is assigned by Citibank. Transaction Data. The amount and level of detail available in the transaction data varies depending on the technical capabilities of the vendor from whom products are purchased. For example, vendors with the most basic capabilities transfer standard commercial transaction data, including the total purchase amount, date of purchase, vendor’s name and location, date the charge or credit was processed, and a reference number for each charge or credit. Vendors with more advanced technology can provide additional information including, among other things, unit cost and quantity, vendor’s category code, and sales tax amount. CCRS provides reports on purchase card transactions and account information, including a list of all purchase card accounts, a report on suspended or cancelled accounts, and reports summarizing expenditures by region or by vendor. Many reports in the CCRS system are available in a summary form that does not contain personal identifiers and in a detailed form containing personal identifiers, including account number and name. According to State officials, CCRS reports are used within State’s purchase card office to ensure adequacy and accuracy of compensating controls such as credit limits. 
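As a rough illustration, two of the checks described above, comparing spending against monthly limits and reviewing the cardholder-to-approving-official ratio, might look like the following sketch. The record layouts and the ratio threshold are assumptions for illustration; CCRS itself is a proprietary Citibank reporting tool, not a programmable interface:

```python
# Illustrative sketch of two purchase card oversight checks. Field names
# ("account", "monthly_limit", "approver") and the max_cardholders
# threshold are hypothetical, chosen only to make the logic concrete.

def over_limit_accounts(accounts, transactions):
    """Return account numbers whose total monthly spending exceeds the limit."""
    spent = {}
    for t in transactions:
        spent[t["account"]] = spent.get(t["account"], 0.0) + t["amount"]
    return sorted(
        a["account"] for a in accounts
        if spent.get(a["account"], 0.0) > a["monthly_limit"]
    )

def overloaded_approvers(accounts, max_cardholders=7):
    """Flag approving officials supervising too many cardholders to review effectively."""
    counts = {}
    for a in accounts:
        counts[a["approver"]] = counts.get(a["approver"], 0) + 1
    return sorted(ap for ap, n in counts.items() if n > max_cardholders)
```

A reviewer would run checks like these against downloaded report data, then follow up on any flagged accounts or approving officials.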
Reports are also used to track expenditures and are supplied to other State offices, such as State’s Inspector General, for use in analyzing purchases. The Internal Revenue Service (IRS) uses the Reveal system to detect patterns of criminal activity, analyze intelligence, and detect terrorist activities. According to agency officials, IRS uses the system to identify financial crime, including individual and corporate tax fraud, and terrorist activity. Inputs for Reveal include Bank Secrecy Act data, tax information, and counterterrorism information. Its outputs include reports containing names, Social Security numbers, addresses, and other personal information of individuals suspected of financial crime or terrorist activity. The purpose of the Reveal data mining system is to detect criminal activities and patterns in support of IRS’s work in investigating potential criminal violations of the Internal Revenue Code and related financial crimes. This work is conducted by IRS’s Criminal Investigation unit. According to agency officials, Reveal is used to analyze available databases to support ongoing investigations relating to financial crime, including individual and corporate tax fraud, and terrorist activity. The system provides the capability to query data from multiple sources in an effort to identify links in the data. System users develop reports that include query results and graphical depictions of the data. The reports are then provided to field offices, which conduct investigations based on the reports’ results. The system allows users to establish a profile of the actions and persons associated with the search subject by allowing the user to trace numerous financial transactions between individuals and institutions. Reveal uses commercial software to query multiple databases. 
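The kind of transaction-tracing query described above, following financial links outward from one subject, can be illustrated with a simple sketch. The record layout and the breadth-first approach are assumptions for illustration; Reveal's actual query and visualization tools are commercial products whose internals are not described in this report:

```python
# Hypothetical sketch of tracing transaction links from a known subject.
# "payer"/"payee" fields and the depth limit are illustrative assumptions.

def linked_subjects(transactions, start_id, depth=2):
    """Breadth-first trace of counterparties reachable from a subject."""
    frontier, seen = {start_id}, {start_id}
    for _ in range(depth):
        nxt = set()
        for t in transactions:
            if t["payer"] in frontier and t["payee"] not in seen:
                nxt.add(t["payee"])
            if t["payee"] in frontier and t["payer"] not in seen:
                nxt.add(t["payer"])
        seen |= nxt
        frontier = nxt
    return seen - {start_id}
```

Increasing the depth widens the circle of associated individuals, which is roughly what a visualization tool would then render as a link chart.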
The system provides Criminal Investigation users with a visual depiction of the results, and allows them to search on names, Social Security numbers, and other information to help narrow their search. Reveal consists of (1) a data retrieval and manipulation tool that performs queries and (2) a software tool that provides a visual depiction of the query results. The retrieval and manipulation tool queries and gathers information on large sets of data that reside locally on a relational database on the system’s database server. This tool allows users to sort, group, and export data from multiple information repositories simultaneously, including combinations of databases. It also can perform two kinds of queries: reactive and proactive. To perform a reactive query, the user must provide a known value associated with an individual or entity. To perform a proactive query, the user narrows the search criteria to identify groups of individuals and patterns of suspicious activity. When users narrow their search criteria using the query tool, they can use the visualization component to refine and assess the results of the queries. The software visualization tool shows relationships between data in the queries, and facilitates the discovery of relationships among entities, patterns, and trends in the data. It also organizes and presents the information in a variety of graphical formats. Figure 4 depicts this process. Reveal currently uses government system of records data as its only type of input. These inputs include (1) Bank Secrecy Act data, (2) tax data, and (3) counterterrorism data. These three types of data all contain personal information, such as address, Social Security number, and date of birth. Data sets are copied and stored locally. Bank Secrecy Act Data.
Bank Secrecy Act (BSA) data are accessed remotely from databases owned by the Financial Crimes Enforcement Network (FinCEN). These data consist of Suspicious Activity Reports, which are submitted for transactions related to a possible violation of a law or regulation. BSA data also include Currency Transaction Reports, which are filed by casinos for cash transactions in excess of $10,000 and by financial institutions for payments or transfers in excess of $10,000. Tax Data. Tax data used by Reveal include information from IRS’s Schedule K-1, corporate and individual tax information, and applications for employer and tax identification numbers. The Schedule K-1 is used to report a beneficiary’s share of income, deductions, and credits from a trust or a decedent’s estate. Counterterrorism Data. Reveal uses counterterrorism data from various sources on individuals. Reveal’s outputs include reports that contain names, Social Security numbers, addresses, and other personal identifiers of individuals suspected of financial crimes, including corporate and tax fraud, and of terrorist activity. Reports are shared with IRS agents who conduct investigations based on the report’s results. The data mining effort used by the Federal Bureau of Investigation’s (FBI) Foreign Terrorist Tracking Task Force analyzes intelligence and detects terrorist activities. In support of its responsibilities, the task force operates two information systems—one unclassified and one classified—that form the basis of its data mining activities. The purpose of the task force’s data mining effort is to analyze intelligence and detect terrorist activities. The task force supports ongoing investigations in law enforcement agencies and the intelligence community by using its data mining effort to respond to requests for information about foreign terrorists from FBI agents or officials from a partner agency.
For example, task force program officials informed us that they occasionally receive information about specific threats from the intelligence community or law enforcement partners. When such threat information is received, they identify potential sources of information that may reveal persons capable and motivated to carry out the threat. They then connect this information with persons listed in other databases linked to terrorist information. The task force then provides the names of high-risk individuals whose characteristics match the threat profile to FBI field agencies and to Joint Terrorism Task Forces. According to task force officials, analysts conduct research and analysis based on requests and provide a report of the results to the requesters and to affected agencies, as appropriate. For example, according to agency officials, the task force received a list of possible suicide bombers from a foreign government. Through analysis, the task force determined that several of the bombers had names and other identifiers that were similar to those of individuals currently in the United States. The task force provided the information to law enforcement investigators to determine whether the individuals identified were the same as those on the list of suicide bombers provided by the foreign government. Task force analysts use two systems together in their data mining effort: one sensitive but unclassified, and one classified. After receiving a request for information about a threat or person of interest, task force leadership routes the information to an appropriate analyst. Analysts initially search within the task force’s existing data, including certain immigration records, to determine whether they already have information relevant to the request. Task force analysts use several analytical tools to help search for and analyze information in the systems.
According to task force officials, the analysts’ primary query tool is the Query Tracking and Initiation Program. FBI developed this program to allow users to search the systems using, among other things, multiple variants or transliterations of names. It also allows analysts to search within and between different data sets. The unclassified system serves as the initial repository for unclassified data. Through this system, task force analysts can use the query tracking program to submit queries on individuals to commercial databases to find any relevant information. The resulting information is returned to the unclassified system, where analysts can conduct analysis using query tracking and other tools. The classified system contains law enforcement and intelligence data, including FBI case files. Information initially collated in the unclassified system is loaded into the classified system daily. However, if analysts need expedited results, they can perform an initial analysis using data contained in the unclassified system and then conduct a more detailed analysis once data are loaded into the classified system. The two systems are illustrated in figure 5. FBI officials reported that the task force’s systems contain multiple sets of data from multiple government and nongovernment sources, some of which were acquired on a one-time basis and others that are regularly updated. Data from outside sources, including nonpartner government agencies and commercial entities, are typically acquired on an as-needed basis. Twenty-nine of the task force’s government data sets are part of a system of records. Many of these data sets come from within the Department of Justice. Other agencies also supply the task force with data, including information from immigration records, from the Federal Aviation Administration, and from Customs and Border Protection. 
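Searching across multiple variants or transliterations of a name, as the query tracking program is described as doing, can be illustrated with a simple sketch. The variant-generation rules here (a short table of spelling substitutions) are hypothetical and far cruder than a real transliteration engine:

```python
# Illustrative sketch of name-variant searching. The substitution table
# is a made-up example; real transliteration matching uses much richer
# linguistic rules and fuzzy-matching algorithms.

SUBSTITUTIONS = [("mohammed", "muhammad"), ("mohammed", "mohamed")]

def name_variants(name):
    """Generate a set of plausible spelling variants for a name."""
    variants = {name.lower()}
    for a, b in SUBSTITUTIONS:
        for v in list(variants):
            if a in v:
                variants.add(v.replace(a, b))
            if b in v:
                variants.add(v.replace(b, a))
    return variants

def search(records, name):
    """Return records whose name matches any variant of the queried name."""
    targets = name_variants(name)
    return [r for r in records if r["name"].lower() in targets]
```

The point of the technique is that a query on one spelling still retrieves records filed under another, which matters when names have been transliterated from other alphabets.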
According to program officials, most data that come from sources outside the Department of Justice are acquired under a provision of the Privacy Act that allows a law enforcement agency to request certain data from a government entity for law enforcement purposes. According to agency officials, outside agencies provided their data sets to FBI on the basis of formal requests. The task force’s data mining effort receives one set of government data that is not part of a system of records because the information does not contain personal identifiers. The task force data mining system also contains 15 data sets that include information on criminal aliens, intelligence data and alerts, and various watchlists. FBI officials responsible for the task force were unaware whether these data are part of a system of records, but said that the data were supplied to the task force under the same conditions as other government data. The task force data mining effort uses data from several commercial sources, many of which are updated frequently. According to FBI officials, analysts can query commercial sources during the course of an investigation, if needed. Program officials noted that analysts request information from commercial sources using personal identifiers. The task force received four data sets from Interpol (an international police organization) on wanted persons, stolen property, and other intelligence. The task force’s outputs include reports that contain personal identifiers and other information that is relevant to the initial request. Reports are shared with the requesting entity or agent and, as needed, with partner agencies. Agents conduct investigations based on the results of the reports. The Small Business Administration (SBA) contracted with Dun & Bradstreet to provide information and analytical capabilities that assist SBA in managing credit risks in two major business loan guarantee programs.
The Loan/Lender Monitoring System (L/LMS) combines SBA data with private sector data on businesses and consumers to predict future performance of outstanding business loans. The purpose of L/LMS is to identify, measure, and manage risk in two of its business loan programs. It does this specifically by developing predictive ratings that allow SBA to improve the performance of two of its business loan programs—the 7(a) loan program and 504 program—using risk management principles. The system analyzes SBA loan data, Dun & Bradstreet business data, and data provided by subcontractors, including consumer credit bureau information and business credit scores. It uses a commercially available suite of scorecards to produce business credit scores that predict the likelihood of an SBA loan becoming severely delinquent over the next 18 to 24 months—a leading indicator of default. It also contains trends databases that provide historical data on approximately one dozen performance and credit risk fields on each outstanding loan. Finally, the system contains lender databases that provide information about individual lenders that can be compared to the information about a lender’s peers. Dun & Bradstreet and Fair Isaac use the input data in a proprietary scoring process to generate a predictive risk score for each outstanding loan. In addition, Dun & Bradstreet appends its commercial demographic and risk data to the electronic records of all outstanding SBA business loans, after removing any personal identifiers. Dun & Bradstreet then transfers this information to a module where it can be accessed by SBA. None of the data transferred from Dun & Bradstreet to SBA contains personal identifiers. SBA can use the L/LMS to view its entire business loan or lender portfolio and can perform analysis by various data elements, including dollars outstanding, lender, lender corporate family, SBA region, industry sector, and loan type. 
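The portfolio analysis described above can be sketched at a high level. The actual predictive scoring models are proprietary to Dun & Bradstreet and Fair Isaac; this hypothetical sketch shows only how already-scored loans might be aggregated into lender-level risk rankings, with the score scale and the high-risk cutoff as illustrative assumptions:

```python
# Hypothetical sketch of lender-level risk aggregation over scored loans.
# Field names ("lender", "outstanding", "risk_score") and the cutoff are
# assumptions for illustration, not L/LMS's actual data model.

def lender_risk_ranking(loans, high_risk_cutoff=0.2):
    """Rank lenders by the share of outstanding dollars in high-risk loans."""
    totals, at_risk = {}, {}
    for loan in loans:
        lender = loan["lender"]
        totals[lender] = totals.get(lender, 0.0) + loan["outstanding"]
        if loan["risk_score"] >= high_risk_cutoff:
            at_risk[lender] = at_risk.get(lender, 0.0) + loan["outstanding"]
    ranking = [
        (lender, at_risk.get(lender, 0.0) / total)
        for lender, total in totals.items()
    ]
    return sorted(ranking, key=lambda x: x[1], reverse=True)
```

Rankings like this are what would drive the selection of lenders for further review, as the report describes.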
According to SBA officials, the agency uses system-produced reports to help it determine which lenders’ SBA business loan portfolios are most at risk of default, driving the selection of lenders for further review. Figure 6 depicts this process. The L/LMS uses two kinds of input data: data from government systems of records and data from commercial sources. The data include information on businesses and individuals. SBA Loan Records. SBA electronically transfers about 10 data files monthly to Dun & Bradstreet. These files contain existing data on individual 7(a) and 504 SBA business loans and on the lending institutions that manage the loans and include information on small businesses, such as names, addresses, and phone numbers, as well as limited information about business principals, including personal identifiers. Credit Evaluation Data. The L/LMS uses several sources of commercial data, including Dun & Bradstreet demographic and risk data from its global business database, consumer bureau data on the business principals (e.g., information relating to recent delinquencies), and predictive risk scores developed by Dun & Bradstreet and Fair Isaac. This information can contain personal identifiers. The L/LMS analyzes the data to generate reports on each lender’s portfolio. SBA also creates aggregate reports that evaluate loans by portfolio value, projected risk, and historical performance trends. According to SBA officials, system reports are currently used by program officials to support business loan, lender, and portfolio monitoring efforts. The Privacy Act requires agencies to establish appropriate administrative, technical, and physical safeguards to ensure the security of records and to protect against any anticipated threats or hazards to their security that could result in substantial harm, embarrassment, inconvenience, or unfairness to any individual about whom information is maintained.
Although the act does not specify the procedures agencies should employ to ensure information security, subsequent legislation and guidance from the Office of Management and Budget (OMB) and the National Institute of Standards and Technology (NIST) provide specific procedures that agencies should follow to protect the security of information.

For example, the Federal Information Security Management Act (FISMA) requires that agencywide information security programs include detailed plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. OMB requires that agencies prepare IT system security plans consistent with NIST guidance and that these plans contain specific elements, including rules of behavior for system use, required training in security responsibilities, personnel controls, technical security techniques and controls, continuity of operations, incident response, and system interconnection.

In addition, OMB requires that agency management officials formally authorize their information systems to process information and thereby accept the risk associated with their operation. This management authorization (accreditation) is to be supported by a formal technical evaluation (certification) of the management, operational, and technical controls established in an information system's security plan. NIST guidelines detail the requirements for certification and accreditation, including the requirement that the certification documents include the system security plan, risk assessment, and tested contingency plan. NIST guidance on recommended security controls for federal information systems also requires agencies to conduct risk assessments and to develop, implement, and test contingency plans for their systems.

Table 10 lists each of the security requirements that we evaluated and the results of our evaluation for each of the five data mining efforts included in this report.
In addition to the contact named above, Barbara Collier, Neil Doherty, Mirko Dolak, Nancy Glover, Alison Jacobs, Kathleen S. Lovett, David Plocher, James R. Sweetman, Jr., and Marcia Washington made key contributions to this report.
Data mining--a technique for extracting knowledge from large volumes of data--is being used increasingly by the government and by the private sector. Many federal data mining efforts involve the use of personal information, which can originate from government sources as well as private sector organizations. The federal government's increased use of data mining since the terrorist attacks of September 11, 2001, has raised public and congressional concerns. As a result, GAO was asked to describe the characteristics of five federal data mining efforts and to determine whether agencies are providing adequate privacy and security protection for the information systems used in the efforts and for the individuals potentially affected by them. The five data mining efforts we reviewed are used by federal agencies to fulfill a variety of purposes and draw on various information sources, including both information collected on behalf of the agency and information originally collected by other agencies and commercial sources. Although the systems differed, the general process each used was basically the same: each incorporates data input, data analysis, and results output. While the agencies responsible for these five efforts took many of the key steps required by federal law and executive branch guidance for the protection of personal information, they did not comply with all related laws and guidance. Specifically, most agencies notified the general public that they were collecting and using personal information and provided opportunities for individuals to review personal information when required by the Privacy Act. However, agencies are also required to provide notice to individual respondents explaining why the information is being collected; two agencies provided this notice, one did not, and two claimed an allowable exemption from this requirement because their systems were used for law enforcement.
In addition, agency compliance with key security requirements was inconsistent. Finally, three of the five agencies completed privacy impact assessments--important for analyzing the privacy implications of a system or data collection--but none of the assessments fully complied with Office of Management and Budget guidance. Until agencies fully comply with these requirements, they lack assurance that individual privacy rights are being appropriately protected.